ChatGPT

Nearly a century ago, it was thought that, given the vast improvements in technology then underway, people of the future would face the dilemma of what to do with all their spare time, rather than worry about what job to do and for what pay. This prognostication came from Keynes himself, whose musings on a future economy would have seemed quaint to those who only recently joined the Great Resignation, having realized after the Covid-19 pandemic that there was more to life than a stifling career. Before Keynes, it was Marx who saw the end goal of communism as one in which the people would break the shackles of capitalism, cease to exchange their labor for subsistence, and instead pursue the higher and more perfect ideals of life.

It was in large part the neoliberalization of society after the 1980s that kept such dreams from being realized: workers were pushed into ever greater precarity, and the productivity gains from technology were captured largely at the top, while the majority continued to struggle to eke out scant more than the rudiments of an existence. They continued, much as before, to exchange their labor for wages, even as many transitioned into forms of labor that were more “creative” in nature, requiring higher-order thinking, advanced skills, and inputs with no material output per se.

Under neoliberalism, it appeared that a creative class would have to (at least pretend to) enjoy exchanging its intellectual labor for the trinkets that the 1% threw its way. It was the lame and boorish blue-collar worker, they thought, who was redundant in such a paradigm, and who would thus need to upskill to catch up to the smart and savvy white-collar worker, who would be the man (and more likely, woman) of the future, toiling intellectually in the service of the 1%. Blue-collar work was being shipped overseas, particularly to the Far East, where labor costs were lower and transport was subsidized by cheap global energy. Everyone would be better off from this trade, neoliberals said, as America turned into one giant Silicon Valley where cute and snarky techpreneurs would *innovate* their way from garages to the stars.

At the same time, the “B*S jobs” phenomenon (as identified by David Graeber) ballooned to catastrophic proportions: if one asked people whether their job could or should exist, they themselves would acknowledge the futility of their work. Yet there were unspoken rules against articulating this observation that one’s job was a B*S job, done for the sake of being done. Some initially saw this as a paradox, since it was in the socialist countries that “we pretended to work, and they pretended to pay us”; it was not supposed to be a capitalist symptom. Yet neoliberal capitalism was in fact rife with the malaise Graeber identified. Amid these smokescreens of neoliberalism, the last redoubt was thought to be that of the knowledge worker, immune from the vagaries of the “trashy” blue-collar life, which only a boor obsessed with Marx would have appreciated for its productive nature. Mechanization had, in any case, reduced the need for labor in factories, and it was in the air-conditioned office, where young workers in flip-flops sat at their hip desks, that mankind would achieve its pinnacle, neoliberalism said.

Zizek called the proponents of such mantras the fukuyamists (after Francis Fukuyama), who shared an “end of history” mindset in which the current system was the ideal one, and all that remained was to tinker at the margins while leaving the system intact. The fukuyamists were (and are) essentially apologists for the billionaire class, which has captured essentially all of the wealth produced under neoliberalism. They would have continued to do so indefinitely, with an armada of white-collar hipster-janissaries at their command, were it not for the torrent of disruptions caused by artificial intelligence from late 2022 onwards. The rapid dissemination of generative AI technologies caused tremors on social media, with many feeling that the vast pace of growth in AI capabilities would change the nature of white-collar work more quickly than was thought, and that blue-collar work would in fact not shift as quickly as feared, particularly as the “re-shoring” of jobs to formerly industrial countries began with a protectionist frenzy after 2016.

Although the work behind large language models (LLMs) and the applied forms of generative AI had been underway much earlier, the large tech giants held this technology back from public usage until 2022. OpenAI broke ranks with its tech bro peers and launched a slew of apps, including DALL-E and ChatGPT (built on GPT-3.5, with GPT-4 soon after). In the meantime, a host of other AI apps emerged in fields such as text-to-image, voice-to-text, and other cross-modal platforms, with initially stupefying results. The capabilities that these AIs displayed raised alarm bells among the incumbent tech giants, who had kept their own capabilities restrained as part of an oligopolistic scheduling of releases. They inserted “AI” wherever they could, with much less impressive results, such as in search engines. At the same time, hocus-pocus buzzwords such as Web3 and the Metaverse, which had been blared at shareholders for several years, fell by the wayside and made room for the more “substantive” results promulgated by OpenAI and other offerings such as Stable Diffusion and Midjourney.

The targets of such work included copywriters, graphic designers, and other professions occupied by a youthful cohort of early Gen Z and late millennials. These roles required a creative bent and formed part of the marketing superstructure of capitalism. But for a fraction of the cost, and with considerably less effort and time, generative AI apps offered alternatives in some (though not all) cases. This raised alarm bells among such white-collar workers, who had been handed the short end of the stick in late capitalism in any case, with a housing bubble, the 2008 crisis, work precarity, weak labor power, the Covid-19 pandemic, and the post-Covid financial squeeze all adding to their jittery coming of age. Whereas some had sought out braver alternatives such as blockchain-based investments or YOLO capitalism, as far as their incomes were concerned, many young white-collar workers began to find themselves eminently replaceable, and this compounded their latent anxiety. They did not have enough capital to withstand the emergence of generative AI, be it in terms of work experience, distinctive CVs, education, or financial savings.

The philosopher Nassim Taleb put it boldly in a tweet: “Let me be blunt. Those who are afraid of AI feel deep down that they are impostors & have no edge. If you have a 1) clear mind, 2) a deep, not just cosmetic, understanding of your specialty, 3) and/or are original enough to reinvent yourself when needed, AI will be your friend.” But many among this young cohort do not have a clear mind, a deep understanding of their specialty, or the requisite originality (originality is, by definition, rare). In such conditions, do the fukuyamists have any clear roadmap for what to do with the formerly comfortable creative class? It appears that they do not, and their frightened reactions are telling, as in the moratorium on AI research that they called for, which is neither implementable nor feasible. Their proposed moratorium signals the urgency they feel, whether as competitors to the leading disruptive AI players or as “concerned citizens.”

Governments have long been deeply involved with AI in a variety of applications, including defense technologies, but generative AI is somewhat different in that it poses a human security concern: the massive redundancy it creates in the creative class. The threat may be quite real. An appraisal of ChatGPT by Google engineers determined that it would, in theory, be suitable as an entry-level coder. Amazon employees who evaluated ChatGPT found that it did quite well across several functional areas: “very good” at answering customer support questions, “great” at building training documents, and “very strong” at responding to queries about corporate strategy. There are kinks in the program as it stands, the most egregious of which include making things up (misinformation), basic arithmetic errors, and incorrect responses to coding challenges. But new iterations will likely move well past these.

The eliminations of white-collar jobs could number as many as 300 million, according to a Goldman Sachs estimate, even as generative AI may add 7% to global GDP. The redundancy of educated youth is not to be taken lightly: among the causes of many conflicts (not least World War I) was an overeducated, underemployed class of restive youth in central Europe. The implications of eliminating such a class do not extend only to the first world; they also pose an imminent threat to the freelancing armies of the third world, in countries such as Pakistan where workers are eminently replaceable by ChatGPT and similar apps. Disruption is therefore imminent, and it cannot simply be absorbed through reskilling, reeducation, or retraining. Retrain for what? Reeducate in what? The fukuyamists have clearly lost the plot, and Gen Z cannot be accommodated into the longstanding equation of exchanging labor (even intellectual or creative labor) for subsistence.

This has been acknowledged, at times grudgingly, by AI experts, who have noted the need for universal basic income (UBI) programs to compensate for the disruption caused by AI. Yet the political class in the first world is still not amenable to such programs, particularly while post-Covid monetary contraction is underway. Something has to give, and it is not clear what. A full-blown arms race in AI has been underway for quite some time, and it is only accelerating. But generative AI for its own sake leaves out the social condition in which younger generations are grappling with a fragile world economy. There are no easy answers that adhere to fukuyamist thinking. The future of jobs is in peril; where do we take it from here? The premise of most dystopian science fiction lies in the gap between science and social science: whereas science advances in a linear fashion, social science is more cyclical and prone to reversion. Dystopias involve strong science and weak social science. This is the condition laid out by generative AI, its immense benefits notwithstanding.

For those looking out from a comparatively comfortable vantage point, this author included, Taleb’s blunt statement rings true: a clear mind, deep knowledge, and original thought will outlast the large language models, because they represent the crux of the training data on which LLMs feed. Generative AI is probabilistic, but its best work still comes from drawing upon the best human minds. Those seated in such positions are also the ones who must venture answers to the emergent conditions. It cannot be that, in addition to the existential public value destruction of climate change, conventional wars, disinformation, political polarization, economic fragility, and social disharmony, yet another sword of Damocles now hangs over the generation that is only beginning to mature.

As such, we must appeal to such “comfy” minds to ponder where generative AI and other AI-based technologies are to be led in a social science sense, not just a scientific one, and they must do so while being mindful of the fukuyamist temptations that linger in an economic structure still deeply scarred by neoliberalism’s radical notions. Is seeking a pause in such research a feasible option? Should there be an international enforcement body or treaty organization to grapple with such technologies? Is AI still a national-level issue, given its ramifications across the globe? Are politicians ready to contend with complexities that scientists themselves do not fully understand? Is it so hard for us to imagine Keynes’ prognostications being realized a hundred years later? Such are the difficult questions that emerge in this new era.

Dr. Usman W. Chohan is Advisor (Economic Affairs and National Development) at the Centre for Aerospace & Security Studies, Islamabad, Pakistan. He can be reached at [email protected]

