Bakhtawar Iftikhar-Generative AI-MDS

We live in an era where it is common to see AI-generated content with political undertones on social media. Generative Artificial Intelligence (AI) tools such as DALL·E and ChatGPT help create content in a format of one’s choosing: audio, video or text. The proliferation of these tools has raised concerns about their misuse to spread disinformation. Accordingly, this article frames the threat of AI-generated disinformation within the political sphere and offers recommendations to address the issue.

As per the World Economic Forum’s ‘Perception Survey of Global Risks by Severity’, AI-generated mis/disinformation ranks second in the current (2024) risk landscape. However, this phenomenon is neither new nor exclusive to AI. Historically, disinformation has always been used to further political agendas. For example, the Sophists acted as mouths-for-hire in Ancient Greece; the British used the printing press to ridicule Napoleon, portraying the military commander as unusually short; and sensational headlines spread anti-Spanish sentiment during the Spanish-American War. More recently, but before AI, doctored or photoshopped images were used to generate propaganda against rival political leaders. These tactics have thus taken root as part of political campaigns.

Now that AI tools are widely available, the phenomenon continues, only through different means. Recent examples from the global ‘Year of Elections’ include ‘a deepfake robocall of President Biden aimed to suppress New Hampshire Democrat voters.’ Similarly, in Pakistan, a proxy group reportedly used an AI-generated image of an opposition leader next to Adolf Hitler for propaganda against his party. Such incidents, in which lies or exaggerated claims are deliberately disseminated at scale, are only expected to increase.

Admittedly, AI does exacerbate the threat for two reasons: by increasing the scale and speed of disinformation, and by making it look more realistic and, thereby, more persuasive. These characteristics have led experts to fear that AI will ‘supercharge online disinformation campaigns.’

If people believe fake news, there is a risk of societal and political polarisation. When limited digital literacy is coupled with reliance on social media as a primary source of information, the barrage of disinformation curated through AI would fatally undermine the standing of truth in society. Even if people do not believe what they see or hear, it would still be harder for them to discern fact from fiction. The cacophony of noisy chants would overstimulate their cognition and hinder their ability to make informed choices, whether as citizens or as voters.

Therefore, it is necessary to exercise caution in this regard. Unravelling the complex web of half-truths created by AI is certainly a daunting task, and the challenge is twofold: to address disinformation itself, and to challenge the acceptability of propaganda as a modus operandi in political campaigns.

The first challenge can be addressed by focusing on the delivery systems, i.e., social media. Governments must engage with social media companies, given their control over these digital public spaces, and encourage them to invest more in fact-checking and ‘Trust and Safety’ departments. These measures are especially crucial ahead of the many elections taking place around the world in 2024. Even though significant progress has been made in this regard, Nighat Dad, a member of Meta’s Oversight Board, states that these companies give more importance to elections in Western democracies, while developing countries remain largely neglected.

Moreover, ‘societal antibodies’ may develop, whereby people become more skeptical of accepting the content they see as true. Individuals can also identify whether the content they are seeing is AI-generated by looking for giveaway signs such as repetitive patterns and unnaturally short sentences. However, Andy Carvin, a Managing Editor and Senior Fellow at the Digital Forensic Research Lab (DFRLab), notes that as AI improves, ‘it’s only a matter of time before it becomes nearly impossible to tell the difference between what’s human-generated and what’s AI-generated.’ For that scenario, investing in cryptographic techniques and watermarking to identify AI-generated content may prove effective.
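To illustrate the kind of cryptographic labelling referred to above, the sketch below shows one minimal way such a check could work: a generator attaches an authentication tag to the content it produces, and a verifier later confirms that the tag still matches. This is a simplified illustration using Python’s standard library and an assumed shared secret; the key, names and scheme are hypothetical and do not describe any deployed watermarking or provenance standard, which typically rely on public-key signatures embedded in content metadata.

```python
import hashlib
import hmac

# Hypothetical shared secret held by the content generator and the verifier.
# Real provenance schemes would use asymmetric (public-key) signatures instead.
SECRET_KEY = b"example-provenance-key"


def tag_ai_content(content: bytes) -> str:
    """Attach an authentication tag marking the content as AI-generated."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()


def verify_ai_tag(content: bytes, tag: str) -> bool:
    """Check whether the tag matches the content, i.e. the label is intact."""
    expected = tag_ai_content(content)
    # Constant-time comparison avoids leaking information about the tag.
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    image_bytes = b"...synthetic image bytes..."
    tag = tag_ai_content(image_bytes)

    print(verify_ai_tag(image_bytes, tag))         # True: label verifies
    print(verify_ai_tag(image_bytes + b"x", tag))  # False: content was altered
```

The design point is simply that a tag bound to the exact bytes of a piece of content can later prove it was labelled at creation, and that any alteration breaks the label; anything beyond that, such as who holds the keys and where the tag is embedded, is outside this sketch.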

However, these technical solutions are insufficient to address the larger socio-political problem. Thus, the second and more important task is to cultivate a healthy political culture, in which propaganda has little acceptability and political campaigns are issue-based instead. If political actors focus on building informed and authentic narratives, they would not only combat disinformation but also contribute to enhancing the civic capacity of the masses, an indispensable element of healthy democracies.

To conclude, even though AI poses significant challenges in the battle against disinformation, it is merely a magnifying mirror in which we catch a vivid glimpse of our own shortcomings. By overly attributing these ills to AI, we deny ourselves agency and evade responsibility, perhaps using technology as a cover to mask human folly.

Bakhtawar Iftikhar is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. The article was first published in The News International. She can be reached at [email protected].

Design Credit: Mysha Dua Salman


