Deepfakes
By Shaza Arif

‘Your eyes can deceive you; don’t trust them’ is a famous line spoken by Obi-Wan Kenobi in the science-fiction film Star Wars. The assertion applies closely to deepfake technology, which has created a buzz with its rather astounding capabilities. Synthetically generated images, videos, text, and audio, created with powerful Artificial Intelligence (AI) to manipulate digital content, are becoming increasingly common. The technology is not entirely bad: it is being used to assist people with speech impairments and to create digital reconstructions of criminal suspects for identification. However, from swapping the faces of ordinary people with those of celebrities and politicians, to resurrecting famous deceased figures for educational or commercial purposes, to forging audio for financial fraud, deepfakes are also playing a role in distorting reality. They have emerged as a source of concern across the board.

As AI-driven content proliferates across the digital landscape, the ease with which deepfakes can manipulate information is alarming. They can influence not only what individuals perceive but also how they act on those perceptions. This is particularly worrisome for countries where societies are politically or religiously polarised and the spread of fake content could be disastrous. For instance, a fake video of a political leader being assassinated, or an image of a particular ethnic group being attacked, could incite violence and chaos when circulated on platforms like WhatsApp.

Bans to curb a technology that is designed to adapt and improve are likely to prove ineffective, and they are not a pragmatic course of action given the pace of the technological race. What is needed instead is timely and effective regulation. China recently released a set of regulations in this regard. Jointly issued by the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology (MIIT), and the Ministry of Public Security (MPS), the regulations have been in effect since 10 January 2023. They aim to strengthen the integration and regulation of internet services, safeguard national security, and protect citizens’ legitimate rights and interests, and they require deepfake service providers and supporters to abide by laws and regulations in key areas. The CAC, as regulator, is responsible for enforcing the 25 articles with the help of local telecommunications authorities, public security departments, and local network information departments.

Under the regulations, service providers must obtain the consent of the owner before their content can be used by any deep synthesis technology. Synthetic content must also carry a notification informing users that deepfake technology has been used to alter it. Deep synthesis services cannot be used to disseminate fake news, and altered content must be clearly labelled or tagged to avoid confusion. A user’s real identity must be authenticated before they are granted access to deep synthesis services. The measures further prohibit the use of deepfake technology in any activity banned under existing laws or administrative regulations, or in anything that conflicts with national security and interests, disrupts the economy, or damages the country’s national image. The regulations also call for a complaint system to contain the spread of fake news, and they direct service providers and supporters to review and inspect their synthesis algorithms and carry out continuous security assessments in accordance with relevant state regulations. Violations are subject to punishment and criminal proceedings.

Regulation is needed to ensure a healthy digital landscape that promotes technological advancement while reducing the risks associated with platforms that use AI or Machine Learning (ML) to modify online content. However, colossal challenges stand in the way of enforcement. For instance, the process of obtaining an owner’s consent to modify their content needs more clarity, and the regulations’ transparency mechanisms require further elaboration. What counts as ‘fake news’ also remains ambiguous, and freedom of speech could come into conflict with any regulation once it is implemented. Moreover, the technology underlying deepfakes will always be accessible to individuals, which suggests that unlawful deepfakes will remain a pressing issue. Nevertheless, China has made a timely attempt to curb the risks of generative AI tools.

The effectiveness of China’s legislation against the impending threat of deepfakes remains to be seen. If the Chinese model proves successful, however, it could provide a framework that other states can draw on to develop more effective strategies to detect, identify, and regulate deepfakes. With time, more layers could be added to the regulations to make them more robust.

There is no doubt that deepfakes will become more sophisticated, popular, and accessible in the future. It is time for states to start investing in enforcement mechanisms to mitigate their dark side.

The writer is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. The article was first published in International Policy Digest. She can be reached at: [email protected].

