‘Your eyes can deceive you; don’t trust them’ is a famous line spoken by Obi-Wan Kenobi in the science-fiction film Star Wars. The assertion seems closely related to deepfake technology, which has created a buzz given its rather astounding capabilities. Synthetically generated images, videos, text, and audio, created using powerful Artificial Intelligence (AI) to manipulate digital content, are becoming increasingly common. The technology is not entirely bad: it is being used to assist people with speech impairments and to create digital reconstructions of criminal suspects for identification, among other applications. However, from swapping the faces of ordinary people with those of celebrities and politicians, to bringing famous deceased figures back for educational or commercial purposes, to forging audio for financial fraud, deepfakes are also playing a role in distorting reality. They have emerged as a source of concern across the board.
As AI-driven content continues to proliferate across the digital landscape, the ease with which deepfakes can manipulate information is alarming. They can influence not only how individuals perceive events but also how they act on those perceptions. This is particularly worrisome for countries whose societies are politically or religiously polarised, where the spread of fake content could be disastrous. For instance, a fake video of a political leader being assassinated, or an image of a particular ethnic group being attacked, circulated on platforms like WhatsApp, could incite violence and chaos.
Bans to curb a technology that is designed to adapt and improve could prove ineffective and are not a pragmatic course of action given the rapid pace of the technological race. What is needed instead is timely and effective regulation. China recently released a set of regulations in this regard. Issued jointly by the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology (MIIT), and the Ministry of Public Security (MPS), the regulations have been in effect since 10 January 2023. They aim to strengthen the integration and regulation of internet services, safeguard national security, and protect citizens’ legitimate rights and interests, and they call on deepfake service providers and supporters to abide by laws and regulations in key areas. The CAC, as regulator, is responsible for enforcing the 25 articles with the help of local telecommunications authorities, public security departments, and local network information departments. Under the regulations, service providers must obtain the owner’s consent before their content can be altered by any deep synthesis technology. A notification system must inform users when deepfake technology has been used to alter their content. Deep synthesis services cannot be used for the dissemination of fake news, and altered content must be clearly labelled or tagged to avoid confusion. Users’ real identities must be authenticated before they are given access to deep synthesis services or technology. The measures also stipulate that deepfake technology must not be used in any activity banned under existing laws or administrative regulations, or in anything that conflicts with national security and interests, disrupts the economy, or adversely affects the country’s national image. The regulations further call for establishing a complaint system to contain the spread of fake news.
The regulations also direct service providers and supporters to review and inspect their synthesis algorithms and to carry out continuous security assessments in accordance with relevant state regulations. Violations are subject to penalties and criminal proceedings.
Regulations are needed to ensure a healthy digital landscape that promotes technological advancement and reduces the risks associated with platforms that use AI or Machine Learning (ML) to modify online content. However, colossal challenges stand in the way of enforcement. For instance, more clarity is needed on the process of obtaining an owner’s consent to modify their content, the regulations’ transparency mechanisms require further elaboration, and the classification of ‘fake news’ remains ambiguous. Freedom of speech could also become a point of conflict whenever such regulation is implemented. Moreover, the technology underlying deepfakes will always be accessible to individuals, which suggests that unlawful deepfakes will remain a pressing issue. Nevertheless, China has made a timely attempt to curb the risks of generative AI tools.
The effectiveness of China’s legislation against the impending threats of deepfakes remains to be seen. If the Chinese model proves successful, however, it could provide a framework that other states use as a reference to develop more effective strategies to detect, identify, and regulate deepfakes. With time, more layers could be added to the regulations to make them robust.
There is no doubt that deepfakes will become more sophisticated, popular, and accessible in the future. It is time for states to start investing in enforcement mechanisms to mitigate their dark side.
The writer is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. The article was first published in International Policy Digest. She can be reached at: cass.thinkers@casstt.com.