ChatGPT

ChatGPT, the poster child of Generative Artificial Intelligence (GAI), has stormed the industry with its ability to transform productivity and automate repetitive tasks. It also retains the spotlight as some states ban its use and experts raise concerns about potential misuse. Italy has become the first European country to temporarily ban ChatGPT, citing a leak of users’ conversations with the AI chatbot and payment information, along with concerns over its lack of transparency in data usage. The Italian data protection authority, Garante, has ordered OpenAI (the company behind ChatGPT) to immediately stop processing the data of Italian citizens. OpenAI has a deadline of 20 days to address these concerns or face a fine of USD 21.7 million in case of failure.

Following Italy’s decision, the European Consumer Organisation called on authorities to investigate all significant AI chatbots. Privacy regulators from Germany, France and Ireland have reached out to their Italian counterparts to learn more about the ban. While these regulators, which are independent of EU governments, do not rule out the possibility of similar bans, governments have so far been lenient and point out that such actions may not be necessary. European nations previously adopted a united approach to data protection and devised the world’s most elaborate framework, the General Data Protection Regulation (GDPR). In any case, such discussions will produce the initial regulations that govern the future of GAI and provide a possible pathway for other states to follow.

Experts and industry leaders have their own reasons for worry. They believe that systems like ChatGPT are too powerful to be fully understood, predicted or reliably controlled, even by their own creators. Several, including Elon Musk, Yuval Noah Harari, Steve Wozniak and Jaan Tallinn, have called upon all AI labs in an open letter ‘to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.’ They further argue that ‘this [6 month] pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.’

While ChatGPT has various built-in safety rails, users around the world have still reported biases, inaccuracies and often misleading information generated by the AI chatbot. Moreover, methods to bypass these safety mechanisms, such as jailbreaking, have also been developed. OpenAI has been quick to fix these issues and has initiated a ‘Bug Bounty Program’ to engage security researchers in finding security vulnerabilities in the system. However, trends indicate a preference for selling such vulnerabilities to the highest bidder rather than going through official channels.

Beyond the unique challenges associated with ChatGPT, a far scarier dimension is the proliferation potential. Researchers at Stanford University have essentially replicated the ChatGPT model for just USD 600. Open-source information and GPT-3.5 itself played an important role in the development of this Large Language Model, Alpaca. This low cost does not include the extensive post-training that ChatGPT has gone through; hence, Alpaca has not been fine-tuned to be safe and harmless.

With such low barriers to entry, similar models can be employed by states and non-state actors alike; and without the necessary safeguards, they can be used in devastating ways. Extremist political parties can use GAI tools to create hateful content targeting minorities, authoritarian governments can automate their propaganda campaigns, and terrorist outfits can use these tools to fast-track their recruitment drives.

Given such issues, national regulatory challenges might be easier to address than the risk of the unregulated spread of AI models across the world. This is for two main reasons: 1) it appears to be extremely cheap and easy to replicate these models; and 2) the existing international non-proliferation framework is not designed to cater for such use cases. An added difficulty would be bringing such technology under the ambit of traditional Export Control Regimes (ECRs), which are designed to cater for either WMD proliferation or military and dual-use technologies. Existing ECRs also do not enjoy the universal trust that would be necessary to bring GAI under their regulatory fold, and such efforts are likely to be seen as the developed world’s attempt to prevent the democratisation of these tools.

In Pakistan, the use of ChatGPT not only offers opportunities to freelancers and content creators, but also poses risks such as spreading misinformation, hate speech and other forms of harmful content. A lack of awareness among users about the limitations and ethical implications of ChatGPT can lead to unintended consequences. Moreover, there may be a tendency to take shortcuts and rely solely on ChatGPT for professional and academic requirements. Ideally, the government should not only look into appropriate regulations to ensure responsible use and accountability, including on issues of data privacy and security, but also create awareness among the population to navigate the ethical dilemmas.

Sameer Ali Khan is a Senior Research Associate at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. He can be reached at [email protected]

