ChatGPT



ChatGPT, the poster child of Generative Artificial Intelligence (GAI), has stormed the industry with its ability to transform productivity and automate repetitive tasks. It also remains in the spotlight as some states ban its use and experts raise concerns about potential misuse. Italy has become the first European country to temporarily ban ChatGPT, citing a leak of users’ conversations with the AI chatbot and their payment information, along with concerns over its lack of transparency in data usage. The Italian data protection authority, Garante, has ordered OpenAI, the company behind ChatGPT, to immediately stop processing the data of Italian citizens. OpenAI has 20 days to address these concerns or face a fine of USD 21.7 million.

Following Italy’s decision, the European Consumer Organisation called on authorities to investigate all significant AI chatbots. Privacy regulators from Germany, France and Ireland have reached out to their Italian counterparts to learn more about the ban. While the regulators, which are independent of EU governments, do not rule out the possibility of similar bans, governments have so far been lenient, suggesting that such actions may not be necessary. Previously, European nations adopted a united approach to data protection and devised the most elaborate framework of its kind, the General Data Protection Regulation (GDPR). In any case, these discussions will lead to initial regulations governing the future of GAI and provide a possible pathway for other states to follow.

Experts and industry leaders have different reasons for worry. They believe that systems like ChatGPT are too powerful to be fully understood, predicted or reliably controlled, even by their own creators. Several, including Elon Musk, Yuval Noah Harari, Steve Wozniak and Jaan Tallinn, have called upon all AI labs in an open letter ‘to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.’ They further argue that ‘this [6 month] pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.’

While ChatGPT has various built-in safety rails, users around the world have still reported biases, inaccuracies and misleading information generated by the AI chatbot. Moreover, methods to bypass these safety mechanisms, such as jailbreaking, have also been developed. OpenAI has been quick to fix these issues and has initiated a ‘Bug Bounty Program’ to engage security researchers in finding security vulnerabilities in the system. However, current trends indicate a preference for selling such vulnerabilities to the highest bidder rather than reporting them through official channels.

Beyond the challenges unique to ChatGPT, a much scarier dimension is the proliferation potential. Researchers at Stanford University have essentially replicated the ChatGPT model for just USD 600. Open-source information and GPT-3.5 itself played an important role in the development of this Large Language Model, Alpaca. This low cost, however, does not cover the extensive post-training that ChatGPT has gone through; hence, Alpaca has not been fine-tuned to be safe and harmless.

With such low barriers to entry, similar models can be employed by states and non-state actors alike, and without the necessary safeguards they can be used in devastating ways. Extremist political parties can use GAI tools to create hateful content targeting minorities, authoritarian governments can automate their propaganda campaigns, and terrorist outfits can use these tools to fast-track their recruitment drives.

Given such issues, national regulatory challenges might be easier to address than the risk of the unregulated spread of AI models across the world. This is for two main reasons: 1) it appears to be extremely cheap and easy to replicate these models; and 2) the existing international non-proliferation framework is not designed to cater for such use cases. An added difficulty would be bringing such a technology under the ambit of traditional Export Control Regimes (ECRs), which are designed to cater for either WMD proliferation or military and dual-use technologies. Existing ECRs also do not enjoy the universal trust that would be necessary to bring GAI under their regulatory fold, and such efforts are likely to be seen as the developed world’s attempt to prevent the democratisation of these tools.

In Pakistan, the use of ChatGPT not only offers opportunities to freelancers and content creators but also poses risks such as the spread of misinformation, hate speech and other forms of harmful content. A lack of awareness among users about the limitations and ethical implications of ChatGPT can lead to unintended consequences. Moreover, there may be a tendency to find shortcuts and rely solely on ChatGPT for professional and academic requirements. Ideally, the government should not only devise appropriate regulations to ensure responsible use and accountability, including on issues of data privacy and security, but also create awareness among the population to help it navigate these ethical dilemmas.

Sameer Ali Khan is a Senior Research Associate at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. He can be reached at cass.thinkers@casstt.com
