ChatGPT

ChatGPT, the poster child of Generative Artificial Intelligence (GAI), has stormed the industry with its ability to transform productivity and automate repetitive tasks. It has also retained the spotlight as some states ban its use and experts raise concerns about potential misuse. Italy has become the first European country to temporarily ban ChatGPT, citing a leak of users’ conversations and payment information, along with concerns over the chatbot’s lack of transparency in data usage. The Italian data protection authority, Garante, has ordered OpenAI (the company behind ChatGPT) to immediately stop processing the data of Italian citizens. OpenAI has 20 days to address these concerns or face a fine of up to USD 21.7 million.

Following Italy’s decision, the European Consumer Organisation called on authorities to investigate all significant AI chatbots. Privacy regulators from Germany, France and Ireland have reached out to their Italian counterparts to learn more about the ban. While these regulators, which are independent of EU governments, do not rule out the possibility of similar bans, governments have so far been more lenient, suggesting that such actions may not be necessary. Previously, European nations adopted a united approach to data protection and devised the General Data Protection Regulation (GDPR), the most comprehensive framework of its kind. In any case, such discussions are likely to produce the initial regulations governing the future of GAI and provide a possible pathway for other states to follow.

Experts and industry leaders have their own reasons for worry. They believe that systems like ChatGPT are too powerful to be fully understood, predicted or reliably controlled even by their own creators. Several of them, including Elon Musk, Yuval Noah Harari, Steve Wozniak and Jaan Tallinn, have called upon all AI labs in an open letter ‘to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.’ They further argue that ‘this [6 month] pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.’

While ChatGPT has various built-in safety rails, users around the world have reported biases, inaccuracies and misleading information generated by the chatbot. Moreover, methods to bypass these safety mechanisms, such as jailbreaking, have also been developed. OpenAI has been quick to fix these issues and has initiated a ‘Bug Bounty Program’ to engage security researchers in finding vulnerabilities in the system. However, trends indicate a preference for selling such vulnerabilities to the highest bidder rather than reporting them through official channels.

Beyond the challenges unique to ChatGPT, a far more worrying dimension is the technology’s proliferation potential. Researchers at Stanford University have essentially replicated the ChatGPT model for just USD 600. Open-source information and GPT-3.5 itself played an important role in the development of this Large Language Model, named Alpaca. This low cost, however, does not include the extensive post-training that ChatGPT has gone through; hence, Alpaca has not been fine-tuned to be safe and harmless.

With such low barriers to entry, similar models can be employed by states and non-state actors alike; and without the necessary safeguards, they can be used in devastating ways. Extremist political parties can use GAI tools to create hateful content targeting minorities, authoritarian governments can automate their propaganda campaigns, and terrorist outfits can use these tools to fast-track their recruitment drives.

Given such issues, national regulatory challenges might be easier to address than the risk of unregulated spread of AI models across the world. This is because of two main reasons: 1) it appears to be extremely cheap and easy to replicate these models; and 2) the existing international non-proliferation framework is not designed to cater for such use cases. An added difficulty would be bringing such technology under the ambit of traditional Export Control Regimes (ECRs), which are designed to address either WMD proliferation or military and dual-use technologies. Existing ECRs also do not enjoy the universal trust that would be necessary to bring GAI under their regulatory fold, and such efforts are likely to be seen as the developed world’s attempt to prevent the democratisation of these tools.

In Pakistan, the use of ChatGPT not only offers opportunities to freelancers and content creators, but also poses risks such as the spread of misinformation, hate speech and other forms of harmful content. Lack of awareness among users about the limitations and ethical implications of ChatGPT can lead to unintended consequences. Moreover, there may be a tendency to find shortcuts and rely solely on ChatGPT for professional and academic requirements. Ideally, the government should not only look into appropriate regulations to ensure responsible use and accountability, including on issues of data privacy and security, but also create awareness among the population to help it navigate these ethical dilemmas.

Sameer Ali Khan is a Senior Research Associate at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. He can be reached at cass.thinkers@casstt.com

All views and opinions expressed or implied are those of the authors/speakers/internal and external scholars and should not be construed as carrying the official sanction of CASS.