Etfa Khurshid-AI-MDS

Artificial intelligence (AI) has emerged as a driving force transforming various aspects of our lives, from making chores easier to enhancing industrial efficiency. However, as this technological advancement continues to expand at an unprecedented rate, fears of unregulated growth, analogous to a ‘runaway horse’, have emerged.

To address such concerns, the United Nations Security Council (UNSC) held its first-ever meeting on the risks of AI in July 2023. The 15-member Council was briefed on various aspects of AI that pose a threat to international peace. One of the major aspects discussed was the potential application of AI by non-state entities, which could use it to cause instability in the form of the ‘3Ds’ – Destruction, Disinformation or Distress. Given the commercial availability of AI-enabled technologies such as unmanned aerial vehicles or drones, non-state actors could gain easy access to them. According to a 2022 Brookings analysis, there have been 440 reported cases of non-state actors using drones. Since non-state actors have limited resources compared to states, AI-enabled technologies act as a force multiplier and enhance their capacity to inflict damage.

Regarding regulation of AI, UN Secretary-General António Guterres proposed a global watchdog for the regulation, monitoring and enforcement of AI rules. One cannot deny the significance of the UN’s role in AI regulation, since it would include perspectives from countries around the world, establish norms for AI ethics, guidelines and standards, and ensure transparency in AI development and deployment. However, such a process could be time-consuming, as countries have different interests, and those with influence and power may coerce, cajole and lobby to shape decisions in their own favour.

This is not the first time that concerns regarding AI’s rapid development have been raised. In March 2023, leaders from various tech giants collectively called, in an open letter, for a pause on the development of powerful AI systems. The letter, which came after the announcement of GPT-4, called ‘on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.’ These concerns centre on the potential risks of increasingly potent AI, including challenges related to misuse, bias, and the inability to manage or comprehend such advanced systems.

In addition, apprehensions regarding data privacy and monitoring have grown as AI becomes more pervasive in our lives. AI’s ability to process enormous volumes of data raises alarm about who gets access to that data and how it is used, with the potential to turn it into a tool of oppression and to deepen divides between the haves and have-nots.

With the AI market expected to reach USD 407 billion by 2027, and around 67% of consumers worldwide relying on AI tools for information rather than traditional search engines or other means such as books, journals and articles, bias and disinformation have raised significant concerns as AI tools expand. Russian President Vladimir Putin once remarked, ‘Whoever has the best Artificial Intelligence will rule the world.’ Imagine the impact these AI tools can have on data bias, public narratives and risk assessments.

While the use of AI in everyday life is becoming increasingly significant, establishing regulations, addressing bias, protecting privacy, and fostering responsible development need to be prioritised. AI’s potential is vast, but so are the risks if we allow it to sprint unbridled. By steering AI development responsibly, we can harness its power for the betterment of humanity, ensuring a future where AI is a force for good and not a runaway horse trampling on ethical principles or undermining the very fabric of our societal values.

Etfa Khurshid Mirza is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at cass.thinkers@casstt.com.

Design Credit: Mysha Dua Salman



All views and opinions expressed or implied are those of the authors/speakers/internal and external scholars and should not be construed as carrying the official sanction of CASS.