Artificial intelligence (AI) has emerged as a driving force transforming various aspects of our lives, from making chores easier to enhancing industrial efficiency. However, as this technology continues to advance at an unprecedented rate, fears of unregulated growth, analogous to a ‘runaway horse’, have emerged.
To address such concerns, the United Nations Security Council (UNSC) held its first-ever meeting on the risks of AI in July 2023. The 15-member Council was briefed on various aspects of AI that pose a threat to international peace. One of the major aspects discussed was the potential application of AI by non-state entities, which could use it to cause instability in the form of the ‘3Ds’ – Destruction, Disinformation and Distress. The commercial availability of AI-enabled technologies gives non-state actors easy access to them, for instance unmanned aerial vehicles or drones: according to a 2022 Brookings analysis, there have been 440 reported cases of non-state actors using drones. Since non-state actors have limited resources compared to states, AI-enabled technologies act as a force multiplier and enhance their capability to inflict damage.
Regarding regulation of AI, UN Secretary-General António Guterres proposed a global watchdog for the regulation, monitoring and enforcement of AI rules. One cannot deny the significance of the UN’s role in AI regulation, since it would include perspectives from countries around the world, establish norms for AI ethics, guidelines and standards, and ensure transparency in AI development and deployment. However, such a process could be time-consuming given countries’ differing interests, and those with influence and power may coerce, cajole and lobby to shape decisions in their own interest.
This is not the first time that concerns regarding AI’s rapid development have been raised. In March 2023, leaders from various tech giants collectively signed an open letter calling for a pause on powerful AI systems. The letter, which came after the announcement of GPT-4, called ‘on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.’ These concerns centre on the potential risks of increasingly potent AI, including misuse, bias, and the incapacity to manage or comprehend such advanced systems.
In addition, apprehensions regarding data privacy and surveillance have grown as AI becomes more pervasive in our lives. AI’s ability to process enormous volumes of data raises alarms about who gets access to it and how it is used, with the potential to turn it into a tool of oppression and to deepen the divide between the haves and have-nots.
With the AI market expected to reach USD 407 billion by 2027, and around 67% of consumers worldwide relying on AI tools for information rather than traditional search engines or other means such as books, journals and articles, bias and disinformation are aspects that have raised significant concerns as AI tools expand. Russian President Putin said, ‘Whoever has the best Artificial Intelligence will rule the world.’ Imagine the impact these AI tools can have on data bias, public narratives and risk assessments.
As the use of AI in everyday life becomes increasingly significant, establishing regulations, addressing bias, protecting privacy, and fostering responsible development need to be prioritised. AI’s potential is vast, but so are the risks if we allow it to sprint unbridled. By steering AI development responsibly, we can harness its potential for the betterment of humanity, ensuring a future where AI is a force for good and not a runaway horse trampling on ethical principles or undermining the very fabric of our societal values.
Etfa Khurshid Mirza is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at cass.thinkers@casstt.com.
Design Credit: Mysha Dua Salman