Ethical Challenges of AI



Research and development on Artificial Intelligence (AI) has been making strides for several years, but it has only recently started making headlines on an almost daily basis, given its impact on various aspects of day-to-day life. Its integration into the systems and processes of the digital world has brought unprecedented change to almost every field, from science and technology to art and literature. Figures such as Elon Musk have warned that advanced machine-learning systems can absorb data and self-improve, potentially leading to a singularity, a point at which AI progresses uncontrollably and outpaces human abilities, sooner than expected. In this context, there are growing debates over the ethical challenges AI creates and the ways in which it can and should be regulated.

The upside is that AI is aiding people in various fields such as finance, retail, manufacturing, transportation, education, entertainment, hospitality, health and security. In weather prediction, satellite systems utilise AI to process large amounts of data within seconds. The analytical prowess of AI enables accurate forecasting, making it feasible for planners to prepare for extreme weather events in advance. Similarly, AI is playing a tremendous role in healthcare and cybersecurity by processing vast amounts of data at a swift pace, facilitating quick service delivery and threat mitigation.

The downside is that while AI brings convenience into our lives, it also comes with certain drawbacks. The first challenge is bias. The quality of AI systems relies on the data they receive: if the collected data is biased, the system's output will be biased as well. The lack of transparency concerning the use of AI is another issue. These systems are intricate and hard to comprehend, and global forums are debating whether some data and algorithms should be made public and, if so, to what extent. Privacy is a further risk, as an AI system can collect vast amounts of data about individuals and organisations. Algorithms can twist stories and create narratives by manipulating information across the web, and Generative AI can be used for malicious purposes such as cybercrime, greatly disturbing security and stability within communities. Trustworthy AI presents another complex challenge: how to ensure that AI systems align with our values and respect our rights, and how to measure their performance and impact, remain open questions.

Hence, there are growing concerns about the increasing power of AI and the ethical dilemmas it brings. These worries touch on issues like unequal access to innovation and social divisions, challenging the idea of fair progress and shared benefits. As AI develops, it also raises questions about personal rights, such as identity and personal freedom.

States must integrate effective AI policies and regulations into their national security frameworks. These policies and regulations should prioritise the incorporation of AI-relevant curricula at the national level, which is crucial for fostering an agile and vigilant environment. This educational approach can empower citizens to recognise and address the ethical challenges posed by AI.

Since Canada devised the first national AI policy in 2017, around 60 countries have adopted some form of national AI strategy. While the formation of such policies is still in its early stages, aligning various states' AI policies could pave the way for easier international harmonisation. Such coordination would enable nations to engage collaboratively in detailed discussions and establish a shared agenda; the aim of a global AI policy should be a unified approach involving all stakeholders. Another mechanism is the integration of a catalogue of tools and metrics into AI systems. Such a catalogue can serve as a shared resource for exchanging methods, practices and mechanisms for implementing reliable AI, and can offer guidance on using these tools in different contexts and scenarios. One such catalogue has been initiated by the Alan Turing Institute in collaboration with the European Commission.

In conclusion, to regulate AI, nations need to develop their national AI strategies while recognising the importance of global cooperation on knowledge sharing, community building, capacity building and collaboration around AI standards. Such measures are crucial to establishing a mature, ethical AI governance ecosystem.

Ujala Siddiq Khan is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at cass.thinkers@casstt.com

Design credit: Mysha Dua Salman
