Ethical Challenges of AI

Research and development on Artificial Intelligence (AI) has been making strides for several years, but it has only recently started making headlines on an almost daily basis, given its impact on various aspects of day-to-day life. Its integration into the systems and processes of the digital world has brought unprecedented change to almost every field, from science and technology to art and literature. For instance, advanced machine-learning systems that can absorb data and self-improve may lead to a singularity—a point at which AI progresses uncontrollably, perhaps dangerously, and outpaces human abilities—sooner than expected, a risk that figures such as Elon Musk have warned about. In this context, there are debates regarding the ethical challenges AI creates and the ways in which it can and needs to be regulated.

The upside is that AI is aiding people in fields such as finance, retail, manufacturing, transportation, education, entertainment, hospitality, health and security. In weather forecasting, AI is used to process vast amounts of satellite data within seconds. Its analytical prowess enables accurate forecasts, making it feasible for planners to prepare for extreme weather events in advance. Similarly, AI is playing a tremendous role in healthcare and cybersecurity by processing vast amounts of data at speed, facilitating quick service delivery and threat mitigation.

The downside is that while AI brings convenience into our lives, it also comes with certain drawbacks. The first challenge is bias. The quality of an AI system's output depends on the data it receives: if the collected data is biased, the system's output will be biased as well. The lack of transparency concerning the use of AI is another issue. These systems are intricate and hard to comprehend, and global forums are debating whether some data and algorithms should be made public and, if so, to what extent. Privacy is a further risk, as an AI system can collect vast amounts of data about individuals and organisations, and algorithms can twist stories and create narratives by manipulating information across the web. Generative AI can also be used for malicious purposes such as sophisticated cybercrime, greatly disturbing security and stability within communities. Trustworthy AI presents another complex challenge: ensuring that AI systems align with our values and respect our rights, and establishing methods to measure their performance and impact, remain open questions.
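The bias problem described above can be illustrated with a minimal sketch (the data, group names and labels here are hypothetical, invented purely for illustration): a model that simply mirrors historical decisions will reproduce whatever skew those decisions contain.

```python
from collections import Counter

# Toy "training data" of past decisions, deliberately skewed
# against one group (hypothetical labels, for illustration only)
training = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "approve"),
    ("group_a", "deny"),
    ("group_b", "deny"), ("group_b", "deny"), ("group_b", "deny"),
    ("group_b", "approve"),
]

def majority_label(data, group):
    """Predict the most common historical outcome for a group."""
    labels = [label for g, label in data if g == group]
    return Counter(labels).most_common(1)[0][0]

# The "model" faithfully reproduces the skew in its training data
print(majority_label(training, "group_a"))  # -> approve
print(majority_label(training, "group_b"))  # -> deny
```

Real systems are far more complex, but the principle is the same: without deliberate auditing, the model has no way to distinguish historical prejudice from legitimate signal.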

Hence, there are growing concerns about the increasing power of AI and the ethical dilemmas it brings. These worries touch on issues like unequal access to innovation and social divisions, challenging the idea of fair progress and shared benefits. As AI develops, it also raises questions about personal rights, such as identity and personal freedom.

States must integrate effective AI policies and regulations into their national security frameworks. These policies should prioritise the incorporation of AI-relevant curricula at the national level, which is crucial for fostering an agile and vigilant environment. This educational approach can empower citizens to recognise and address the ethical challenges posed by AI.

Since Canada devised the first national AI policy in 2017, around 60 countries have adopted some form of national AI strategy. While the formation of such policies is still in its early stages, establishing harmony among various states’ AI policies could pave the way for easier international harmonisation. Such coordination would enable nations to engage collaboratively in detailed discussions to establish a shared agenda; the aim of a global AI policy should be a unified approach involving all stakeholders. Another mechanism is the integration of a catalogue of tools and metrics into AI systems. Such a catalogue can serve as a comprehensive resource through which anyone can exchange methods, practices and mechanisms for implementing reliable AI, and can guide users on how to apply these tools in different contexts and scenarios. One such catalogue has been initiated by the Alan Turing Institute in collaboration with the European Commission.

In conclusion, to regulate AI, nations need to develop their national AI strategies while recognising the importance of global cooperation on knowledge sharing, community building, capacity building and collaboration around AI standards. Such measures are crucial to establishing a mature, ethical AI governance ecosystem.

Ujala Siddiq Khan is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at [email protected]

Design credit: Mysha Dua Salman

