Inside Shadows of AI

In Artificial Intelligence, the inability to fully comprehend the inner workings of Deep Neural Networks (DNNs) and other deep learning systems is called the ‘Black Box Problem.’ Impenetrable and opaque, a Black Box model offers no account of how its artificial neurons transmit signals to transform input into output. Consequently, experts in Natural Language Processing (NLP) and computational linguistics, like Sam Bowman, grapple with the intricate complexities of these systems, often likened to ‘millions of numbers moving around a few hundred times a second’. While this opacity challenges our trust and underscores potential risks, the solution lies not in abandoning AI models but in striving to understand them better.

Among the diverse AI capabilities being harnessed in the 21st Century, Deep Neural Networks (DNNs) stand out as a subset of Machine Learning (ML). Deep Learning, a methodology utilising DNNs, is inspired by the structure of the human brain. These networks identify patterns by training on vast datasets, honing their accuracy through trial and error, and are particularly adept at automating classification tasks. A neural network consists of multiple layers of data-processing units. While the data fed into the initial layers is often labelled by humans, the intermediate layers, known as ‘hidden layers’, capture intricate patterns, allowing the automation of complex tasks with unprecedented efficiency.
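To make the layered structure concrete, the following is a minimal sketch in Python (using NumPy) of data flowing through an input layer, two hidden layers, and an output layer. The layer sizes, random weights, and activation function are illustrative assumptions, not drawn from any particular system.

```python
# A minimal sketch of a feedforward network. All sizes and weights are
# illustrative assumptions; real systems are trained, not randomly set.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)  # a common activation function

# Input layer -> two 'hidden layers' -> output layer
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Pass an input vector through every layer in turn."""
    for w in weights[:-1]:
        x = relu(x @ w)       # hidden layers transform the signal
    return x @ weights[-1]    # output layer produces the prediction

print(forward(rng.normal(size=4)))
```

Even in this toy version, the ‘reasoning’ lives entirely in the numeric weight matrices, which hints at why a trained network with millions of such numbers resists human inspection.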

At present, AI and ML demonstrate promising capabilities in low-stakes environments such as gaming and entertainment, where the risks are comparatively inconsequential. In high-stakes environments like healthcare, where decisions can have life-or-death implications, the role of ML models is both an opportunity and a challenge. For instance, cardiologists and endoscopists leveraging these decision-support tools must discern when a model’s insights are advantageous and when they might be misleading. Conducting a Cost-Benefit Analysis (CBA) would be substantially easier if we had a clearer understanding of the model’s decision-making process. A poignant example of algorithmic missteps is the Ofqual 2020 controversy, where an algorithm – not strictly an ML model – downgraded the scores of students from less-advantaged schools in the United Kingdom. Similar concerns have been raised in areas like recruitment and legal decision-making. With the increasing integration of ML models in such high-stakes domains, ensuring transparency is paramount.

In recognition of these pressing concerns, a Responsible AI movement has taken root, advocating for increased transparency within the often opaque and shadowy realms of AI. This push for clarity holds significance for both users and developers. When humans err, they can be held accountable; however, when decisions emanate from machines, accountability becomes elusive.

In the scientific community, growing concerns about AI transparency have spurred investments in both Explainable AI (XAI) and Interpretable AI. While entities like Google Cloud might sometimes use these terms interchangeably, they are distinct concepts. Interpretability serves as a design standard, assisting developers in understanding the cause and effect within AI systems, addressing the ‘how’ and ‘why’ of model behaviours. Explainability, conversely, often caters to end-users, elucidating why a specific decision was taken, the role of particular nodes in hidden layers, and their significance to overall model performance. Within XAI, one can either develop inherently transparent systems – often termed ‘white boxes’ – or systems that offer post-hoc explanations for their outputs. These explanations can be visualised using tools like heat maps or saliency maps, highlighting the influence of individual data points on final results. With this visual explanation, end-users can see for themselves if the model is even using information that is relevant to the task at hand.
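As a hedged illustration of one family of post-hoc explanations, the sketch below estimates a simple ‘saliency’ score by perturbing each input feature and measuring how much a stand-in model’s output shifts. The model function here is purely hypothetical; production tools typically rely on gradients or more sophisticated attribution methods.

```python
# A sketch of perturbation-based saliency: nudge each feature and see how
# much the output moves. The 'model' is a hypothetical stand-in scorer.
import numpy as np

def model(x):
    # Hypothetical black box: we only assume we can call it on inputs.
    return np.tanh(x @ np.array([0.9, -0.2, 0.05, 0.7]))

def saliency(x, eps=1e-4):
    base = model(x)
    scores = np.zeros_like(x)
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += eps
        scores[i] = abs(model(bumped) - base) / eps  # sensitivity per feature
    return scores

x = np.array([1.0, 0.5, -0.3, 2.0])
print(saliency(x))  # larger values flag more influential features
```

Rendering such scores over the pixels of an image or the words of a document is what produces the heat maps and saliency maps described above.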

Addressing the Black Box conundrum of AI remains a challenge. Notable players, such as Google’s DeepMind, are spearheading efforts to enhance AI interpretability, especially in medical domains. Despite current limitations, the essence of XAI revolves around ensuring AI systems don’t provide the ‘right answers for the wrong reasons.’

On the political front, the Black Box nature of AI is also a reason why governments around the world are seeking to regulate its use. Explainability will help assess whether companies are complying with regulations like the European Union’s AI Act and will help provide ‘meaningful information about the logic behind automated decisions’ as mandated under its General Data Protection Regulation. Regulations are often accused of slowing down development, but direction is perhaps more important than speed.

From a business perspective, there are concerns that the drive for XAI could hamper performance. However, AI researchers Rudin and Radin view the choice between accuracy and understandability as ‘a false dichotomy’ and insist that ‘interpretable AI models can always be constructed’. Given the implications, Black Box models should not underpin high-stakes decisions ‘unless no interpretable model can achieve the same level of accuracy’.
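By way of a rough, non-authoritative illustration of Rudin and Radin’s claim, the sketch below fits an interpretable model (a shallow decision tree, whose rules a human can read) and a black-box ensemble on the same public dataset and compares their accuracy. The dataset and models are assumptions chosen for convenience, and results will differ across tasks.

```python
# An illustrative accuracy comparison; not evidence for any single task.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Interpretable: a depth-3 tree yields a handful of human-readable rules.
interpretable = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
# Black box: an ensemble of 200 trees that is far harder to inspect.
black_box = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)

print("interpretable:", interpretable.score(X_te, y_te))
print("black box:   ", black_box.score(X_te, y_te))
```

On many tabular problems, the gap between the two is small or nonexistent, which is the empirical core of the ‘false dichotomy’ argument.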

Ultimately, the Responsible AI movement represents a commitment to justice and transparency, aiming to mitigate risks while promoting accountability. In our rapidly evolving world, it is crucial to ensure that technological advancements do not amplify algorithmic biases. As socio-technological challenges surface across different real-world settings, understanding these concerns is not just the job of engineers; users should be aware as well. Think of explainability in AI as the seatbelt in a car: ‘not necessary for the car to perform, but offers insurance if things crash’.

Bakhtawar Iftikhar is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at [email protected]

Design Credit: Mysha Dua Salman

