Inside Shadows of AI

In Artificial Intelligence, the inability to fully comprehend the inner workings of a Deep Neural Network (DNN) or deep learning systems is called the ‘Black Box Problem.’ Being impenetrable and opaque, the AI Black Box does not explain how exactly artificial neurons transmit signals to transform input into output. Consequently, experts in Natural Language Processing (NLP) and computational linguistics, like Sam Bowman, grapple with the intricate complexities of these systems, often likened to ‘millions of numbers moving around a few hundred times a second’. While this opacity challenges our trust and underscores potential risks, the solution lies not in abandoning AI models but in striving to understand them better.

Among the diverse AI capabilities being harnessed in the 21st Century, Deep Neural Networks (DNNs) stand out as a subset of Machine Learning (ML). Deep Learning, a methodology utilising DNNs, is inspired by the human brain’s structure. These networks identify patterns by training on vast datasets, honing their accuracy through trial and error, and are particularly adept at automating classification tasks. A neural network consists of multiple layers of data-processing units. While the training data fed into the network is often labelled by humans, the intermediate layers, known as ‘hidden layers’, capture intricate patterns on their own, allowing the automation of complex tasks with unprecedented efficiency.
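To make the idea of stacked layers and ‘hidden layers’ concrete, the sketch below builds a small feedforward network in PyTorch. The layer sizes, the 20 input features, and the three-class task are illustrative assumptions, not a description of any particular system discussed here.

```python
# A minimal sketch of a deep neural network of the kind described above:
# stacked layers of processing units, with hidden layers in between that
# learn patterns from labelled examples through trial and error.
# All sizes and the three-class task are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),  # input layer: 20 assumed features per example
    nn.ReLU(),
    nn.Linear(64, 64),  # hidden layer: internal patterns no human labels directly
    nn.ReLU(),
    nn.Linear(64, 3),   # output layer: scores for 3 assumed classes
)

# One trial-and-error training step on stand-in data.
x = torch.randn(32, 20)            # a batch of 32 examples
y = torch.randint(0, 3, (32,))     # human-provided labels for those examples
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                    # compute how each weight should change
torch.optim.SGD(model.parameters(), lr=0.01).step()  # nudge the weights
```

Stacking more hidden layers lets the network capture richer patterns, but it also multiplies the weights whose roles a human must try to interpret, which is precisely the opacity described above.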

At present, AI and ML demonstrate promising capabilities in low-stakes environments such as gaming and entertainment, where the risks are less consequential. In high-stakes environments like healthcare, where decisions can have life-or-death implications, the role of ML models is both an opportunity and a challenge. For instance, cardiologists and endoscopists leveraging these decision-support tools must discern when a model’s insights are advantageous and when they might be misleading. Conducting a Cost-Benefit Analysis (CBA) would be substantially easier if we had a clearer understanding of the model’s decision-making process. A poignant example of algorithmic missteps is the Ofqual 2020 controversy, where an algorithm – not strictly an ML model – downgraded the scores of students from less-advantaged schools in the United Kingdom. Similar concerns have been raised in areas like recruitment and legal decision-making. With the increasing integration of ML models in such high-stakes domains, ensuring transparency is paramount.

In recognition of these pressing concerns, a Responsible AI movement has taken root, advocating for increased transparency within the often opaque and shadowy realms of AI. This push for clarity holds significance for both users and developers. When humans err, they can be held accountable; however, when decisions emanate from machines, accountability becomes elusive.

In the scientific community, growing concerns about AI transparency have spurred investments in both Explainable AI (XAI) and Interpretable AI. While entities like Google Cloud might sometimes use these terms interchangeably, they are distinct concepts. Interpretability serves as a design standard, assisting developers in understanding the cause and effect within AI systems, addressing the ‘how’ and ‘why’ of model behaviours. Explainability, conversely, often caters to end-users, elucidating why a specific decision was taken, the role of particular nodes in hidden layers, and their significance to overall model performance. Within XAI, one can either develop inherently transparent systems – often termed ‘white boxes’ – or systems that offer post-hoc explanations for their outputs. These explanations can be visualised using tools like heat maps or saliency maps, highlighting the influence of individual data points on final results. With this visual explanation, end-users can see for themselves if the model is even using information that is relevant to the task at hand.
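As a concrete illustration of the post-hoc, saliency-style explanations mentioned above, the sketch below computes a simple gradient-based saliency map: the gradient of the model’s top output score with respect to the input indicates which input values most influenced that score. The model and input here are hypothetical stand-ins; practical tools apply the same idea to produce heat maps over images or documents.

```python
# A minimal sketch of a post-hoc, gradient-based saliency map.
# It asks: which parts of this input most influenced the model's decision?
# The model and input are hypothetical stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
model.eval()

x = torch.randn(1, 20, requires_grad=True)  # one input whose prediction we want explained
top_score = model(x)[0].max()               # score of the model's preferred class
top_score.backward()                        # gradient of that score w.r.t. the input

saliency = x.grad.abs().squeeze()           # larger value = more influence on the decision
print(saliency)                             # for images, these values are drawn as a heat map
```

An end-user or developer can then check whether the highlighted inputs are actually relevant to the task, which is exactly the sanity check described above.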

Addressing the Black Box conundrum of AI remains a challenge. Notable players, such as Google’s DeepMind, are spearheading efforts to enhance AI interpretability, especially in medical domains. Despite current limitations, the essence of XAI revolves around ensuring AI systems do not provide the ‘right answers for the wrong reasons’.

On the political front, the Black Box nature of AI is also a reason why governments around the world are seeking to regulate its use. Explainability would help assess whether companies are complying with regulations like the European Union’s AI Act and would help provide ‘meaningful information about the logic behind automated decisions’, as mandated under the EU’s General Data Protection Regulation. Regulations are often accused of slowing down development, but direction is perhaps more important than speed.

From a business perspective, there are concerns that the drive for XAI could hamper performance. However, AI researchers Rudin and Radin view the choice between accuracy and understandability as ‘a false dichotomy’ and insist that ‘interpretable AI models can always be constructed’. Given the implications, Black Box models should not underpin high-stakes decisions ‘unless no interpretable model can achieve the same level of accuracy’.

Ultimately, the Responsible AI movement represents a commitment to justice and transparency, aiming to mitigate risks while promoting accountability. In our rapidly evolving world, it is crucial that technological advancements do not amplify algorithmic biases. As socio-technological challenges surface across different real-world settings, understanding these concerns is not just the job of engineers; users should be aware as well. Think of explainability in AI as the seatbelt in a car: ‘not necessary for the car to perform, but offers insurance if things crash’.

Bakhtawar Iftikhar is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at cass.thinkers@casstt.com

Design Credit: Mysha Dua Salman


