Inside Shadows of AI


In Artificial Intelligence, the inability to fully comprehend the inner workings of a Deep Neural Network (DNN) or other deep learning system is called the ‘Black Box Problem.’ Impenetrable and opaque, the AI Black Box offers no account of exactly how artificial neurons transmit signals to transform input into output. Consequently, experts in Natural Language Processing (NLP) and computational linguistics, such as Sam Bowman, grapple with the intricate complexity of these systems, often likened to ‘millions of numbers moving around a few hundred times a second’. While this opacity challenges our trust and underscores potential risks, the solution lies not in abandoning AI models but in striving to understand them better.

Among the diverse AI capabilities being harnessed in the 21st Century, Deep Neural Networks (DNNs) stand out as part of the broader field of Machine Learning (ML). Deep Learning, the methodology built on DNNs, is inspired by the structure of the human brain. These networks identify patterns by training on vast datasets, honing their accuracy through trial and error, and are particularly adept at automating classification tasks. A neural network consists of multiple layers of data-processing units. While the training data fed into the input layer is often labelled by humans, the intermediate layers, known as ‘hidden layers’, capture intricate patterns of their own, allowing complex tasks to be automated with unprecedented efficiency.
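
To make this layered structure concrete, the sketch below builds a small feed-forward classifier. It is a minimal, illustrative example and assumes the PyTorch library; the article names no framework, and the layer sizes, activation functions and two-class output are arbitrary choices for demonstration.

```python
# A minimal sketch of a feed-forward network with hidden layers (assumes PyTorch).
# Layer sizes and the two-class output are illustrative, not prescriptive.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),   # input layer: 20 features from (human-labelled) training data
    nn.ReLU(),
    nn.Linear(64, 32),   # 'hidden layers' that capture intermediate patterns
    nn.ReLU(),
    nn.Linear(32, 2),    # output layer: scores for two classes
)

x = torch.randn(8, 20)   # a batch of 8 example inputs
logits = model(x)        # forward pass: input -> hidden layers -> output
print(logits.shape)      # torch.Size([8, 2])
```

In practice, such a model is trained by repeatedly comparing its outputs against labelled examples and adjusting its weights, which is the ‘trial and error’ described above.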

At present, AI and ML demonstrate promising capabilities in low-stakes environments such as gaming and entertainment, where the consequences of error are limited. In high-stakes environments like healthcare, where decisions can have life-or-death implications, the role of ML models is both an opportunity and a challenge. For instance, cardiologists and endoscopists leveraging these decision-support tools must discern when a model’s insights are advantageous and when they might be misleading. Conducting a Cost-Benefit Analysis (CBA) would be substantially easier if we had a clearer understanding of a model’s decision-making process. A poignant example of algorithmic missteps is the Ofqual 2020 controversy, in which an algorithm – not strictly an ML model – downgraded the scores of students from less-advantaged schools in the United Kingdom. Similar concerns have been raised in areas like recruitment and legal decision-making. With the increasing integration of ML models in such high-stakes domains, ensuring transparency is paramount.

In recognition of these pressing concerns, a Responsible AI movement has taken root, advocating for increased transparency within the often opaque and shadowy realms of AI. This push for clarity holds significance for both users and developers. When humans err, they can be held accountable; however, when decisions emanate from machines, accountability becomes elusive.

In the scientific community, growing concerns about AI transparency have spurred investments in both Explainable AI (XAI) and Interpretable AI. While entities like Google Cloud might sometimes use these terms interchangeably, they are distinct concepts. Interpretability serves as a design standard, assisting developers in understanding cause and effect within AI systems and addressing the ‘how’ and ‘why’ of model behaviour. Explainability, conversely, often caters to end-users, elucidating why a specific decision was taken, the role of particular nodes in hidden layers, and their significance to overall model performance. Within XAI, one can either develop inherently transparent systems – often termed ‘white boxes’ – or systems that offer post-hoc explanations for their outputs. These explanations can be visualised using tools like heat maps or saliency maps, which highlight the influence of individual data points on the final result. With such a visual explanation, end-users can see for themselves whether the model is even relying on information relevant to the task at hand.
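
As an illustration of a post-hoc explanation, the sketch below computes a simple gradient-based saliency map: the gradient of the winning class score with respect to the input, whose magnitude indicates how strongly each input value influenced the result. It is a hedged sketch that assumes PyTorch and an already-trained `model` (such as the toy classifier above); production explanation tools are considerably more sophisticated.

```python
# A sketch of a gradient-based saliency map (assumes PyTorch and a trained model).
import torch

def saliency_map(model, x):
    model.eval()
    x = x.clone().requires_grad_(True)   # track gradients with respect to the input
    scores = model(x)                    # forward pass
    top_class = scores.argmax(dim=1)     # the predicted class for each example
    # Back-propagate the winning class score down to the input values
    scores.gather(1, top_class.unsqueeze(1)).sum().backward()
    return x.grad.abs()                  # large values = influential inputs

# Usage with the toy model above: which of the 20 inputs drove the prediction?
# sal = saliency_map(model, torch.randn(1, 20))
```

For image models, the same gradients can be rendered over the pixels as the heat maps or saliency maps mentioned above, letting end-users check what the model is actually attending to.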

Addressing the Black Box conundrum of AI remains a challenge. Notable players, such as Google’s DeepMind, are spearheading efforts to enhance AI interpretability, especially in medical domains. Despite current limitations, the essence of XAI revolves around ensuring AI systems do not provide the ‘right answers for the wrong reasons’.

On the political front, the Black Box nature of AI is also a reason why governments around the world are seeking to regulate its use. Explainability will help assess whether companies are complying with regulations like the European Union’s AI Act, and would help provide ‘meaningful information about the logic behind automated decisions’, as mandated under its General Data Protection Regulation. Regulations are often accused of slowing down development, but direction is perhaps more important than speed.

From a business perspective, there are concerns that the drive for XAI could hamper performance. However, AI researchers Rudin and Radin view the choice between accuracy and understandability as ‘a false dichotomy’ and insist that ‘interpretable AI models can always be constructed’. Given the implications, Black Box models should not underpin high-stakes decisions ‘unless no interpretable model can achieve the same level of accuracy’.
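
As a small illustration of Rudin and Radin’s point, the sketch below fits an inherently interpretable model – a shallow decision tree whose complete decision rules can be printed and audited, in contrast to the opaque weights of a deep network. scikit-learn and its bundled toy dataset are assumptions for demonstration only, not a recommendation for any real high-stakes setting.

```python
# A sketch of an inherently interpretable ('white box') model (assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)  # kept shallow for readability
clf.fit(data.data, data.target)

# Every rule the model applies is visible and auditable
print(export_text(clf, feature_names=list(data.feature_names)))
```

Whether such a model matches a Black Box’s accuracy must, of course, be checked case by case – which is precisely the comparison Rudin and Radin urge before deferring to opaque systems.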

Ultimately, the Responsible AI movement represents a commitment to justice and transparency, aiming to mitigate risks while promoting accountability. In our rapidly evolving world, it is crucial to ensure that technological advancements do not amplify algorithmic biases. As socio-technological challenges surface across different real-world settings, understanding these concerns is not just the job of engineers; users should be aware as well. Think of explainability in AI as the seatbelt in a car: ‘not necessary for the car to perform, but offers insurance if things crash’.

Bakhtawar Iftikhar is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at cass.thinkers@casstt.com

Design Credit: Mysha Dua Salman
