


As militaries around the world contemplate the use of Artificial Intelligence (AI) in warfare, Israel’s onslaught in the Gaza Strip offers an important point of reflection. This article explores the AI system called ‘Gospel’ or ‘Habsora’, pioneered by Unit 8200, an Intelligence Corps unit of the Israel Defense Forces (IDF). It asserts that instead of being mesmerised by technological advancements and hastily deploying them, militaries must assess the potential pitfalls involved.

While Israel first demonstrated the use of AI in conflict in 2021 during Operation Guardian of the Walls, more information regarding Gospel has come to light in the recent conflict. Gospel is a target-generation system based on Machine Learning (ML), drawing on Big Data that combines myriad streams of information derived from ‘human intelligence (HUMINT), signal intelligence (SIGINT), visual intelligence (VISINT), geographical intelligence (GEOINT)’ and more. According to Blaise Misztal, Vice President for Policy at an institute that facilitates US-Israel military cooperation, this would include ‘cell phone messages, drone footage, satellite imagery and seismic sensors’.

The data is then used to supply a list of suggested targets to the Military Intelligence’s research division. The list flags alleged ‘operatives’ or infrastructure affiliated with Hamas or Islamic Jihad, and a human commander or analyst decides whether or not to act on this intelligence. Since Gospel operates faster than a team of intelligence officers, the military has accelerated the rate at which targets are produced: it can now generate some 100 targets per day, whereas it could identify only around 50 per year in the past, earning it the label of an infamous ‘mass assassination factory’.
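To make this workflow concrete, the sketch below is a purely hypothetical illustration of how a multi-source target-suggestion pipeline with a human review gate might be structured. Habsora’s actual design is not public; every class name, weight and threshold here is invented for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration only: the real system's data, features and
# thresholds are not public. All names and values below are invented.

@dataclass
class IntelSignal:
    source: str    # e.g. "SIGINT", "VISINT", "GEOINT", "HUMINT"
    score: float   # model-assigned relevance score in [0, 1]
    weight: float  # analyst-assigned trust weight for this source

@dataclass
class CandidateTarget:
    location_id: str
    signals: List[IntelSignal] = field(default_factory=list)

    def fused_score(self) -> float:
        """Weighted average of per-source scores (one of many possible fusion rules)."""
        total_weight = sum(s.weight for s in self.signals)
        if total_weight <= 0:
            return 0.0
        return sum(s.score * s.weight for s in self.signals) / total_weight

def suggest_targets(candidates: List[CandidateTarget],
                    threshold: float = 0.8) -> List[CandidateTarget]:
    """Return candidates whose fused score clears the threshold, highest first.
    These are only suggestions: each item still requires human review."""
    flagged = [c for c in candidates if c.fused_score() >= threshold]
    return sorted(flagged, key=lambda c: c.fused_score(), reverse=True)

if __name__ == "__main__":
    candidates = [
        CandidateTarget("A-101", [IntelSignal("SIGINT", 0.9, 1.0),
                                  IntelSignal("VISINT", 0.7, 0.5)]),
        CandidateTarget("B-202", [IntelSignal("GEOINT", 0.4, 1.0)]),
    ]
    for target in suggest_targets(candidates):
        # A human commander or analyst must decide whether to act on each suggestion.
        print(target.location_id, round(target.fused_score(), 2))
```

The sketch underscores that the speed gain comes from automating fusion and filtering, while the weights and threshold quietly encode judgements that remain opaque to the reviewing officer.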

Gospel is controversial for several reasons. To begin with, algorithms are ‘notoriously flawed’. The AI-generated suggestions could be oblivious to nuances where a human officer would show restraint, or fraught with biases of which the human officer is not cognizant. Either way, the black-box nature of AI is an inherent problem that undermines transparency and is reason enough not to take AI’s word as gospel, let alone act upon it to attack densely populated urban settings as the IDF has.

Furthermore, and worse still, even if the AI-generated list were perfect, innocent civilians are deliberately targeted by some ‘trigger-happy’ commanders in charge of the review. Even the IDF’s Air Force chief has openly admitted that the military’s approach is not ‘surgical’. This deliberate striking of public buildings and private residences as ‘power targets’ or ‘matarot otzem’ to create shock is particularly alarming, especially since the Israeli military prides itself on ‘precision’ and technological prowess. The incessant bombing has disproportionately killed over 20,000 people and displaced more than 1.9 million. Hence, instead of merely questioning the efficiency of Habsora, one should also question the Israeli military’s internal protocols, which conveniently neglect humanitarian obligations and use AI as a ‘technological cover’ for its genocidal campaign.

Additionally, the training data that feeds these ‘target banks’ evokes parallels with Israeli surveillance practices. The market value of Israeli weaponry is built on the dismal commodification of Palestinian lives: Palestinians are surveilled en masse and then treated as lab rats in a Palestinian Laboratory to produce ‘battle-tested’ technology. Overall, this dehumanisation of innocent Palestinians has significantly tarnished Israel’s reputation, raising the price of what is being termed a ‘victory’.

Thus, Israel’s Gospel is a vivid yet abhorrent glimpse into the dark side of AI-enabled warfare, for it will always be associated with one of the most brutal military campaigns in history. In reality, responsibility cannot be attributed to AI: the onus lies on the humans who hastily employ erroneous systems without due diligence and with wilful ignorance. Therefore, just as world leaders must call for a permanent ceasefire in Gaza and hold Israel accountable for war crimes at this critical time, they must also expedite the development of global AI legislation and safeguards, with all important actors on board. Such regulations should also entail clauses that prevent the spread of surveillance culture for military use. Until the usage protocols, efficiency and fairness of AI systems are transparently established, it is catastrophic to employ imperfect and opaque systems in high-stakes environments like war. Moreover, regulation could help ensure that even near-perfect AI is not weaponised in the future to commit gross atrocities.

In the same vein, state militaries should ensure that the fog of war does not cloud their judgement; AI systems should not be integrated into militaries at the cost of humanitarian obligations, but rather to strengthen compliance by minimising civilian harm. The aim is not to stigmatise technology, but to prevent large-scale human suffering like that of the Palestinians. To conclude, technological advancements indeed reflect the remarkable human ability to innovate, but it is up to us to exercise responsibility and prevent AI from becoming an oppressive automaton.

Bakhtawar Iftikhar is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at cass.thinkers@casstt.com.

Design Credit:  Mysha Dua Salman
