Understanding Adversarial AI: The Military Lens

Modern warfare is characterised by the race to compress the Observe-Orient-Decide-Act (OODA) loop: militaries that can process information faster will have an edge in future conflicts. In this context, machine learning (ML), a prominent subset of artificial intelligence, has significant potential to transform the battlefield. In one of my own articles, I have explained the unprecedented efficiency that AI promises across different domains, ranging from intelligence, surveillance and reconnaissance (ISR), autonomous systems, planning and training, and logistics and predictive maintenance to offence (for further details, please refer to the full research article). However, this leverage comes with undercurrents of concern. One of the major challenges in this regard is adversarial attacks: techniques for attacking ML models and the data they learn from.

Adversarial attacks strike at the core logic that fuels ML. Attack techniques include tampering with either the training data or the ML models used in various applications to degrade their functioning or alter their output, undermining the very advantage that militaries aim to gain from AI. The target can be data-capturing sensors, communication links, or data storage and labelling points. The intended objective can be achieved through several methods, including poisoning attacks, evasion attacks and extraction attacks. Poisoning attacks take place during the training phase of machine learning: malicious data is injected so that the model learns incorrect patterns. In contrast, evasion attacks occur during the inference (testing) phase, where manipulated inputs cause the model to misclassify without any change to the training data. Extraction attacks, meanwhile, use repeated queries to recover sensitive information about a model or its data. This vulnerability to manipulation can easily be weaponised by state and non-state actors alike to impede, blind or misdirect military systems.
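The distinction between poisoning and evasion can be made concrete with a toy example. The sketch below is purely illustrative: it uses a nearest-centroid classifier on synthetic two-dimensional "friendly" versus "hostile" signatures (all labels and numbers are invented for illustration) and shows how injecting mislabelled points into the training set drags a class centroid across the decision boundary, collapsing accuracy on clean data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D "sensor signatures": two well-separated classes standing
# in for friendly (0) and hostile (1) contacts. All numbers are invented.
friendly = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(100, 2))
hostile = rng.normal(loc=[2.0, 0.0], scale=0.5, size=(100, 2))
X = np.vstack([friendly, hostile])
y = np.array([0] * 100 + [1] * 100)

def train_centroids(X, y):
    """'Train' a nearest-centroid classifier: one mean vector per class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    """Assign each sample to the class with the nearest centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

clean_acc = (predict(train_centroids(X, y), X) == y).mean()

# Poisoning attack: inject crafted points deep in hostile territory but
# label them "friendly", dragging the friendly centroid across the
# decision boundary during training. The evaluation data is untouched.
poison = rng.normal(loc=[6.0, 0.0], scale=0.3, size=(200, 2))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(200, dtype=int)])

poisoned_acc = (predict(train_centroids(X_poisoned, y_poisoned), X) == y).mean()

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

The point of the sketch is that the attacker never touches the deployed inputs; corrupting what the model learns from is enough to invert its notion of "friendly".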

In this context, adversarial AI can impair various military applications, creating new challenges for decision makers across all domains. For instance, the battlefield awareness of land forces can be undermined, leading to misdirected strikes. In the aerial domain, evasion attacks can degrade the functioning of radars. In the maritime domain, sonar classification models can be targeted via poisoning attacks, impairing their ability to distinguish friendly from hostile ships. Adversarial noise shaping can also lure underwater autonomous systems into ghost channels. Even pixel-level perturbations can have significant effects across numerous applications.
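An evasion attack, by contrast, leaves training entirely untouched and perturbs the input at inference time. The following sketch assumes a simple fixed linear detector (the weights and the input are invented for illustration) and applies an FGSM-style signed step against the weights, flipping the classification with only a small per-feature change.

```python
import numpy as np

# A fixed linear detector standing in for a trained model:
# score > 0 -> "hostile", otherwise "friendly". Weights are invented.
w = np.array([2.0, -1.5, 1.8, 1.2])
b = -0.1

def classify(x):
    return "hostile" if w @ x + b > 0 else "friendly"

x = np.array([0.2, 0.3, 0.1, 0.2])   # a genuinely "hostile" input

# Evasion (FGSM-style signed step): nudge every feature by a small
# epsilon against the sign of its weight, lowering the score just
# enough to flip the label. Model and training data are never touched.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(classify(x), "->", classify(x_adv))  # hostile -> friendly
```

This is the mechanism behind pixel-level perturbations: a change too small to matter to a human observer can be precisely shaped to cross the model's decision boundary.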

Adversarial attacks can degrade mission effectiveness and can even lead to fratricide. Tampered data also corrupts channels where multi-source fusion is employed. While direct attacks on systems remain a challenge in their own right, the erosion of human situational awareness through coordinated adversarial attacks poses an even greater one. In addition, operational logistics can be jeopardised across all branches: such attacks can distort or misallocate logistic priorities during critical periods. Together, these vulnerabilities can compromise operational decision-making.

The impact of adversarial AI can also extend into strategic decision-making. Australia has embraced a national defence strategy grounded in deterrence by denial. Deterrence is, at its core, a strategy based on perceptions of relative advantage and the likelihood of success. It relies on finely balanced judgments about objectives, thresholds, risk calculations and intentions. Adversarial AI can erode the reliability of the information on which these judgments and decisions are based. Spoofed radars, misclassified ISR and compromised communication channels will lead to misinterpretation, and any decision or action triggered by corrupted data can result in unintended escalation. These circumstances amplify the probability of escalation not by intent but by maliciously injected error, a dangerous proposition for future warfare. Given the complexity of an AI-enabled threat environment, such attacks are difficult to anticipate, and the appropriate response to them remains to be deliberated. The absence of guardrails in the form of international regulations compounds these challenges, leaving a major lacuna and associated risks.

Recent conflicts, notably the ongoing Russia-Ukraine war, the May 2025 India-Pakistan standoff, and the Iran-Israel conflict, have demonstrated the growing role of emerging technologies on the battlefield. Adversarial AI, if weaponised, will further complicate these regional and geopolitical flashpoints.

Regarding the way forward, Explainable AI (XAI) is one of the most discussed remedies for the dangers of adversarial AI. It is essential to note, however, that while XAI can help uncover spurious correlations and identify distribution shifts, it does not make AI models robust; it only makes them more transparent. Keeping a human in the loop is one of the primary and most effective ways to mitigate the threat, but it comes at a cost in performance. An AI-enabled decision support system offers an advantage only if commanders can harness the speed of its processing; placing a human back in the loop risks forfeiting that very advantage. Some applications will also remain inherently more vulnerable than others, and under certain circumstances, such as in ISR where human involvement is limited, even AI-human teaming may not work as required.

It has become imperative for militaries to incorporate adversarial AI into war games and simulations to enhance their preparedness. Joint service protocols can also play an effective role in this regard. Relying on heterogeneous and independent modalities ensures that an attack on the data or model in one channel does not disable the entire system. The increasing frequency of adversarial attacks may also compel states, particularly hostile neighbours, to develop additional confidence-building measures (CBMs) for communicating anomalous behaviour in a timely way.
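The value of heterogeneous, independent modalities can be illustrated with a toy fusion sketch. It assumes three imperfect but independent channels (the "radar", "sonar" and "electro-optical" names are purely illustrative) voting on the same contacts; even when one channel is fully compromised by an adversary, a simple majority vote across the remaining honest channels preserves most of the accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth for 50 contacts: 1 = hostile, 0 = friendly (synthetic).
truth = rng.integers(0, 2, size=50)

def channel(truth, error_rate):
    """An imperfect but honest channel: flips each label with some probability."""
    flips = rng.random(truth.size) < error_rate
    return np.where(flips, 1 - truth, truth)

sonar = channel(truth, 0.05)
eo = channel(truth, 0.05)            # electro-optical channel

# Adversary fully compromises the radar channel: every label is inverted.
radar_attacked = 1 - truth

def fuse(*channels):
    """Majority vote across independent channels."""
    return (np.mean(channels, axis=0) > 0.5).astype(int)

single_acc = (radar_attacked == truth).mean()   # 0.00 -- fully blinded
fused_acc = (fuse(radar_attacked, sonar, eo) == truth).mean()

print(f"compromised channel alone: {single_acc:.2f}")
print(f"fused across modalities:   {fused_acc:.2f}")
```

The design choice here is redundancy through independence: the attack succeeds against one modality but cannot outvote two honest ones, which is precisely why heterogeneous sensing makes a fused system harder to blind.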

In the end, the race is not just to harness AI on the battlefield but also to defend it. If left unchecked, adversarial attacks can blind or mislead AI-enabled forces, acting as a catalyst of confusion and uncertainty rather than a means of shortening the OODA loop. Future AI advantage will therefore depend not only on fielding advanced algorithms but also on safeguarding them against manipulation. Failure to build this resilience could unleash unprecedented challenges.

Shaza Arif is a Senior Research Associate at the Centre for Aerospace & Security Studies (CASS), Islamabad. The article was first published in The Forge, Australia. She can be reached at [email protected].


