
‘Future risks include terrorists leveraging AI for rapid application and website development, though fundamentally, generative AI amplifies threats posed by existing technologies rather than creating entirely new threat categories.’ This recent warning from Adam Hadley, founder of a UN-supported threat intelligence organisation, reflects growing global concern over the weaponisation of Artificial Intelligence (AI).

This assertion confirms what many experts have long feared: the swift adoption of emerging technology by non-state actors (NSAs). As advancements in technology accelerate, so does the threat of AI-enabled terrorism. From propaganda and recruitment to autonomous operations and terror financing, the potential use of machine intelligence by terrorist groups to support and bolster their activities is ever increasing and necessitates urgent mitigation efforts.

While AI can be exploited in various ways, its most accessible use is amplifying propaganda campaigns. Terror groups are increasingly taking advantage of generative tools, such as OpenAI’s ChatGPT, to create tailored and aesthetically appealing content meant to project their message to a wider audience. This can be seen in the Islamic State Khorasan Province’s (ISKP) dissemination of computer-generated, news-bulletin-style videos following attacks, each customised to the specific language and culture of its target region. The same digitally synthesised media is employed by other militant groups, including the Tehreek-e-Taliban Pakistan (TTP). This machine-generated material is then algorithmically amplified on social media through AI-powered bots that analyse engagement patterns to ensure the broadest dissemination possible.

AI’s ability to swiftly and effectively analyse patterns has the potential to be further misused for recruitment operations. Computationally, an intelligent system’s ability to rapidly collect and process large swathes of data could be exploited for the identification of potential recruits based on susceptibility to radicalisation messaging. This messaging could then be tailored in real time by continuously functioning chatbots, allowing for the customisation of mobilisation efforts based on individual responses and ideological profiles. The result is a fully automated, personalised recruitment mechanism.

AI-enabled terror financing is another aspect that poses a substantial threat. While traditional means of funding militant activities, such as the intermediary-based ‘Hawala system,’ are subject to specific countermeasures, newer forms of terror financing remain underexplored. Among them is the malicious use of virtual assets like cryptocurrency to bypass regulatory oversight and evade traceability. Another method is the use of automated trading bots, employed by groups such as the Islamic State of Iraq and Syria (ISIS) and al-Qaeda, which were found to possess over USD 1 million in cryptocurrency as of 2020.

In addition, the increasing sophistication of AI’s mimetic capabilities could assist in the acquisition of illicit funds through computer-generated audio deepfakes used for fraud. This method has already proven successful and has the potential to be abused on a far larger scale in the future.

While the malicious use of AI is largely linked to the digital domain, it is by no means limited to it. Physical strikes, such as traditional ramming attacks, could be augmented through the use of explosive-laden autonomous vehicles, eliminating the need for a human suicide bomber to achieve the intended effect. Similarly, civilian autonomous vehicles could be remotely hijacked and sent off course to create devastating damage, again without the need for a human operative. The use of autonomous drones by extremist groups poses a comparable threat. Drones, often in the form of explosive-bearing quadcopters, have already been extensively employed by NSAs including ISIS and Hezbollah. While their usage has been largely limited to pilot-to-target operations, the introduction of AI could widen their operational scope to include synchronised swarm attacks meant to overwhelm a target’s defences.

For a state like Pakistan, which continues to face the complex and enduring challenge of terrorism, the potential adoption of AI for malicious purposes poses a direct and urgent threat. To effectively counter this possibility, comprehensive frameworks and mechanisms need to be developed that address these issues at all levels, from radicalisation and recruitment to fundraising and operations. A core approach towards mitigating this threat could be to adopt a model similar to Germany’s ‘Monitoring System and Transfer Platform Radicalisation’ (MOTRA), which employs AI tools to detect early signs of radicalisation. Such real-time information on emerging radicalisation trends, combined with generative tools, can be utilised to disseminate tailored counter-messaging to susceptible audiences, disrupting militant groups’ access to a vital human resource.

Existing detection mechanisms can be further strengthened to monitor and accurately identify visual and audio anomalies in online media, preventing the malicious use of deepfake technology. Similar systems can be employed to recognise and flag discrepancies in crypto trading accounts as a means of disrupting digitally enhanced terror financing. Furthermore, the operational misuse of computerised intelligence can be countered by improving the resilience of commercial autonomous vehicles against cyber intrusions, alongside the implementation of machine learning (ML) techniques to track and identify drones in flight.

Ultimately, countering the malicious use of AI by violent actors requires not just technical safeguards, but a coordinated effort to identify, anticipate, and disrupt the threats emerging from this rapidly evolving technology.

Sajal Shahid is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at [email protected]


