


‘Future risks include terrorists leveraging AI for rapid application and website development, though fundamentally, generative AI amplifies threats posed by existing technologies rather than creating entirely new threat categories.’ This recent warning from Adam Hadley, founder of a UN-supported threat intelligence organisation, reflects growing global concern over the weaponisation of Artificial Intelligence (AI).

This assertion confirms what many experts have long feared: the swift adoption of emerging technology by non-state actors (NSAs). As technological advancement accelerates, so does the threat of AI-enabled terrorism. From propaganda and recruitment to autonomous operations and terror financing, the potential use of machine intelligence by terrorist groups to bolster their activities is ever-increasing and necessitates urgent mitigation efforts.

While AI can be exploited in various ways, its most accessible use is amplifying propaganda campaigns. Terror groups are increasingly taking advantage of generative tools, such as OpenAI’s ChatGPT, to create tailored and aesthetically appealing content meant to project their message to a wider audience. This can be seen in the Islamic State Khorasan Province (ISKP)’s dissemination of computer-generated, news-bulletin-style videos following attacks, each customised to the specific language and culture of its target region. The same digitally synthesised media is employed by other militant groups, including the Tehreek-e-Taliban Pakistan (TTP). This machine-generated material is then algorithmically amplified on social media through AI-powered bots, which analyse engagement patterns to ensure the broadest dissemination possible.

AI’s ability to swiftly and effectively analyse patterns has the potential to be further misused for recruitment operations. Computationally, an intelligent system’s ability to rapidly collect and process large swathes of data could be exploited for the identification of potential recruits based on susceptibility to radicalisation messaging. This messaging could then be tailored in real time by continuously functioning chatbots, allowing for the customisation of mobilisation efforts based on individual responses and ideological profiles. The result is a fully automated, personalised recruitment mechanism.

AI-enabled terror financing is another aspect that poses a substantial threat. While traditional means of funding militant activities, such as the intermediary-based ‘Hawala system,’ are subject to specific countermeasures, newer forms of terror financing remain underexplored. Among them is the malicious use of virtual assets like cryptocurrency to bypass regulatory oversight and evade traceability. Automated trading bots offer another avenue for groups such as the Islamic State of Iraq and Syria (ISIS) and al-Qaeda, which were found to possess over USD 1 million in cryptocurrency as of 2020.

In addition, the increasing sophistication of AI’s ability to mimic human voices could assist in the acquisition of illicit funds through audio deepfakes deployed for fraudulent purposes. This method has already proven successful and could be abused on a far larger scale in the future.

While the malicious use of AI is largely linked to the digital domain, it is by no means limited to it. Physical strikes, such as traditional ramming attacks, could be augmented through the use of explosive-laden autonomous vehicles, eliminating the need for a human suicide bomber to achieve the intended effect. Similarly, civilian autonomous vehicles could be remotely hijacked and sent off course to cause devastating damage, again without the need for a human operative. The use of autonomous drones by extremist groups poses a similarly considerable threat. Drones, often in the form of explosive-bearing quadcopters, have already been extensively employed by NSAs including ISIS and Hezbollah. While their usage has been largely limited to pilot-to-target operations, the introduction of AI could widen their operational scope to include synchronised swarm attacks meant to overwhelm a target’s defences.

For a state like Pakistan, which continues to face the complex and enduring challenge of terrorism, the potential adoption of AI for malicious purposes poses a direct and urgent threat. To counter this possibility effectively, comprehensive frameworks and mechanisms need to be developed that address these issues at all levels, from radicalisation and recruitment to fundraising and operations. A core approach towards mitigating this threat could be to adopt a model similar to Germany’s ‘Monitoring System and Transfer Platform Radicalisation’ (MOTRA), which employs AI tools to detect early signs of radicalisation. Real-time information on emerging radicalisation trends, combined with generative tools, can be used to disseminate tailored counter-messaging to susceptible audiences, disrupting militant groups’ access to a vital human resource.

Existing detection mechanisms can be further strengthened to monitor and accurately identify visual and audio anomalies in online media, preventing the malicious use of deepfake technology. Similar systems can be employed to recognise and flag discrepancies in crypto trading accounts as a means of disrupting digitally enhanced terror financing. Furthermore, the operational misuse of computerised intelligence can be disrupted by improving the resilience of commercial autonomous vehicles against cyber intrusions, alongside the implementation of machine learning (ML) techniques to track and identify drones during flight.

Ultimately, countering the malicious use of AI by violent actors requires not just technical safeguards, but a coordinated effort to identify, anticipate, and disrupt the threats emerging from this rapidly evolving technology.

Sajal Shahid is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at [email protected]
