


Over the years, there have been significant advancements in the militarization of artificial intelligence. To a limited extent, we are already witnessing these innovations on the battlefield in Ukraine and elsewhere.

Several types of AI applications, such as autonomous drones, electronic warfare systems, target recognition, and AI-enabled logistics and transport, have already made their way into modern militaries. However, unregulated research into AI applications could jeopardize meaningful human control and oversight of the battlefield. These applications have not yet fully matured and could carry a number of ethical, moral, and legal consequences that necessitate greater oversight. To address these issues, the United States, NATO, and China have taken measures to implement ethical processes and guidelines in the development and deployment of AI technologies.

In 2020, the Pentagon adopted five ethical principles for AI covering responsibility, equitability, traceability, reliability, and governability. This set of principles governs the deployment of AI for both combat and non-combat functions, and it guides the U.S. military in upholding its ethical, legal, and policy commitments in the domain of AI. The principles also aim to ensure U.S. leadership in AI for years to come.

In 2021, NATO released six principles for the use of AI in defense and security, based on lawfulness, accountability, traceability, reliability, governability, and bias mitigation. These principles aim to align the common values of allied nations and to express an international commitment to abide by international law and ensure interoperability. The formal adoption of an AI strategy will enable the necessary collaboration between transatlantic allies to meet defense and security challenges. As NATO is at an early stage of AI research and development, its focus appears to be on nurturing an innovative AI ecosystem and reducing reliance on traditional capability development mechanisms.

While China has not formally published ethical principles for artificial intelligence, it has released its first position paper on regulating military applications of AI. The paper calls for the establishment of an “effective control regime” for “military applications of AI” and urges countries to follow the principle of “AI for good.” It asserts that AI applications should remain under “relevant human control,” although the definition of that term remains vague. For the governance of artificial intelligence, the paper stresses international cooperation and the development of a broader consensus toward a “universal international regime.”

The governance principles of the U.S., NATO, and China address different priorities. U.S. and NATO principles have been developed to foster cooperation among allies while strengthening international competitiveness, whereas China primarily focuses on developing artificial intelligence to assist developing countries in strengthening their governance.

Some observers have argued that China has surpassed the U.S. in AI development, which may explain why the U.S. and NATO have focused on enhancing their competitive capabilities. In theory, all three have emphasized adopting governance principles for the development and use of artificial intelligence; however, the practical manifestation of these principles has yet to be seen.

The practical implementation of global regulation of AI military applications has lagged. Even states with advanced research and development in AI defense applications are at an early stage of regulation, and their rules have not reached full maturity. The present is therefore an ideal time to develop mutually agreed principles and to facilitate their adoption by programmers, developers, and manufacturers. The move toward practical adoption could begin with multistakeholder discussions, workshops, conferences, and joint research between technologically advanced and technologically progressing countries, with the aim of developing an agreeable framework of AI governance principles.

Maheen Shafeeq is a researcher at the Centre for Aerospace & Security Studies (CASS). She holds a master’s degree in International Relations from the University of Sheffield, UK. The article was first published in International Policy Digest.




