US, NATO, and Chinese Models of AI Governance in Defence and Security

Author Name: Maheen Shafeeq      16 Jun 2022     Emerging Technologies

Over the years, there have been significant advances in the militarisation of Artificial Intelligence (AI). It is likely (if it is not already happening covertly) that AI will be embedded across the spectrum of future armed conflict, from offensive to defensive responses. This would enable AI to reach a stage where its applications permit cross-domain integration, man-machine teaming, swarming, intelligence gathering, and decision-making.

At present, several types of AI applications, such as autonomous drones, electronic warfare systems, target recognition, and the use of AI for logistics and transport, have already made their way into the common military toolkit. However, unregulated Research and Development (R&D) of AI applications could jeopardise meaningful human control and oversight of the battlefield. Furthermore, as safe and trusted defence AI applications have not yet fully matured, they could have a number of ethical, moral, and legal consequences that necessitate greater critical enquiry. To address these issues, the US Department of Defense (DoD), NATO, and China have taken measures to implement ethical processes and guidelines in the conduct of R&D and the deployment of AI technologies. However, these measures pursue varied goals, and their practical applicability remains ambiguous.

In 2020, the US DoD adopted a series of ethical principles covering five major areas: responsibility, equitability, traceability, reliability, and governability. These principles govern the R&D and deployment of AI on the battlefield and uphold ethical, legal, and policy commitments. However, this standardised guideline for AI applications could also serve the US government and defence forces as a legal basis for criticising other states. Therefore, these AI ethical principles should ideally be used as one set of guidelines, alongside those of NATO and China, from which to derive an international governance framework for AI use in defence and security. This could also assist in determining global ethical standards and in assessing the risks and consequences of AI applications on the battleground.

NATO’s six principles for responsible use of AI in defence and security are based on lawfulness; responsibility and accountability; explainability and traceability; reliability; governability; and bias mitigation. These principles aim to align common values and secure an international commitment by allied nations to the responsible and effective use of AI. One aspiration behind them was to speed up the adoption of AI by strengthening key AI enablers. As NATO is still at an early stage of AI research and development, the focus appears to be on nurturing an innovative AI ecosystem and reducing reliance on traditional capability development mechanisms.

While China has not formally published governance ethics for its military and defence AI applications, it recently published its first position paper on regulating military applications of AI. The paper calls for the establishment of an ‘effective control regime’ for ‘military applications of AI’ and urges countries to follow the principle of ‘AI for good’. It asserts that AI applications should remain under ‘relevant human control’; however, the definition of ‘relevant human control’ remains vague. For the governance of AI, it stresses international cooperation and the development of broader consensus on the formulation of a ‘universal international regime’. The position paper was presented by China’s Arms Control Ambassador at the United Nations, who opposed militarising AI to seek military superiority or to undermine the sovereignty and territorial integrity of other countries.

The above AI governance principles for defence and the military by the US, NATO, and China address different priorities. The US and NATO principles have been developed to preserve national security and strengthen international competitiveness, whereas the Chinese principles primarily focus on developing AI for the well-being of humanity and assisting developing countries in strengthening AI governance.

It has been acknowledged that China has overtaken the US in AI development, which could explain why the focus of the US and NATO has been on enhancing their competitive capabilities. In theory, all three have emphasised adopting governance (control/regulation) principles for the development and use of AI; however, the practical manifestation of these principles is yet to be seen.

The practical implementation of global regulation of AI military applications has been lagging. Even in states with advanced R&D, defence AI applications are still at an early phase and have not reached full maturity. Therefore, the present is an ideal time to develop mutually agreed AI principles that programmers, developers, coders, and manufacturers can adopt. The move towards practical adoption could start with multistakeholder discussions, workshops, conferences, and joint research between technologically advanced and technologically progressing countries, with the aim of developing an agreeable framework of AI governance principles, especially for military applications.

Maheen Shafeeq is a Researcher at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She holds a master’s degree in International Relations from the University of Sheffield, UK. The article was previously published by the International Policy Digest. She can be reached at

