Over the years, there have been significant advancements in the militarization of artificial intelligence. To a limited extent, we are already witnessing these innovations on the battlefield in Ukraine and elsewhere.
Several types of AI applications, such as autonomous drones, electronic warfare systems, target recognition, and AI-driven logistics and transport, have already made their way into modern militaries. However, unregulated research into AI applications could jeopardize meaningful human control and oversight of the battlefield. AI applications have not yet fully matured and could have a number of ethical, moral, and legal consequences that necessitate greater oversight. To address these issues, the United States, NATO, and China have taken measures to implement ethical processes and guidelines in the development and deployment of AI technologies.
In 2020, the Pentagon adopted five ethical principles for AI covering responsibility, equitability, traceability, reliability, and governability. This set of principles applies to the deployment of AI for both combat and non-combat functions. These principles also guide the U.S. military in upholding its ethical, legal, and policy commitments in the domain of AI, and they aim to ensure U.S. leadership in AI for years to come.
In 2021, NATO released six principles for the use of AI in defense and security, based on lawfulness, accountability, traceability, reliability, governability, and bias mitigation. These principles aim to align allied nations around common values and an international commitment to abide by international law and ensure interoperability. The formal adoption of an AI strategy will enable the collaboration between transatlantic allies needed to meet defense and security challenges. Because NATO is still at an early stage of AI research and development, its focus appears to be on nurturing an innovative AI ecosystem and reducing reliance on traditional capability development mechanisms.
While China has not formally published ethics guidelines for artificial intelligence, it has published its first position paper on regulating military applications of AI. The paper calls for the establishment of an “effective control regime” for “military applications of AI” and urges countries to follow the principle of “AI for good.” It asserts that AI applications should remain under “relevant human control,” although the definition of “relevant human control” remains vague. On the governance of artificial intelligence, it stresses international cooperation and the development of a broader consensus toward a “universal international regime.”
The governance principles of the U.S., NATO, and China address different priorities. The U.S. and NATO principles have been developed to foster cooperation among allies while strengthening international competitiveness. China, by contrast, focuses primarily on developing artificial intelligence to assist developing countries in strengthening their governance.
It has been acknowledged that China has surpassed the U.S. in AI development, which may explain why the U.S. and NATO have focused on enhancing their competitive capabilities. In theory, all three have emphasized adopting governance principles for the development and use of artificial intelligence; in practice, however, the manifestation of these principles has yet to be seen.
The practical implementation of global regulation of AI military applications has lagged. Even states with advanced research and development in AI defense applications are in the early phase of regulation, and their frameworks have not reached full maturity. The present is therefore an ideal time to develop mutually agreed principles that programmers, developers, and manufacturers can readily adopt. The move toward practical adoption could start with multistakeholder discussions, workshops, conferences, and joint research between technologically advanced and technologically progressing countries, with the aim of developing an agreeable framework of principles for AI governance.
Maheen Shafeeq is a researcher at the Centre for Aerospace & Security Studies (CASS). She holds a master’s degree in International Relations from the University of Sheffield, UK. She can be reached at email@example.com. The article was first published in International Policy Digest.