Over the years, there have been significant advancements in the militarization of artificial intelligence. To a limited extent, we are already witnessing these innovations on the battlefield in Ukraine and elsewhere.

Several types of AI applications, such as autonomous drones, electronic warfare systems, target recognition, and AI-enabled logistics and transport, have already made their way into modern militaries. However, unregulated research into AI applications could jeopardize meaningful human control and oversight of the battlefield. These applications have not yet fully matured and could carry a number of ethical, moral, and legal consequences that necessitate greater oversight. To address these issues, the United States, NATO, and China have taken measures to implement ethical processes and guidelines in the development and deployment of AI technologies.

In 2020, the Pentagon adopted five ethical principles for AI covering responsibility, equitability, traceability, reliability, and governability. These principles apply to the deployment of AI in both combat and non-combat functions, and they guide the U.S. military in upholding its ethical, legal, and policy commitments in the domain of AI. They are also intended to ensure U.S. leadership in AI for years to come.

In 2021, NATO released six principles for the use of AI in defense and security, based on lawfulness, accountability, traceability, reliability, governability, and bias mitigation. These principles are aimed at aligning the common values of allied nations, reinforcing their commitment to abide by international law, and ensuring interoperability. The formal adoption of an AI strategy is meant to enable the collaboration between transatlantic allies needed to meet defense and security challenges. As NATO is still at an early stage of AI research and development, its focus appears to be on nurturing an innovative AI ecosystem and reducing reliance on traditional capability development mechanisms.

While China has not formally published ethical principles for artificial intelligence, it has issued its first position paper on regulating military applications of AI. The paper calls for the establishment of an “effective control regime” for “military applications of AI” and urges countries to follow the principle of “AI for good.” It asserts that AI applications should remain under “relevant human control,” although the definition of “relevant human control” remains vague. For the governance of artificial intelligence, it stresses international cooperation and the development of a broader consensus on the formulation of a “universal international regime.”

The above governance principles of the U.S., NATO, and China address different priorities. The U.S. and NATO principles have been developed to foster cooperation among allies while strengthening international competitiveness, whereas China primarily focuses on the development of artificial intelligence as a means of assisting developing countries in strengthening their governance.

Some assessments hold that China has surpassed the U.S. in AI development, which may explain why the U.S. and NATO have focused on enhancing their competitive capabilities. In theory, all three have emphasized adopting governance principles for the development and use of artificial intelligence; in practice, however, the manifestation of these principles has yet to be seen.

The practical implementation of global regulation of AI military applications has been lagging. Even states with advanced research and development in AI defense applications are in the early phase of regulation, and their frameworks have not reached full maturity. The present is therefore an ideal time to develop mutually agreed principles that programmers, developers, coders, and manufacturers can adopt. The move towards practical adoption could start with multistakeholder discussions, workshops, conferences, and joint research between technologically advanced and technologically progressing countries, with the aim of developing an agreed framework of AI governance principles.

Maheen Shafeeq is a researcher at the Centre for Aerospace & Security Studies (CASS). She holds a master’s degree in International Relations from the University of Sheffield, UK. She can be reached at cass.thinkers@gmail.com. The article was first published in International Policy Digest.
