For decades, the idea of self-flying fighter aircraft has captivated the imagination of military aviation enthusiasts. Hollywood tried to cash in on this back in 2005 with the release of Stealth, which depicted an AI-controlled Unmanned Combat Aerial Vehicle (UCAV) slated to replace human pilots. Yet a fully autonomous combat aircraft has remained a distant dream. Technologically advanced states are now accelerating research and development to finally turn this dream into a reality.
In this regard, the United States (US) has been at the forefront of efforts to realise the transformative potential of AI in military aviation, spearheaded by a collaboration between the US Air Force Research Laboratory (AFRL) and the Defense Advanced Research Projects Agency (DARPA). While their work on autonomous air platforms was ongoing, Top Gun: Maverick hit cinemas in May 2022. Its opening sequence featured the line, ‘These planes you’ve been testing, one day, sooner than later, they won’t need pilots at all.’ Few movie-goers would have guessed that by December, a heavily modified F-16 fighter jet outfitted with two AI agents by AFRL and DARPA would take to the skies for the first time. The AI agents continued to undergo significant enhancements in trials throughout 2023. Interestingly, it took more than 100,000 software improvements before they were deemed capable of taking on a human pilot in September 2023, in the first real-world dogfight between man and machine.
Fast-forward to May 2024, when USAF Secretary Frank Kendall made international headlines by flying in the same AI-controlled jet. Surprisingly, the AI agents went toe to toe with an experienced human pilot, prompting Secretary Kendall to candidly compare AI and humans by pointing out the latter’s inherent limitations. Perhaps the biggest takeaway from the flight was a live demonstration of the pace at which the US has refined its AI agents in under a year. But what caught the media’s attention was his allusion to the eventuality that automation in aviation would replace humans.
The conversation around whether human pilots should remain in the cockpit in future conflicts is part of a larger international debate on lethal autonomy, centred on whether humans should remain ‘in the loop’. While the US has given repeated assurances that the integration of AI in the military will uphold safety and ethical norms, it remains to be seen whether it makes good on them. Even Secretary Kendall acknowledged the dilemma of being disadvantaged if restrictions were imposed on full autonomy in a potential conflict. This could have been a veiled reference to China’s rapid advances in developing AI-enabled military systems.
It is noteworthy, however, that ‘autonomy’ denotes a spectrum of independent control, and the challenges of fielding fully autonomous air combat platforms were highlighted even in Stealth, where the futuristic AI-controlled UCAV disobeyed the commanding pilot’s orders and got another pilot killed. The scene shows that operational concerns over lethal autonomy have existed since the idea of autonomous flight first took hold in the popular imagination. It also explains why USAF officials were quick to dismiss claims last year of an AI going rogue in an air combat simulation. Even so, the episode underscored that, given the unpredictable ‘black box’ nature of AI, such concerns are not confined to the realm of fiction.
For now, the silver lining is that prominent US officials have frequently underlined that all current R&D efforts by DARPA and AFRL in aviation AI are aimed at augmenting human pilots, not replacing them. Their expectation is that, with AI assistance, a pilot with 100 flight hours could perform on par with an officer having ten times as much experience. Moreover, they seek to assuage ethical concerns by stressing that their overarching objective is to improve manned-unmanned teaming (MUM-T).
Relatedly, there is an old adage in the military that trust is gained in teaspoons and lost in buckets. As such, persuading human pilots to relinquish control to an AI agent will be the key challenge for effective collaboration. For instance, it took pilots years to grow accustomed even to the Automatic Ground Collision Avoidance System (Auto-GCAS), which averts possible collisions and has saved lives. This could explain the US’ insistence that fostering trust between the human pilot and AI agents is the core objective of all ongoing AI programmes. It is intriguing that, despite being at the forefront of technological innovation, US scientists are deeply focused on imbuing AI with trustworthiness – a fundamental human trait. This emphasis underscores the recognition that technological advancement alone is insufficient without ethical considerations and reliability in AI systems.
This lends credence to the notion that in the future of military aviation, air combat will remain both a science and an art. While AI is likely to excel at the science, leveraging its computational power for real-time analysis of data, it is human pilots who have mastered the art, possessing the cognitive flexibility to respond to unforeseen scenarios where an AI’s decision-making is constrained by the limits of its training data. Therefore, the future of military aviation rests on effectively leveraging the strengths of both man and machine.
Mustafa Bilal is a Research Assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad. He can be reached at cass.thinkers@casstt.com