
How can the EU regulate military artificial intelligence?

What are the key challenges in governing military AI? The acceleration of AI development calls for concrete initiatives to legally govern military AI, yet despite the widely recognised need for regulation, there is a surprising lack of action.


Artificial Intelligence (AI) technologies are continually making global headlines, profoundly transforming how language and images are produced. The ever-increasing capabilities of generative AI raise fundamental questions of governance. While deep fakes and chatbots have thrust issues of regulation and trust into the public spotlight, AI also plays an increasingly significant, and for many a concerning, role in the military sphere, where its applications attract considerably less public awareness.

Even though states have deliberated over autonomous weapon systems (AWS) – weapons capable of selecting and striking targets without human intervention – at the United Nations in Geneva since 2014, progress has been sluggish because many of the issues are contentious. Many states agree that humans need to retain control over weapon systems; however, they disagree about the “appropriate” degree or manner of that control. Moreover, states do not share a common viewpoint on whether and how the integration of AI technologies into weapon systems should be regulated or prohibited.
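To make that disagreement more concrete, consider the following minimal Python sketch. It is purely illustrative, not a model of any real weapon system, and all names, types, and thresholds are invented: it contrasts a “human-in-the-loop” design, in which an operator must approve every engagement, with a “human-on-the-loop” design, in which the system acts autonomously unless an operator intervenes.

```python
# A minimal, purely hypothetical sketch contrasting two readings of
# "human control" over a weapon system. No real system is modelled;
# all names, types, and thresholds are invented for illustration.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Target:
    identifier: str
    classifier_confidence: float  # 0.0-1.0, produced by an AI model


def human_in_the_loop(target: Target,
                      operator_approves: Callable[[Target], bool]) -> bool:
    # The system can never engage unless a human explicitly approves.
    return operator_approves(target)


def human_on_the_loop(target: Target,
                      operator_vetoes: Callable[[Target], bool],
                      threshold: float = 0.9) -> bool:
    # The system engages on its own above a developer-set confidence
    # threshold, unless a human actively steps in to veto.
    if target.classifier_confidence < threshold:
        return False
    return not operator_vetoes(target)
```

In the first design, the human decision is a precondition for any action; in the second, it is merely an opportunity to interrupt, and the confidence threshold, typically fixed by the developer, determines how often a human is involved at all. Much of the diplomatic disagreement over “appropriate” human control can be read as disagreement over which of these architectures, and which parameter values, are acceptable.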

EU’s ambivalent stance on military AI

Legally binding international rules on AWS and AI in weapon systems remain a distant objective. Notably, national and regional legislation also scarcely addresses the military applications of AI: only a few states, including the USA and the UK, have adopted non-binding guidelines. The European Union’s (EU) draft AI Act, which proposes an ambitious and comprehensive regulatory framework for AI applications, explicitly excludes the military domain. This exclusion is surprising, given that the EU is increasingly investing in the development of AI technologies for military or dual-use purposes, for example by allocating funding through the European Defence Fund (EDF), which has earmarked €8 billion for research and development projects from 2021 to 2027. The EU thus simultaneously seeks to regulate and to promote AI technologies that could be used for military purposes.

The EU’s ambivalent stance as a hesitant regulator of military AI has two significant consequences, both of which favour a specific type of technical, corporate expertise. Firstly, the EU’s modest attempts at establishing rules on military AI attract technical and corporate experts, who contribute their proficiency as part of advisory panels. Secondly, the EU finds itself becoming a rule-taker, as its member states use military applications of AI that embody design choices made by these technical and corporate experts.

Influence and expertise in the EU’s global tech panel

As a rule-maker, the EU initiated a specific ‘expert’ committee – the Global Tech Panel (GTP) – in 2018 to offer advice on AI-related issues for security and defence purposes. As Federica Mogherini, the then High Representative of the Union for Foreign Affairs and Security Policy, stated:

I have created the Global Tech Panel, a group of experts from tech companies, both big and small, and from think tanks. They are the ones with the expertise to understand (…) the challenges related to the use of artificial intelligence in the defence sector.

Federica Mogherini, Former High Representative of the European Union for Foreign Affairs and Security Policy

Rather than producing a written report, the GTP operates informally through meetings and working sessions involving EU policymakers. Throughout 2018 and 2019, the GTP held a series of such meetings, including with EU defence ministers, and it was reconvened by High Representative Josep Borrell in February 2021. On this occasion, Borrell emphasised the need for

new types of partnerships and alliances to tackle the threats of the twenty-first century. […] We must use the potential of business and civic engagement to its full extent.

Josep Borrell, High Representative of the European Union for Foreign Affairs and Security Policy

However, what kind of expertise does the GTP possess, and who is considered an expert on military AI for the EU? Most of the 14 members of the GTP are either former or current representatives of tech or financial companies. This includes members who represent or were formerly affiliated with “Big Tech” companies such as Google and Microsoft. By contrast, civil society actors from both research institutions and non-governmental organisations are underrepresented.

By appointing such members to the GTP, the EU endorses a powerful, global narrative that confines AI expertise within a narrow range of tech companies. Through their (over-)representation in advisory bodies, industry representatives associated with a handful of tech companies thus wield substantial direct influence over political-regulatory processes.

Emerging norms and informal influence in the absence of formal regulation

The second consequence concerns what transpires in the absence of any formal, binding regulation or governance framework on AWS and other military applications of AI. In this case, rules and norms defining the “appropriate” use of AI technologies emerge through practice. The design practices behind military applications of AI are particularly significant because choices made at this stage determine an AI system’s capabilities and how it generates outputs. This gives considerable importance to the design choices of the technical and corporate experts who develop the AI technologies that EU member states eventually use in the military sphere, and it can position the EU as a rule-taker of rules embedded in AI technologies. While this might sound abstract, it is already evident that AI applications are not merely neutral tools: numerous news stories reporting AI biases, for instance in facial recognition and large language models (biases the EU itself has acknowledged), illustrate that AI technologies carry inherent normative implications.
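To illustrate this mechanism in miniature, the following hypothetical Python snippet shows how a single developer-chosen design parameter, here a global decision threshold, produces systematically unequal error rates across groups when a model scores those groups differently. All numbers, names, and distributions are invented purely for illustration.

```python
# Hypothetical illustration: one developer-chosen design parameter
# (a global decision threshold) yields unequal false-match rates when
# a model's scores are distributed differently across groups.
# All numbers are invented for illustration.

import random

random.seed(0)


def simulated_scores(mean: float, n: int = 10_000) -> list[float]:
    # Stand-in for a recognition model's match scores for
    # non-matching faces within one demographic group.
    return [min(1.0, max(0.0, random.gauss(mean, 0.1))) for _ in range(n)]


THRESHOLD = 0.5  # one value, chosen at design time, applied to everyone

# Suppose the model, trained on unrepresentative data, produces
# systematically higher non-match scores for group B than for group A.
group_a = simulated_scores(mean=0.35)
group_b = simulated_scores(mean=0.45)

for name, scores in [("A", group_a), ("B", group_b)]:
    false_match_rate = sum(s >= THRESHOLD for s in scores) / len(scores)
    print(f"Group {name}: false-match rate {false_match_rate:.1%}")
```

The point is not the particular numbers but the mechanism: both the threshold and the data the model is trained on are design-stage choices, yet together they determine whom the system flags. In the absence of formal regulation, such choices may never be reviewed outside the developing company.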

In the military and dual-use AI realm, the leading developers of such technologies are primarily based in the USA. Consequently, it is predominantly US-manufactured technology that EU member states use for military and security purposes. This is significant because the regulatory stances of various EU member states on the required level of human control over weapon systems integrating AI technologies do not necessarily align with those of the US. US stances may nonetheless be adopted inadvertently, because they come encoded in the AI technologies that EU member states use.

In summary, the lack of formal regulation of military AI makes informal influence and the emergence of norms through informal processes more probable, as these practices serve as the only specific source of normative content. Notably, the need for regulation and prohibition appears to be widely recognised by political actors such as the EU. Yet there is a surprising lack of action towards concrete initiatives for legally governing military AI, even as its development accelerates.

This research was conducted as part of the AutoNorms project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 852123.


Journal reference

Bode, I., & Huelss, H. (2023). Constructing expertise: the front- and back-door regulation of AI’s military applications in the European Union. Journal of European Public Policy, 30(7), 1230–1254. https://doi.org/10.1080/13501763.2023.2174169

Ingvild Bode is an Associate Professor at the Centre for War Studies, University of Southern Denmark. Her research focuses on processes of normative and policy change, particularly concerning the use of force. Her work has been published in reputable journals such as the European Journal of International Relations, Journal of European Public Policy, Review of International Studies, International Studies Review, and others. She serves as the Principal Investigator of the ERC-funded project AutoNorms (2020–2025), which explores the impact of autonomous weapon systems on norms.

Dr. Hendrik Huelss is an Assistant Professor at the Centre for War Studies, University of Southern Denmark, and a Senior Researcher in the European Research Council (ERC) project AutoNorms (2020-2025). Dr. Huelss' research combines an interest in norms in International Relations with perspectives on AI in politics. He is particularly focused on exploring the implications of the digital transformation for the security and military domains. His work has been published in prestigious journals such as the Journal of European Public Policy, International Political Sociology, International Theory, and Review of International Studies. Additionally, he is the co-author, alongside Ingvild Bode, of the book Autonomous Weapons Systems and International Norms (McGill-Queen's University Press).