The Risk of Not Building a Consensus on Artificial Intelligence and Defence


Accountability gap: pressure is growing for leaders to agree a global framework on the responsible use of AI in defence. Image: TSViPhoto / Adobe Stock


As AI creeps into the battlespace, the lack of a global framework on the responsible use of AI in defence is threatening to become a strategic vulnerability.

As defence leaders and thinkers convene in Munich to take stock of the geopolitical landscape and multilateralism, the conversation will inevitably gravitate toward the role of AI in our collective security. The duality of AI's promise and peril is front and centre. On one hand, the potential for AI to revolutionise operational efficiency and decision-making in defence is undeniable. On the other, the absence of a global framework to govern the responsible use of AI in defence applications looms large over those aspirations.

There is a historical parallel: in the wake of the First World War, William Butler Yeats penned 'The Second Coming', a poem that seemed to capture the essence of a world transformed by the horrors of modern warfare. Through the imagery of a falcon spiralling beyond the reach of its falconer, Yeats conveyed a haunting metaphor for humanity's loss of control over the very technologies it had birthed.

In the case of the First World War, it was the introduction of chemical weapons, automated firepower in the form of the machine gun, and long-range howitzers that marked the departure into a new era of conflict, one characterised by an unprecedented scale of destruction enabled by technological advances. It was only after the Second World War that the international community was able to create a global framework spanning not only the UN but also the Bretton Woods system, as well as a series of regional military and political alliances and international agreements that formed the basis of the post-war international order.


This moment in history serves as a poignant precursor to our current epoch, in which the advent of AI – and in particular Generative AI – in defence presents a parallel quandary. For AI, the challenge lies in the complex environments where its effects may not be immediately apparent or easily predictable. AI's degree of autonomy in decision-making and its dual-use nature make it hard to determine accountability and responsibility, while increasing the complexity of regulating its proliferation.

Compliance and regulatory implications will have a profound impact on national and international trade, but a global framework will also be needed for the application of AI in the defence sphere. AI will deliver benefits across a wide range of enterprise applications, but as it creeps into the battlespace, a global framework for its military applications is needed to retain trust and legitimacy while avoiding escalation resulting from a lapse in human control.

This absence of a clear global framework on the use of AI in defence could become a strategic vulnerability. The application of AI for military purposes in the cyber domain and in autonomous systems is already an emerging need, given adversaries' use of AI, so it is important to put in place governance and frameworks that reduce some of the potential risks. National AI transformation will be accelerated by strategic power dynamics and by governments taking actions driven by national interests and the urgency of building sovereign AI solutions that are secure and resilient. The pressing need for a multilateral consensus on AI is not just about averting misuse, but also about securing our societies against adversarial uses of AI that are already impacting our national security.

Just as critical as these operational considerations is the way that AI may reshuffle alliances as countries build an AI industrial base to create and maintain the operational advantages that AI can deliver. Generative AI is foundational to economic and national security, and emerging incentives will drive the creation of a new AI industrial base – from semiconductors to models and hardware – creating new national players and new patterns of capital allocation and investment that will reshuffle the current defence industrial base. For instance, governments already hold significant and sensitive data, and will leverage it to build their own large language models and to work with industry on sovereign or hybrid solutions that enable a new frontier of value creation. Ultimately, the competition will be fuelled by fundamental capabilities: skills, capital, infrastructure, data and software.


This also means a new focus on deploying AI tools and applications in such a way that their decisions can be traced back to their origins for reliability and accountability purposes. This imperative goes beyond the strategic and operational aspects of responsible AI, necessitating a new layer of authentication and governance for AI models introduced into the military enterprise. From the chip to the application, there needs to be traceability and integrity.

In this era of rapid AI advancement, the challenge is to navigate the fine line between harnessing AI's potential for the greater good and ensuring it does not spiral out of our control. The lessons of history underscore the need for a balanced approach, one that embraces the promise of AI while instituting robust mechanisms to mitigate its risks. As we stand at this crossroads, the imperative of forging a global consensus on the responsible use of AI in defence cannot be overstated. It is a task that demands our immediate attention, lest we find ourselves facing the consequences of losing control over the very technology that offers so much promise.

The views expressed in this Commentary are the author’s, and do not represent those of RUSI or any other institution.



WRITTEN BY

Bryan Rich




