Lethal Artificial Intelligence and Autonomy


RUSI convened a conference on Lethal Artificial Intelligence and Autonomy on 7 November 2018, aimed at illuminating some of the challenges unique to military forces in the use of AI and autonomy.

The conference was designed to stimulate a discussion about the ethics, morality and legal aspects of lethal AI and autonomous systems, but solely within the context of war and warfare. Military personnel took part alongside around 90 international delegates from the academic, government, non-governmental, charity and industrial sectors.

The event also sought to extend the debate beyond the usual narrative that focuses on ‘killer robots’. It is noteworthy that UK military forces already envisage a significant role for autonomous AI in decision-making, intelligence analysis, and command-and-control functions within their force structure. US, French and German doctrines signpost similar investment and development paths. Much of this Western military orthodoxy appears to rest on the presumption that AI and autonomy will inevitably lead to a Revolution in Military Affairs (RMA), an expression that carries emotive historical connotations for military audiences. Yet since the inception of AI, wars have become more expensive and the technology’s much-predicted revolutionary effect has failed to materialise; if anything, the absence of radical change has merely underlined thousands of years of slower and more considered military evolution. AI and autonomy may therefore be more about enhancing the current paradigm of war than overthrowing it. Despite this, the weight of discussion in Western military circles continues to treat AI and autonomy as a transformation, without evidential support.


WRITTEN BY

Professor Peter Roberts

Senior Associate Fellow


