Given the blistering pace of technological change, we need to look beyond AI’s immediate practical applications and start exploring deeper questions about how it is likely to alter relationships between and within societies.
Google recently launched Gemini, its new ‘language model’, which can produce realistic prose and creative images. A few days later, however, it removed Gemini’s image-creation feature, embarrassed by the model’s reluctance to draw white men in historically typical roles – as Nazis and popes, most notoriously. Instead, Gemini drew them as people of colour, in what was seen as an attempt to avoid the bias for which previous generative AI models have been criticised. An embarrassing episode for Google, it exemplifies the sometimes-awkward relationship between such technologies and the ways societies interact with them.
Google’s retreat is only a temporary bump in the road to ever more capable AI. There’s a sense of tremendous change afoot. Every week brings new breakthroughs and new controversies. In all areas of life, there’s great excitement and uncertainty about what AI will mean. Shares in NVIDIA, maker of the chips at the heart of AI, have surged so much that it is now the third-biggest company in the world, behind only Apple and Microsoft – both established technology titans now pivoting rapidly towards AI.
The same dynamics – rapid, mutually interacting technological and social changes – are playing out in conflicts, too. The ubiquitous battlefield drones in Ukraine are a harbinger of what is coming. Behind the scenes, algorithms work to find and cue targets. A senior US officer talks about ‘millions’ of cheap swarming drones – part of a programme the Pentagon has launched to accelerate procurement of cutting-edge tech.
While these developments are dramatic, we are arguably only scratching the surface of AI’s implications for conflict and, more broadly, geopolitics. We are discussing platforms and tactics when far more profound changes are already underway. We have barely started to think about these outside the realm of sci-fi and the excited chatter of technophiles.
What will it mean for strategy when AI is able to make insightful deductions about the minds of adversaries? Language models already demonstrate uncanny ‘theory of mind’ abilities. What about propaganda, when AI produces creative artificial content that is indistinguishable from – or perhaps even better than – the best human minds in Hollywood and Tin Pan Alley? What happens when scientific discoveries are made by intelligent machines – new materials, new medicines and – yes – new weapons?
It’s certainly important to discuss the ethics of ‘killer robots’ – automated weapons that can pick and prosecute their targets without human involvement. There is an abundance of analysis on such matters. And we are surely right to invest effort in building norms around the role of AI in conflict, as the new Global Commission on Responsible Military AI is doing.
But we also need to explore deeper questions about society, technology and geopolitics – and given the blistering pace of technological change, we need to start now. We know that earlier technologies have altered relationships between and within societies, opening the way to new individual and shared identities; reshaping possibilities for government; and recrafting the relationships between states. Even if it went no deeper than these earlier technologies, AI would still have a sizeable impact. This time, though, the changes are likely to be even more profound. Yes, AI is a ‘general-purpose’ technology – a bit like electricity. Yes, it is an information-processing technology – rather like the computer. Far more importantly, though, it is a decision-making technology. For the first time in our history, non-human intelligence will be shaping the destiny of humans.
Among the pressing geopolitical questions: who can create the best technology, and who can develop practical applications for it? Does AI favour certain types of society – open or authoritarian? Or are such descriptions even the best way of thinking about the new relationships that will emerge? Will more capable AI make conflict more or less likely? So much remains unsettled.
There are still naysayers who argue that AI is just another technology; that we’re living in a hype bubble; or that machines are not really ‘intelligent’ in the way that we are. After all, is battery power not a limiting factor? Is war – as events in Ukraine powerfully demonstrate – not still fundamentally about artillery? But too often the most cynical pundits have a sly tendency to move the goalposts with each advance in AI’s abilities – or, just as bad, they are reluctant to put in the hours reading about AI and adjacent fields. It is so much easier to make bold declarations that not much is new under the sun.
Even if AI did not advance much technically beyond where it is now, there is still plenty of disruption ahead, including in geopolitics. Google DeepMind recently published the results of an AI system that had discovered millions of wholly new materials – just one more example of the ways in which machine learning is powering innovation in scientific research. What will the geopolitics of such a world look like – one in which machine-generated ideas create economic value, health benefits and military power? We are about to find out.
For more on RUSI’s work on AI and geopolitics, see here.
The views expressed in this Commentary are the author’s, and do not represent those of RUSI or any other institution.
Have an idea for a Commentary you’d like to write for us? Send a short pitch to commentaries@rusi.org and we’ll get back to you if it fits into our research interests. Full guidelines for contributors can be found here.
WRITTEN BY
Professor Kenneth Payne