AI and Cyber: Could the War of the Robots be the Next War in the Wires?
Fear of AI-driven cyber threats is growing, but history shows exaggeration can lead to misguided policy. A measured approach is crucial to avoid unnecessary escalation.
Fears around AI have begun to change cyber security, and the influence of uncertainty and anxiety on security strategy only stands to gain momentum. With the recent release of the Chinese AI model DeepSeek, Vatican warnings about ‘the shadow of evil’, and the lack of consensus on appropriate uses for AI in conflict, public concern will only grow. Yet none of these threats is nearly as powerful as the fear associated with it. The premature securitisation of AI could even lead to unnecessary escalation, especially if deterrence is perceived as the preferred strategy, a mistake already made with cyber.
We’ve seen all this before. AI is following a threat perception and escalation path trodden by cyber war, with premature securitisation following fear of the unknown and an extrapolation of threats dislocated from empirically verifiable evidence and experience. As with cyber war, AI isn’t the cyber security threat – hyperbole is. And if we don’t manage AI-related cyber security fears now, they could quickly grow out of control.
Exaggerating the AI threat to cyber security adds another perceived security problem that competes for attention and resources with the real problems affecting people today, including potential Russian assertiveness beyond Ukraine, food security, natural disasters, climate change, and disinformation. The best policy intervention now would be to examine tangible AI and cyber security scenarios and their attendant effects, with the cyber war trajectory as a reference point. The relative de-securitisation of AI within cyber security may be exactly what the cyber domain needs.
The Dangers of Premature Securitisation
Alarmism over AI’s potential impact on cyber security began early and grew quickly, and hyperbole’s firm grip seems unlikely to loosen. The Center for AI Safety has called the potential misuse of AI a ‘risk of extinction’, while Carnegie Europe cyber fellow Raluca Csernatoni has declared it a military ‘gold rush’ and a ‘double-edged sword for national security’. NATO has expressed concerns about the weaponisation of AI as a tool of ‘adversarial interference’. And of course, no emerging threat is complete without mention of an ‘arms race’.
The parallels with cyber war are eerily familiar.
In 1993, while the commercial internet was still finding its footing, RAND analysts John Arquilla and David Ronfeldt claimed that ‘[c]yberwar is [c]oming!’ Even with the thin penetration of internet technologies at the time, it was all too easy to extrapolate to a ‘war in the wires’ that could lead to virtual ‘bomb craters and war ruins’. This paradigm persisted for nearly 20 years, until Thomas Rid explained that ‘cyber war has never happened in the past, that cyber war does not take place in the present, and that it is unlikely that cyber war will occur in the future’. Erik Gartzke reinforced the point, adding that cyber war would only break out if ‘a foreign power has decided it can stand toe-to-toe with conventional U.S. military power’.
Despite these findings, the popular fear of cyber war persists and continues to grow. In the same announcement that raised the threat of an AI arms race, for example, the UK government declared, ‘Cyber war is now a daily reality’. Munich Re, one of the largest cyber insurers in the world, claims that cyber-attacks have become ‘a very essential and very effective part of warfare’. Such claims overlook the ‘disappointing’ cyber warfare results Russia has achieved throughout its recent war with Ukraine, the known ineffectiveness of cyber weapons, and the challenges of integrating cyber capabilities into combined arms operations.
The threat is created and amplified through discourse, even when there is plenty of analysis and empirical evidence to the contrary. This is not a new problem in cyber security. Frankly, we’ve seen with cyber war that empirical evidence doesn’t matter when people believe that a ‘big one’ is right around the corner. And concerns over an AI extinction event show that the tendency to fear the unknown, even in the face of contrary analysis and evidence, is alive and well.
History is Rhyming Right Now
US author Mark Twain may have said, ‘History may not repeat itself. But it rhymes’. The fears of AI within cyber security demonstrate the point. The rapid onset of AI anxiety in cyber security closely follows the pattern set by the hyperbolised fear of cyber war discussed above. As with cyber war, we’ve likely allowed (and encouraged) AI anxiety to grow too big, too fast. In fact, there are signs that analytical rigour is missing from the treatment of AI and cyber security.
Conditional language abounds with regard to the effects of AI on cyber security. Words like ‘could’, ‘may’, and ‘possibility’ feature in discussions about the weaponisation of AI and its cyber security implications, much as they have throughout the securitisation of cyber war. Leaning on conditional language implies that there hasn’t been enough experience to get past the need to extrapolate. When empirical evidence is available, conditional language isn’t as necessary, and extrapolation has a lighter load to haul.
Although there is plenty of reason to better understand the threat posed by AI within cyber security, the rush to forecast extinction events overlooks mitigating factors, including the defensive use of AI to counteract AI-enabled attacks. Some even see a shift toward a defensive advantage.
Steven Rehn, the US Army Cyber Command chief technology officer, explains that while ‘the old adage is the advantage goes to the attacker’, the situation has changed significantly, because ‘with AI and machine learning, it starts to shift that paradigm to giving an advantage back over to the defender’.
Yet characterising the AI contest as favouring either offensive or defensive actors may already be past its time. Cyber security scholar Jenny Jun proposes that we should instead ‘focus on the mediating factors and incentives that drive actors to develop, use, and apply AI’.
Policy Considerations
Fears of supercharged capabilities are running rampant and show no signs of slowing down. The strength of imagination can be difficult to counter, despite the fact that the ‘AI threat’, such as it is, could itself be neutralised through the use of AI-driven defensive capabilities. It’s worth remembering the observation of CSIS cyber security researcher James Lewis: ‘As a trope, a cyber catastrophe captures our imagination, but as analysis, it remains entirely imaginary and is of dubious value as a basis for policymaking.’
The likelihood that AI would somehow change that balance between imagination and analysis is slim. Further, the prospect that AI fears will be fuelled by similarly imaginable but unrealistic scenarios has only risen. Cyber security’s staccato historical rhymes offer a sense of what’s to come.
For this reason, early policy intervention is crucial – and that specifically means stopping to take a breath. Once the escalation of fear begins, it may take decades to unwind, as we’re starting to see with unfounded fears of cyber war and catastrophe. We need to separate imagination from analysis and plot a more disciplined course forward. Embracing the fear of AI-driven extinction simply distracts cyber security professionals from the tangible threats of today.
© Tom Johansmeyer, 2025, published by RUSI with permission of the author
The views expressed in this Commentary are the author’s, and do not represent those of RUSI or any other institution.
WRITTEN BY
Tom Johansmeyer