Human-Machine Teaming’s Shared Cognition Changes How War is Made

Number crunching: using AI to support decision-making is likely to privilege quantifiable measures rather than qualitative ones. Image: Xchip / Adobe Stock


Human-machine teaming rests on the idea of a shared cognition, even if the machine’s thinking is not at all like that of humans. Warfare has until now been the sole concern of humans, but no more.

A key attribute of both humans and AI is cognition, the processes involved in thinking. The form of this cognition differs significantly between the two and is generally viewed as complementary. Broadly speaking, the current AI-enabled human-machine teaming construct therefore involves a shared cognition, or at least a process of distributed, collective thinking.

For many defence forces, decision support systems (DSS) are seen as an important initial AI application. The current wars in Ukraine and Gaza appear to bear this out. While there are concerns over whether the human or the machine is in charge in such human-machine teams, the interaction between the two might be the more important issue.

Making interaction the main focus implies that it could be purposefully shaped to give more or less weight to the AI, the humans, or their combination. This weighting might be influenced by the decision-making purpose and context, and by the level of autonomy allocated to the AI or human part of the human-machine team. This is a choice that designers of human-machine teams both can and need to make.

Inherent in using AI to support making high-level choices is a perception of decision-making as more of a science than an art. This privileges data analytics that systematically convert the available data into predictions and insights, and that automate organisational workflows and outputs. Decisions accordingly come to be expressed in terms of data, favouring quantifiable measures over qualitative ones. An organisation’s traditional human-only operating model changes as AI joins in.

Changing the Nature of War

Clausewitz stressed that war involved ‘the collision of two living forces’ and was ‘composed of primordial violence, hatred, and enmity, which are to be regarded as a blind natural force; of the play of chance and probability within which the creative spirit is free to roam; and of its element of subordination, as an instrument of policy…’ For many, this trinity defines the nature of war and is immutable.

These three aspects are profoundly and exclusively human. AI is not passionate, creative or politically minded. Machine intelligence is quite different. Machine learning, today’s leading AI paradigm, is focused on task optimisation while being unconcerned with what the task is. Most agree with Clausewitz that war is ‘an act of force to compel our enemy to do our will’. However, AI does not have a will of its own and arguably cannot understand human will, let alone comprehend how to influence it.


As human-machine teams involve a shared cognition, their agency is the sum of human and artificial intelligence. Consequently, by design, human-machine teams will wage war using a collective intelligence that is different to that of humans alone. 

AI does not think like humans: it reasons differently, follows dissimilar logic flows and possesses unusual rationalities. In the business of making war, human-machine teams are consequently truly new actors. This entails a step change in the nature of war as currently understood, shifting it away from being a ‘blind natural force’, infused by a ‘creative spirit’ free to roam and subservient to human politics. As the human-only operating model of traditional military forces begins to incorporate AI, warfighting decisions will start to draw on quantifiable data rather than only qualitative judgements.

Changing the Character of War

The shift in the nature of war may cause corresponding shifts in the character of war, in particular with AI influencing organisational operating models. For simplicity, the operational level of warfare is seen here as comprising only manoeuvre and positional operating models.

Manoeuvre aims to create panic, a cognitive paralysis that leads to a collapse in the adversary’s will to resist. Basil Liddell Hart argued that manoeuvre warfare required cognitively surprising enemy commanders; however, he cautioned that: ‘Surprise lies in the psychological sphere and depends on a calculation, far more difficult than in the physical sphere, of the manifold conditions varying in each case, which are likely to affect the will of the opponent.’ Such a calculation would be a qualitative assessment and could only reasonably be done by humans; it would not be something that AI would understand or be able to quantify.

As the underlying premises of manoeuvre warfare are inexplicable to machines, human-machine teams are instead more likely to stress positional warfare. In positional warfare, military formations are placed in a location that compels the enemy to attack but which is favourable to the defence. Positional warfare aims to impose high attrition on the attacking enemy forces, progressively destroying an adversary’s equipment, personnel and resources at a pace greater than these can be replenished. 


Such attrition of materiel and people can be understood using quantitative measures and can accordingly be modelled and considered by AI. Given this, the machine element of human-machine teams will inherently favour positional warfare. This tendency suggests that when the opposing sides on a battlefield both employ AI-enabled DSSs, a certain stagnation may set in as the human-machine teams involved unconsciously favour fighting positional battles.

It is conceivable, but unprovable, that the shift to positional warfare in the Ukraine war in 2023 was partly due to the introduction of AI-enabled DSSs and their influence on strategic and operational thinking. In that year, the Delta DSS became increasingly widely used in the Ukrainian Armed Forces. 

Delta combines information from many diverse sources, sharply improving battlefield situational awareness and allowing the waging of data-based warfare at a speed and precision that other armed forces without such a DSS would have difficulty matching. Nico Lange observes that for the Ukrainian Armed Forces: ‘Artificial neural networks for rapid pattern recognition in complex data and machine learning have become a permanent and integral part of warfare.’ 

The Russian advantage in mass has to some extent been negated through the use of AI-enabled DSSs. In fighting north of Kyiv, Ukrainian forces were outnumbered 12 to one but defeated a Russian offensive. The defence now appears to be becoming dominant again in the defence-offence balance. The particular strengths of the machine part of the Delta human-machine team are arguably influencing Ukrainian thinking about combat operations, albeit at some risk of recreating the static, high-attrition warfare of the First World War.

The Israel Defense Forces’ (IDF) use of the Lavender DSS in Gaza may also be indicative. Lavender was apparently assessed by the IDF as more accurate and much faster than humans in targeting individuals believed to be associated with Hamas. It is perhaps unsurprising that the IDF appears to have taken advantage of this and fashioned its way of war accordingly.

From early on, the easiest individuals to locate and attack quickly were the foot soldiers rather than the senior commanders. These easier targets came to dominate the attack listings ahead of the more strategically important ones. The IDF attacked 15,000 targets in the first 35 days of its offensive – a very high number compared with previous major IDF military operations in Gaza.


Humans were in the loop, approving each target that Lavender identified, but quickly grew to trust the system, often approving target recommendations after only 20 seconds’ consideration. The IDF personnel involved reportedly treated the DSS’s outputs as equivalent to a human’s and granted it an implicit decision-making authority.

In so doing, the human partners of the human-Lavender teams appeared to display automation bias and action bias. Indeed, it could be said that Lavender was encouraging and amplifying these biases. In a way, the humans engaged in cognitive offloading to the machine. 

The human-machine team’s shared cognition balance became weighted towards the machine. This shift may then have helped push the IDF towards an attrition style of combat which, after more than 14 months, was still slowly grinding Hamas down. This slow battlefield progress against a relatively inferior adversary raises the question of whether a better outcome might have been achieved had more weight been accorded to human intelligence in the shared cognition of the IDF’s human-machine teams.

Conclusion

The choice of decision-making architecture in a human-machine team is a deeply consequential one. This choice architecture should be deliberately structured, conscious of the human-machine decision-making context, its implications and its parameters. If it is not, the machine part’s understanding of war in terms of quantitative data may come to dominate the human element’s thinking, unintentionally influencing strategies and ways of war, perhaps adversely.

Human-machine teaming involves collective thinking about military problems by two entities whose cognition is fundamentally different. Making war has until now been the sole concern of humans, but future decision-making will involve another. This is a real change.

© Peter Layton, 2025, published by RUSI with permission of the author

The views expressed in this Commentary are the author’s, and do not represent those of RUSI or any other institution.




WRITTEN BY

Dr Peter Layton

Associate Fellow




