Avoiding the Reign of Artificial Stupidity

Jack Watling
RUSI Defence Systems, 27 October 2020
Martial Power Programme, Military Sciences
Artificial intelligence in war may be a good servant, but it is likely to prove a terrible master, for while the character of stupidity may evolve, its nature remains immutable

Almost 3500 years ago, Pharaoh Thutmose III of Egypt was seeking to defeat a rebellion emanating from the fortress of Megiddo. There were three viable approaches to the fortress: two long but safe routes, and one perilous traverse of a ravine. The latter was shorter, but if the enemy caught Thutmose’s army moving through it, he would face catastrophic defeat. Thutmose was advised to take one of the long routes. He chose the ravine, and in doing so surprised his enemy, who had to carry out a tiring and hasty withdrawal to reposition and confront the Egyptian army. In the subsequent battle the Pharaoh’s army triumphed.

Any cold analysis of the data would likely have concluded that the ravine was the wrong choice. With the enemy’s position uncertain, the route risked total defeat while offering, at best, a far from decisive advantage in return. In fact, it was precisely because traversing the ravine was an unwise move that it succeeded: the rebels had left it unguarded. It may be tempting to write off Thutmose as lucky, but history is replete with commanders who routinely selected sub-optimal courses of action (COAs) that nevertheless proved highly successful, often because these choices made no sense to the adversary and so caught them off-guard. It could be said that this is what the British Army aspires to do with its manoeuvrist approach.

A long history of sub-optimal COAs leading to success on the battlefield is difficult to reconcile with the argument that command should increasingly be vested in artificial intelligence (AI). Computers determine positive outcomes by reference to criteria set by their designers. These criteria must be discrete and measurable, and they are rigid rather than malleable or contextual. Computers seek optimal solutions to such problems. Not only is it highly unlikely that AI will generate politically attuned and dynamic measures of success in the foreseeable future, but this would also likely be undesirable, because it would hamper understanding of what the AI was trying to achieve, and therefore trust in its judgement.
