Unmanned Aerial Vehicles have prompted widespread criticism, despite having proven their value in a variety of contexts. In many cases, their use may be the most logical, and the safest, option.
By Elizabeth Quintana for RUSI.org
The Ministry of Defence's recent report, published on 30 March by the Development, Concepts and Doctrine Centre (DCDC) and entitled 'The UK Approach to Unmanned Aircraft Systems', is to be welcomed and should prompt debate both in the UK and amongst allies about the acceptable use of unmanned systems by the military. Indeed, vigorous debates are already underway in France, Germany and the US over the conduct of a 'remote war', focusing in particular on the use of unmanned and cyber weaponry. However, looking past the outlandish headlines predicting 'death by terminator', it is clear that the ethics associated with employing such systems are dynamic, proving the adage that it is not what you use that counts, but the way that you use it.
A Risk, But Not Necessarily
Unmanned Aerial Vehicles (UAVs) lower the threshold for war. Conceptually, this is a serious risk, mainly because it is easier for politicians to send an unmanned asset, be it an aircraft or a cruise missile, across a nation's border than it is to deploy a manned asset. There is less political risk, at least from the voters: no pilot who could potentially die on the mission and subsequently be paraded across the world press. It is therefore important that the correct policy measures are put in place to avoid this eventuality. On the other hand, UAVs were originally designed for 'dull, dirty or dangerous' missions, yet in practice the 'dangerous' category is difficult for the larger systems. At present, no UAVs carry defensive aid suites, since such suites would defeat the original design intention that the systems be 'disposable'. However, the fact that many of these systems cost around £30 million each makes them anything but disposable: a point proven by operations over Libya, where Predator aircraft were deployed only once the threat from Libyan air defence systems had been removed. This means that UAVs are unlikely to be the first aircraft over the border in any intervention in the near future.
Then there is the issue of man versus machine. The public is happy for the pilot in the cockpit of an aircraft to make the decision, even though that pilot has a much smaller screen, very little information beyond the targeting co-ordinates received, and must simultaneously control a high-performance aircraft. The man or woman remotely piloting a UAV, by contrast, can concentrate fully on the task at hand for up to eight hours at a time. Remote pilots have a much larger screen and can, in some cases, visually identify individuals. They can connect with various headquarters if further information is needed; they have an image analyst on hand, expert in the norms and cultures of the people in theatre and armed with the latest intelligence on enemy movements; and they control an aircraft that can stay on station for up to twenty hours. This allows the pilot and his or her colleagues ample time to decide whether or not to fire. In many cases, insurgents will be followed home so that they can be picked up by local police or ground forces at a later date (this is particularly important for counter-insurgency operations, where locals may be coerced into joining the insurgency). The operators can also store the imagery to back up their decisions, should questions be raised about a particular incident. Their average tours are three years as opposed to three months, greatly enhancing their knowledge of the theatre. The proliferation of 'eyes in the sky' during operations in both Iraq and Afghanistan has been so effective that in Libya there is great reluctance to drop weaponry without imagery from a sensor available to provide real-time information on the situation on the ground and justification for a targeting decision. If anything, this suggests that it is more ethical to use remotely piloted aircraft than traditional combat aircraft.
A Switch to Full Autonomy?
The real nub of the issue lies with fully autonomous aerial vehicles: aircraft that can pick their own targets and release weaponry with little or no intervention from a human being. There are already a number of weapons that can autonomously seek out and destroy specific types of target, such as radar systems, anti-ship missiles and tanks. The difference lies in the range of targets that an unmanned vehicle might be able to engage and whether it can specifically target humans. Regardless, there is little or no military appetite at present for fully autonomous systems, not least because a malfunctioning system would be a strategic disaster. The military instead prefer to graduate from man-in-the-loop to man-on-the-loop, where some of the processing of information is managed by the machine but a human ultimately makes the final decision. The problem that most ethicists, and indeed some military personnel, have with this approach is that, unless there is information to explain how the machine arrived at its answer, the human will not question the machine's decision. The most tragic example of this came during the 2003 invasion of Iraq, when a US Patriot missile system mistakenly identified an RAF Tornado as an incoming missile and the operators shot it out of the sky.
Ultimately, whether a human can or will overrule a machine comes down to his or her understanding of the situation, prior training and skill. And a human can, even with the right training, act immorally: one of the main concerns about the use of Predator and Reaper UAVs in Pakistan is the lack of transparency with which the CIA is operating. Any military intervention will ultimately require a political settlement; the challenge lies in knowing when to use force and when to refrain. In both respects, UAVs, or Remotely Piloted Aerial Systems, are proving a valuable tool in providing greater security.
For example: Richard Norton-Taylor and Rob Evans, 'The Terminators: Drone strikes prompt MoD to ponder ethics of killer robots', The Guardian, 17 April 2011; and 'Drone strikes herald "Terminator-like reality", MoD warns', The Telegraph, 18 April 2011.
Associate Fellow - Specialist in Futures and Technology