Posted: Nov 15, 2013
Better to know your enemy when taking on a killer robot
(Nanowerk News) The Campaign to Stop Killer Robots, a network of NGOs and academics, has done us all a valuable service by drawing attention to the development of unmanned systems that are able to kill without direct supervision by a human being.
The campaign wants the issue to be taken up by the UN Convention on Conventional Weapons and is calling on member states to discuss putting it on the agenda at a meeting in Geneva today.
However, the campaign has not been entirely clear about what killer robots actually are, even though it implies that a ban on such robots would be its preferred outcome.
In order to understand the background to the campaign, it is necessary to engage with contemporary developments in military technology. Recent advances in sensor technology and satellite communications have made it possible to deploy remotely-piloted aerial vehicles (RPAVs), also colloquially known as drones. Contemporary RPAVs have sensors that enable them to record images and transmit them via a satellite link to an operator who might be thousands of miles away. Based on the information they receive from the system, operators can issue commands to the drone via the satellite link.
With a click on a joystick the operator can launch a deadly attack on a target. The Predator drone, probably the best known RPAV, is equipped with Hellfire missiles, which weigh around 100lbs and can take out armoured vehicles. RPAVs, it must be stressed, are not illegal under international law, but the fact that they essentially allow us to kill by remote control makes many feel uneasy.
This feeling of uneasiness is likely to grow stronger when one considers the future development of weapons where a human operator isn’t even needed to apply deadly force to a target.
Arguably, today’s RPAVs offer the blueprint for tomorrow’s autonomous weapons. For instance, the Taranis aircraft, currently being developed by BAE Systems, is a small stealth plane that can be programmed with a mission, fly into enemy territory and attack radar stations without assistance from a human operator.
Taranis, an unmanned combat aircraft system (UCAS) advanced technology demonstrator.
Unlike Taranis, Northrop Grumman's X-47B, another unmanned plane, has not yet been equipped with a payload. However, the X-47B can already take off from and land on an aircraft carrier without being remote-controlled by an operator. Taking the long view, systems like the Taranis and X-47B are the tip of the iceberg when it comes to automated weapons that do not require direct human supervision. Who knows where we will be in fifty years.
The X-47B, part of the Unmanned Combat Air System Carrier Demonstration (UCAS-D) programme.
This is precisely what troubles the Campaign to Stop Killer Robots. But one problem with the campaign’s strategy is that it targets systems which currently do not exist. Thus the notion of a killer robot remains obscure.
The campaign characterises killer robots as systems that can select and engage targets without an operator, but that capability is not actually new – such systems already exist. Many missile defence systems installed on warships, for example, are automated. They are legal under current international law and it is not clear why they should be banned. Indeed, they considerably reduce the burden on human operators because they can assess potential threats far faster than humans can. In some cases, automated systems represent the next stage on from precision-guided weapons. And precision-guided weapons are surely preferable to the blunter tools of warfare.
The campaign probably has something different in mind. Maybe its call for a ban on autonomous weapons is a call for a ban on future weapons that could generate targeting decisions themselves. These would be weapons that could decide whether a particular object constitutes a legitimate military target in the first place. In this case, the machine itself would have to be capable of applying the legal criteria pertaining to targeting. And it is hard to see how a machine could do that.
Applying the law, especially insofar as the use of armed force is concerned, is not a matter of simple rule-following that could be programmed into a machine. Rather, international law contains a number of grey areas that require significant legal interpretation and judgement. What constitutes an enemy? What should you do if civilians are found to be in close range of your target? How do you balance risk against the goals of your mission? When would harm be excessive or disproportionate? It is hard to conceive how a machine could make these judgements; it is already hard enough for human beings to do so.
For these reasons, the Campaign to Stop Killer Robots is right that a ban would be appropriate, should anyone be seriously interested in developing such weapons. Given contemporary developments, however, it is doubtful that anyone is. Governments and armies are instead interested in developing autonomous systems, such as the Taranis aircraft, that are capable of executing a targeting decision made by a human being without further support from an operator.
There seems to be little enthusiasm in military and governmental circles for the development of weapons that can generate their own targeting decisions, even if this were technologically possible. Such systems would contravene a central principle of the armed forces: command and control. In any case, if it wants to make a real difference and get this issue onto the international agenda, the Campaign to Stop Killer Robots needs to be clear about which systems it wants to see banned, and why.
Source: By Alexander Leveringhaus, James Martin Fellow at University of Oxford, via The Conversation