Vol 6, Issue 8

Drone warfare and the Geneva Convention

15 August 2011
By Sean Davies

The increased use of robots in warfare raises many moral, ethical and legal issues

Autonomous weapon systems are on the increase in modern warfare, and they bring with them serious moral, ethical and legal issues. Is the Geneva Convention about to be sidestepped?

The recent conflicts in Afghanistan, Iraq and Somalia have seen increased use of autonomous weapon systems (AWS), and the way these systems are used is changing. Drones once seen as passive reconnaissance and surveillance platforms are taking on strike roles as semi-autonomous weapons systems, raising serious moral, ethical and legal concerns.

'AWS clearly are game-changers in modern warfare, for better or worse,' explains Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. 'Just look at their impact on US interventions in Afghanistan, Pakistan, Libya, and now Somalia: it's unclear whether the US would be so involved if not for the availability of AWS. Not only do they enable strikes of greater precision, but they also remove humans from the battlefield, which means less risk to our side.

'Of course, the worry here is that less risk also means lower barriers in entering conflicts. Sometimes, this means we can do the right thing more quickly, but it could also mean we're rushing into conflicts without enough forethought and without exhausting non-violent options.'

Many would argue that this very sentiment – the lowering of the barrier of risk for entering into a conflict – is the greatest reason for limiting the use of AWS. 'This is a concern if it removes an important disincentive to engaging in war – that is, to save war as the last resort – given the lower political risk in AWS-enabled war,' Lin adds.

'Some are also concerned with the psychological effects of Predator unmanned aerial vehicles (UAVs). There is increased physical distance between the triggerman and the target, but the enhanced surveillance capabilities could also mean a greater intimacy between the two, as the triggerman watches a target for potentially hours and witnesses the devastating effects of his action, which sometimes include the violent deaths of innocent children.'

Related to this is a perceived loss of honour, or at least a redefinition of virtue, on the battlefield – and honour is important in maintaining a code of ethics for a professional military, as opposed to a band of marauders.

One fact is clear: AWS are definitely being given more strike roles, moving on from the more traditional duties of reconnaissance and surveillance, which only heightens the concerns of critics. 'The US military is careful to have a 'human in the loop', who makes the strike decision or at least can veto it, since we really can't trust AWS to always make the right decisions,' Lin adds.

'The exceptions to this cautious approach are when we use AWS as a last line of defence, such as patrolling the borders of Israel or South Korea, or on battleships to shoot down missiles. There, the risk-reward calculus seems to permit more relaxed use of AWS – not because we want to be aggressive, but because it is a desperate situation.'

Wendell Wallach, scholar and consultant at Yale University's Interdisciplinary Centre for Bioethics and co-author of 'Moral Machines', agrees that the use of AWS will increase, but is less sanguine about the amount of autonomy they will be handed.

'I envisage we are going to move towards greater autonomy because we always seem to give in to the strategic advantage, even if the long-term consequences could be very dangerous,' he says. 'Even as we get more and more autonomy in robotic systems, I don't think – even if these systems are going to get destroyed – we should allow them to initiate a kill order without a human directly activating that decision.'

Just war

AWS raise challenges to the long-held 'just war' theory, which is the basis for the international laws of armed conflict. These challenges span 'jus ad bellum', 'jus in bello' and 'jus post bellum' issues – that is, issues in going to war, in fighting a war, and after a war. Some critics worry that AWS give such an overwhelming military advantage to one side that war is no longer fair, and that this violates just-war theory. Others worry that AWS, today and in the future, simply cannot distinguish combatants from non-combatants, as required by the principle of discrimination built into international law.

'There could be an interesting tension between AWS and the Geneva and Hague Conventions,' Lin explains. 'For instance, perfidy or treacherous deceit – such as a soldier posing as a Red Cross worker – is prohibited by those Conventions, because it is an abuse of trust, which is already fragile between warring parties, but important in ending a fight and creating lasting peace.

'I'd like to explore the limits of this prohibition. Under one interpretation, it could ban the use of robotic insects or animals for surveillance or strikes, since those things are usually regarded as non-combatants, like the Red Cross worker, and attacks by such agents would be wholly unexpected and possibly treacherous.'

The International Committee of the Red Cross (ICRC) already deems weapons excessively lethal if they cause a field mortality of more than 25 per cent or a hospital mortality of more than 5 per cent – that is, no more than one in four soldiers hit by a weapon may die on the battlefield, and no more than one in 20 of the wounded who reach hospital may die of their injuries. An AWS with perfect targeting capabilities, always (or even mostly) killing what it shoots at, would therefore seem to violate this norm. 'This speaks to the principle of fairness in war, which may seem paradoxical or at least highly unintuitive to modern society,' Lin says.
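
To make those thresholds concrete, the following is a minimal sketch in Python of the two ratio checks described above; the function, its parameters and the casualty figures are hypothetical, for illustration only.

```python
# Illustrative only: the two ICRC mortality thresholds cited above, expressed
# as simple ratio checks. All names and casualty figures are hypothetical.

FIELD_LIMIT = 0.25     # more than 1 in 4 killed outright counts as excessive
HOSPITAL_LIMIT = 0.05  # more than 1 in 20 dying of wounds counts as excessive

def excessively_lethal(hit: int, killed_in_field: int, died_in_hospital: int) -> bool:
    """Return True if a weapon's casualty record exceeds either threshold."""
    survivors = hit - killed_in_field
    field_rate = killed_in_field / hit
    hospital_rate = died_in_hospital / survivors if survivors else 0.0
    return field_rate > FIELD_LIMIT or hospital_rate > HOSPITAL_LIMIT

# A near-perfect AWS: 90 of 100 soldiers hit are killed outright.
print(excessively_lethal(hit=100, killed_in_field=90, died_in_hospital=2))  # True
```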

Geneva Convention

The Geneva Conventions and their additional protocols are at the core of international humanitarian law, the body of international law that regulates the conduct of armed conflict and seeks to limit its effects. But when the Conventions were drafted back in 1949, AWS were still consigned to the pages of science fiction, and arguments still rage today about how they relate to the use of robotic systems and to robots themselves.

'It is another different type of weapon system, and it interestingly remains a pretty open question,' says Professor Ronald C Arkin of the Georgia Institute of Technology, author of the best-selling book 'Governing Lethal Behavior in Autonomous Robots'.

'The answer is quite simply that we don't know. The question needs to be asked about what the bounds and scope of these systems are, and whether or not they require additional regulation. These discussions need to be undertaken, and I believe they are being, although it is still very early. The ICRC are the folks that will potentially not create the rules, but will bring the relevant parties to the table to discuss what, if anything, needs to be done.'

Robot singularity

Aside from the ethics of using these systems, there is also the fear that robots quicker, more intelligent and more powerful than their human controllers could turn on their handlers. The mainstream media have been riddled with reports of a 'Terminator'-style Armageddon, but even critics of AWS such as Wallach dismiss this as fanciful. 'I am deeply sceptical that we would be seeing anywhere near that kind of technology in any foreseeable future,' Wallach says. 'That is a singularity, where we would have systems more intelligent than humanity. That is very much still in the realm of science fiction.

'Some of those technology thresholds may be technological ceilings, reaching the limit of how sophisticated autonomous vehicles could be. That said, I am seriously concerned that people will attribute abilities or faculties to these systems that they do not have.

'That, for me, is the big issue in all of this. Firstly, machines should not be put into the position of initiating kill orders and, secondly, there is a danger that we start to attribute more capabilities to these systems than they truly have.'

This raises the vexed issue of how much autonomy systems should be granted. 'I'm torn about this issue,' Lin admits. 'It would be great if no humans were at risk, but this seems implausible given human nature. So as long as AWS comply with existing ethical and legal norms, I'd be cautiously fine with it.'

Human in the loop

The current safety valve employed by militaries around the world when using AWS is the so-called 'human in the loop'. 'This is a very ambiguous phrase,' Arkin says. 'Ever since these systems started being discussed, people have talked about having a 'human in the loop', but it is a question of where. There will always be a 'human in the loop' in the sense that a human being will declare war and send assets out to conduct warfare.

'In other cases, a human would be giving an intelligent robotics system missions such as 'take that building using whatever force is necessary'. Some people view 'human in the loop' as a human making the decision to pull the trigger and release the weapon, with the autonomous system not having the capability to make that decision.'

Given that interpretation, there will always be a 'human in the loop' at some level, because humans are the ones conducting the missions and deploying these assets. The phrase means different things to different people.
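
One way to picture the distinction Arkin describes is as a question of where release authority sits in the engagement chain. The following is a minimal, hypothetical sketch in Python – none of the names, thresholds or behaviours come from any real system – in which the machine may recommend a strike but only a human operator may authorise it.

```python
# A minimal, hypothetical sketch of one reading of 'human in the loop':
# the machine may recommend an engagement, but only a human may authorise it.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Target:
    identifier: str
    threat_level: float  # notional score from 0.0 (benign) to 1.0 (hostile)

def machine_recommends(target: Target) -> bool:
    # The autonomous system only recommends; it cannot release a weapon.
    return target.threat_level > 0.8

def engage(target: Target, human_authorises: Callable[[Target], bool]) -> str:
    if not machine_recommends(target):
        return "no engagement recommended"
    if not human_authorises(target):
        return "engagement vetoed by operator"
    return f"weapon released against {target.identifier}"

# The operator's veto overrides a confident machine recommendation.
print(engage(Target("T-01", 0.95), human_authorises=lambda t: False))
```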

Autonomy

The level of autonomy granted to these systems continues to trouble military experts, and the debate will become even more complex with the advent of robot-swarm technology. 'It is a serious concern,' Wallach says. 'We do not have the technology, nor are we likely to have the technology in the future, that can discriminate between different targets. Perhaps the most serious issue is that these systems may be selecting their own targets. They really do not have the capacity for the kind of discrimination needed for this kind of action.'

Wallach is adamant that the limit on autonomous systems should be their capability of initiating a kill order. 'Quite simply, that should be the clear-cut limit,' he says. 'No system should ever be capable of initiating a kill order and, whatever action is being taken by the systems, there should be a human in direct command to take responsibility and to be culpable and liable for those actions.'

Speed of thought

We have seen what can happen when computer systems move too quickly for a human to intervene or override them. The widespread blackouts in North America in 2003 and the 'flash crash' of the US stock market in 2010 were both cases of automated systems going wrong. 'It is a trade-off that we need to consider,' Lin argues. 'To have greater efficiencies, control and awareness, we have to trust computers to do the job; yet we should not fully trust computers in critical jobs, since we have yet to create a perfect, error-free piece of complex software.

'To manage the risk, we would perhaps limit autonomous systems to specific applications – they would have 'functional morality' (as Wallach and Colin Allen call it in their recent book 'Moral Machines') needed for certain tasks, as opposed to a broader sense of ethics needed if they were to interact generally with society.'

But as Arkin points out, these systems can already outperform human beings in speed and the ability to see further and better. 'When you are up against an intelligent adversary, if they have the ability to destroy you before you have the ability to act then the slowness of the human communication and reasoning may mean the difference between survival or not,' he says. 'This fact pushes the decision-making towards the so-called tip of the spear and that will continue to happen as it has for quite some time now.'

The changing mode of modern warfare would also seem problematic for the use of AWS: in the current 'insurgent' type of warfare, as opposed to traditional battlefield engagements, it is more difficult to give clear orders.

'War is a chaotic business,' Lin says. 'A lack of situational awareness is made worse by the increasing tempo of war, but this is how autonomous systems can help us. Still, a conflict could arise: if a human orders a robot to storm a hideout and shoot the insurgents inside, but the robot detects mostly children and women, what should it do? It's important to follow orders, but with potentially greater situational awareness, a robot could have reason to refuse.'

The end game

Quite where AWS will lead us is difficult to predict, but it is likely to be altogether different from anything envisaged. 'Predictions that far into the future are usually wrong,' Lin concludes. 'We both underestimate (the power of the Internet) and overestimate (robot maids and flying cars) technology, which is natural since the odds of being right on the mark are slim to none. As Winston Churchill supposedly said, 'It is a mistake to try to look too far ahead. The chain of destiny can only be grasped one link at a time'.'

The fear is that the short-term strategic advantage being derived from turning over aspects of warfare to robotic systems is likely to be far outweighed by the long-term consequences.
