[Image: Thrust levers on a Boeing 737-300]

Way clear for decision-making robots

The ethical robotics debate has been rumbling on for decades. But now a handful of UK universities believe they have a solution: allow robots to make decisions like humans.

On 1 November 2013 a US drone strike in the tribal areas of Pakistan led to the high-profile death of Taliban leader Hakimullah Mehsud. On a political level, the strike derailed prospective peace talks between the Pakistani government and the Taliban, but it also served to reinforce civilian concerns. Are we ready to face a world of machines that think for themselves?

Engineers in the field estimate that within 20 years we will face an onslaught of autonomous systems: home-based robots, driverless cars and unmanned aircraft, both unarmed and armed. Robotic systems have become the preferred solution for situations too arduous for human beings to operate in, whether because of the pace, repetition, environment, danger or complexity of the task.

At the core of all of these systems is the concept of autonomy. The prospect of taking the human entirely out of the loop is daunting; it seems inconceivable that a robotic system could be held morally responsible in the event of an industrial disaster, a financial meltdown or a human fatality. But it is an ethical dilemma that engineers, scientists and lawyers have struggled to address since autonomous machines left the realms of science fiction and became real.

The overarching problem with putting trust in autonomy is that it is difficult to predict how a system may act. The precise reason engineers favour autonomy – because they no longer need to pre-program system commands – is the very same reason they cannot trust it. It is autonomous, so with a human out of the loop, what will it decide to do, and why?

The success of Google's driverless car – which the company has been testing on public roads in those US states that allow it – has gone some way towards allaying public fears over autonomy and strengthening the sparse legislation that loosely surrounds the industry, but cars are only the tip of the autonomy iceberg.

Take areas such as assisted living or autonomous satellites. There is great potential for growth here, and yet a lack of legislation over the 'rules' of autonomy is hampering perception of the benefits. Nowhere is this more true than in the case of unmanned aerial vehicles (UAVs, or drones as they have come to be known). Armed unmanned vehicles represent a serious threat to civilians, and engineers are beginning to re-examine the process of decision-making these drones are required to undertake whilst in combat.

Insitu ScanEagle

In the case of UAVs, operators still retain complete control over high-level decisions. The ScanEagle is a small, low-cost, long-endurance unmanned aerial vehicle built by Insitu. It has 735,000 flight hours on the clock – a figure that is unusual in data terms because it represents not test hours but actual flight hours in combat. Despite the 'unmanned' in its name, the majority of the ScanEagle's crucial decisions are still operator-controlled.

'Currently, an operator pre-plans a UAV's mission, designating the area remit for the vehicle,' says Andrew Hayes, director of advanced development at Insitu. 'Most of the responsibility still lies with the operator; they plan the route in and back from a designated area. The vehicle is autonomous in the sense that it returns to base if there is a communication error.'
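That comms-error fallback is, in essence, a watchdog. A minimal sketch of the idea follows, in which the class name and the 30-second threshold are illustrative assumptions, not details of Insitu's actual flight software:

```python
# Sketch of a 'return to base on comms error' fail-safe.
# LinkWatchdog and LINK_TIMEOUT_S are hypothetical names/values.

import time

LINK_TIMEOUT_S = 30.0  # assumed threshold for declaring the link lost


class LinkWatchdog:
    def __init__(self) -> None:
        self.last_heard = time.monotonic()

    def heartbeat(self) -> None:
        # Called whenever a packet arrives from the ground station.
        self.last_heard = time.monotonic()

    def link_lost(self) -> bool:
        # True once the ground station has been silent for too long,
        # at which point the vehicle flies its pre-planned route home.
        return time.monotonic() - self.last_heard > LINK_TIMEOUT_S
```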

Insitu has integrated the ScanEagle with autopilot systems to develop a series of 'fail-safe' processes, to help ensure that system failure would not cause a total loss of communications. 'UAVs will become fully autonomous when sensor technology reaches a high level of accuracy,' says Hayes. 'Although it's not quite the same as an armed vehicle, a good example of successful autonomy is the Google car.

'The way legislation stands – and the present capability of the technology – means there must always be a degree of operator oversight, especially with armed vehicles. Error could be catastrophic. But what we hope will become the norm is that UAVs will be able to choose the least terrible of the options available, allowing a certain amount of fail-safe. The UAV will have the ability to ask itself, "are all the contingencies safe? Can I stop a failure?" If not, then the operator plans an abort.'

'The most important expense in defence is always manpower. If you can have a human simply monitoring rather than on the battlefield this reduces your risk of loss of manpower. If an algorithm can be developed that allows a vehicle five to ten options to choose from with the operator in the loop on a very basic level, that would be ideal.'
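The algorithm Hayes envisages might look something like the sketch below: a finite shortlist of options ranked by risk, with the operator confirming or vetoing the vehicle's proposal. Every name, option and risk score here is a hypothetical illustration, not Insitu's flight code:

```python
# 'Choose the least terrible option': keep only options with a safe
# contingency, rank them by risk, and fall back to an operator-planned
# abort when none qualifies. All values below are invented for illustration.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    risk: float             # estimated severity of outcome: 0 safe, 1 catastrophic
    contingency_safe: bool  # 'are all the contingencies safe?'


def choose_least_terrible(options: list[Option]) -> Option | None:
    viable = [o for o in options if o.contingency_safe]
    if not viable:
        return None  # 'Can I stop a failure?' -- no: the operator plans an abort
    return min(viable, key=lambda o: o.risk)


# The operator stays in the loop at a basic level, confirming or vetoing
# the proposal drawn from a shortlist of five to ten options.
shortlist = [
    Option("continue mission", risk=0.7, contingency_safe=False),
    Option("loiter and re-establish comms", risk=0.3, contingency_safe=True),
    Option("return to base", risk=0.1, contingency_safe=True),
]
choice = choose_least_terrible(shortlist)
print("abort: operator plans recovery" if choice is None
      else f"proposed: {choice.name}")
```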

Research efforts

Realising that ideal is the task of a handful of university laboratories across the UK. Those attempting to clarify the often opaque autonomous control at the heart of these systems include the University of Liverpool, the University of Bristol and the University of the West of England's Bristol Robotics Lab (BRL).

'We can see the stage that current configurable autonomy is at by looking at autopilot in planes,' says Michael Fisher, director of the University of Liverpool Centre for Autonomous Systems Technology. 'The software makes its own very low-level decisions, but it's hard to find out why it made a particular decision. Currently all high-level decisions are made by the operator. In our verification software the agent's decisions are made, in a sense, in the same way a human's are – in a binary manner. There is always a finite number of decisions available.'

The process that takes place is a series of behavioural analyses. The software models the behaviour of a pilot, relaying choices to an agent in charge of making decisions in the system. These decisions are framed in a 'real world' environment representing 'rules' or 'laws' that the agent must abide by when making a decision, for example, the rules of airspace in the case of the aviation industry.
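In outline, that framing amounts to filtering a finite set of candidate actions through rule predicates. The toy rules, actions and state variables below are assumptions made for illustration; the actual Liverpool tooling formally verifies agent programs rather than running Python:

```python
# Framing an agent's finite decision set with 'rules' it must abide by
# (toy stand-ins for the rules of the air). Illustrative values only.

RULES = [
    # No descending below 1,000 ft.
    lambda state, action: action != "descend" or state["altitude_ft"] > 1000,
    # No turning when there is traffic nearby.
    lambda state, action: action != "turn" or not state["traffic_nearby"],
]

ACTIONS = ["hold course", "climb", "descend", "turn"]  # finite decision set


def legal_actions(state: dict) -> list[str]:
    """Return only the actions permitted by every rule in this state."""
    return [a for a in ACTIONS if all(rule(state, a) for rule in RULES)]


state = {"altitude_ft": 800, "traffic_nearby": True}
print(legal_actions(state))  # ['hold course', 'climb'] -- the rest are ruled out
```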

'Take the case of an aircraft pilot: he or she might have any number of abstractions to choose from, and essentially implement the systems. The rational agent can make low-level decisions, for example, the engines will only function [above] 21°F – is it warm enough for the engines to work properly? The agent has its own beliefs and desires, its own intentions and short-term plans that it wishes to fulfil.

'UAVs currently have an operator to make human-level decisions – civilian ones are not allowed to fly out of sight. We deployed our agent-controlled UAV in the Shetlands after it had been verified according to the rules of the air. In theory, this agent could be applied to any situation where there are clear rules or laws that need to be followed.'
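Because the agent's decision set is finite, such verification can, in principle, sweep every modelled state and confirm that the agent never makes an illegal choice. The brute-force sketch below is an illustrative stand-in only: the policy, state space and safety property are invented for the example, and the group's real toolchain model-checks the agent program itself:

```python
# Toy stand-in for the verification step: enumerate every modelled state
# and check the agent's choice against a safety property. Feasible only
# because both the state space and the decision set are finite.

from itertools import product


def chosen_action(altitude_ft: int, comms_ok: bool) -> str:
    """A deliberately simple stand-in agent policy."""
    if not comms_ok:
        return "return to base"  # the fail-safe behaviour described earlier
    return "hold course" if altitude_ft >= 1000 else "climb"


def safe(altitude_ft: int, comms_ok: bool, action: str) -> bool:
    """Toy safety property: never descend at or below 1,000 ft."""
    return not (action == "descend" and altitude_ft <= 1000)


# Exhaustively check every modelled state.
all_states = product(range(0, 5000, 100), [True, False])
assert all(safe(alt, comms, chosen_action(alt, comms))
           for alt, comms in all_states)
print("policy verified over all modelled states")
```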

Currently, in the case of autopilot or a UAV, there is always a human at the helm. 'There has been legislation put in place by the Civil Aviation Authority to support UAVs, but very little other guidance for other autonomous systems,' says Matt Webster, research associate in the department of computer science at the University of Liverpool. 'An ISO standard, ISO 13482, was released earlier this year, but a regulatory body for all industries, including assisted living, is critical.'

Current legislation has become a moveable feast, and nowhere more so than in assisted living, where verification for healthcare robots – which may be dispensing crucial doses of medicine to patients – is not as transparent as it could be.

'We see verification software as a good application in a circumstance where a robot is assisting a person in looking after themselves,' says Webster. 'But roboticists in the assisted living arena need to know what specification to build their robots to.'

'Our aim is to engineer one step before that: to verify how a robot makes its choices before it takes action. We want to eliminate the problem of dealing with a situation in which a robot has already made a mistake.'
