
Asimov's Three Laws of Robotics

Disenchantment with a fear-fuelled interpretation of science prompted Isaac Asimov to author the Three Laws of Robotics. Do they still hold up today?

The Three Laws of Robotics made their debut in a story by Isaac Asimov, entitled ‘Runaround’, first published in the March 1942 issue of Astounding Science Fiction magazine, edited by John W Campbell. Asimov was disenchanted with stock narratives about monstrous robots being destroyed when they turn on their makers. “I resented the Faustian interpretation of science,” he wrote in the foreword to a 1964 anthology of his stories. “Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? My robots were machines designed by engineers, not pseudo-men created by blasphemers.”

According to Asimov, tools created for everyday human convenience tend to feature safety elements. Electrical wiring is insulated, pressure cookers have relief valves, and so on. Therefore, robots surely would be built with safeguards aimed at preventing injury to humans. Asimov coined the term ‘robotics’ and predicted ‘positronic brains’ made from platinum and iridium alloys. He was less interested in technical details than in how such brains might be programmed to make robots safe.

Debut of the Three Laws

‘Runaround’ is set in the year 2015. Donovan and Powell, agents for US Robots and Mechanical Men Inc, are reactivating a base on Mercury, the nearest planet to the Sun. Donovan sends a robot named Speedy to collect selenium from the Mercurian surface, in order to repair the cooling systems that protect Donovan and Powell against the planet’s ferocious temperatures.

Speedy fails to reach the selenium. Instead he circles his target zone. Donovan and Powell cannot communicate with him by radio because of intense solar radiation. Powell ticks off the Three Laws of Robotics, trying to work out Speedy’s problem. “One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

It becomes apparent that Speedy’s path is blocked by volcanic activity that endangers his existence, violating the Third Law. However, if he doesn’t collect the selenium, he will break the Second Law. Recent upgrades have enhanced the ‘weighting’ of the Third Law because Speedy’s a new and particularly expensive piece of kit. The Second and Third laws balance out, leaving Speedy unsure how to proceed.

Why doesn’t the First Law override the others? Donovan realises he forgot to tell Speedy why the selenium was so important. The robot is unaware of the trouble his masters are in. Powell risks venturing onto the planet’s surface in a spacesuit that can only survive the heat for a few minutes. By endangering himself within Speedy’s line of sight, he breaks the impasse. Speedy acts swiftly to obey the First Law and get Powell to safety. All is well after Speedy is issued with better instructions stressing the selenium’s importance.
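
For readers who like to see the mechanism spelled out, here is a minimal sketch, in Python, of how Speedy's stalemate might be modelled: each Law is treated as a weighted drive and the robot simply follows the strongest pull. The drive names, weights and stalemate threshold are invented for illustration; Asimov never puts numbers on his positronic potentials.

```python
# A purely illustrative model of the 'Runaround' stalemate, treating each
# Law as a weighted drive. All numbers here are invented assumptions.

def speedy_decision(first_law_urgency, order_strength, danger_level,
                    third_law_weight=1.0):
    """Return the action whose net pull wins, or circle on a stalemate."""
    approach = order_strength                   # Second Law: obey the order
    retreat = danger_level * third_law_weight   # Third Law: avoid destruction
    override = first_law_urgency                # First Law: trumps everything

    if override > max(approach, retreat):
        return "rescue the human"
    if abs(approach - retreat) < 0.1:           # drives cancel out
        return "circle the selenium pool"
    return "approach" if approach > retreat else "retreat"

# A casually given order and an ordinary robot: the order wins.
print(speedy_decision(first_law_urgency=0.0, order_strength=0.6, danger_level=0.4))

# The same casual order, but an expensive robot with a strengthened Third Law:
# the drives balance and Speedy circles.
print(speedy_decision(first_law_urgency=0.0, order_strength=0.6,
                      danger_level=0.3, third_law_weight=2.0))

# Powell steps onto the surface in view of Speedy: the First Law overrides both.
print(speedy_decision(first_law_urgency=1.0, order_strength=0.6,
                      danger_level=0.3, third_law_weight=2.0))
```

Run with the story's three situations, the toy model reproduces the plot: the order wins on its own, the strengthened Third Law produces the circling stalemate, and a visible threat to Powell overrides both.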

Law-abiding robots

Professor Alan Winfield recently conducted an experiment at Bristol Robotics Laboratory (BRL) to explore the foothills of what might one day become ‘ethical programming’. A small wheeled bug called the A-robot is instructed to move toward a goal area at the end of a rectangular board marked out like a table-top miniature football pitch. It is ‘told’ that a hole exists in front of the goal. It must not fall into the hole, and it must not allow any other robot to fall in either. When a second robot, the H-robot (playing the role of a proxy human), is introduced to the board, it does not know about the hole. It moves towards the goal, and blindly nears the danger zone. The A-robot, sensing the problem, interposes itself, preventing the H-robot from falling into the hole.

A more complex variation was introduced, as Winfield explains: “We put in another H-robot, acting as a second proxy human. Now the A-robot has a problem. Which one should it save? We didn’t introduce any rules to solve this dilemma, but left it up to the A-robot to work out the best strategy.”

The results were intriguing. Sometimes the A-robot managed to save both H-robots from the hole, but at other times “it notices one H-robot, starts toward it but then notices the other and changes its mind. And the time lost dithering means the A-robot can’t prevent either of the H-robots from falling into the hole.”
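
One rough way to picture that failure is as a loop in which the A-robot re-decides, every instant, which endangered robot to head for. The following sketch is not Winfield's control code; the countdowns, rescue costs and noisy urgency estimates are assumptions made purely to reproduce the dithering behaviour he describes.

```python
import random

# A toy re-creation of the dilemma described above, not Winfield's code.
# Each H-robot is a countdown of ticks before it reaches the hole; the
# A-robot needs 'rescue_cost' consecutive ticks of effort on one robot to
# block it, and turning away resets that progress.

def run_trial(countdowns, rescue_cost=4, noise=1.0, seed=None):
    rng = random.Random(seed)
    countdowns = list(countdowns)
    effort = [0] * len(countdowns)       # consecutive rescue ticks per robot
    saved, fallen = set(), set()
    while len(saved) + len(fallen) < len(countdowns):
        active = [i for i in range(len(countdowns))
                  if i not in saved and i not in fallen]
        # Noisy estimate of which robot is most urgent: the A-robot may keep
        # changing its mind, and every change of mind throws away progress.
        target = min(active,
                     key=lambda i: countdowns[i] + rng.uniform(-noise, noise))
        for i in active:
            effort[i] = effort[i] + 1 if i == target else 0
            countdowns[i] -= 1
            if effort[i] >= rescue_cost:
                saved.add(i)             # interposed in time
            elif countdowns[i] <= 0:
                fallen.add(i)            # reached the hole first
    return saved, fallen

print(run_trial([8], seed=1))      # one endangered robot: it commits and saves it
print(run_trial([8, 8], seed=1))   # two equally urgent: may save one, both or neither
```

With a single endangered robot the rescuer commits and succeeds; with two equally urgent ones the noisy re-targeting often wastes the time it needs to save either, which is the dithering Winfield observed.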

Some of the sessions turned out like practice runs for an Asimov scenario. Winfield was influenced by Asimov’s fiction as a teenager, “but I didn’t think much about the Three Laws until five years ago. At first I thought it was impossible to make an ethical robot. Now I’ve changed my mind.”

He is at pains to stress “the A-robot is compelled to behave the way it does because it is programmed to do so. I call it an ethical zombie. It is not a full moral agent who can be held responsible for its actions. But even ethical zombie robots would be more useful than ones with no ethical programming at all.”

Number one  The First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm

When we consider a robot’s threat to humanity, as referenced in Isaac Asimov’s First Law of Robotics, our thoughts quickly escalate from passive and unfortunate industrial accidents to entirely deliberate battlefield scenarios. Is such a law applicable? Or is the whole point of warfare to laugh in the face of ethical hand-wringing?

On 25 January 1979, at a Ford plant in Michigan, car worker Robert Williams gained the unfortunate distinction of becoming the first human to be killed by a robot. Two years later, a robot in a Kawasaki plant pushed worker Kenji Urada into a grinding machine with its hydraulic arm. Such accidents are rare, but a June 2008 report in the Economist noted that “over the years people have been crushed, hit on the head, welded and even had molten aluminium poured over them by robots.” Early robots, such as Unimate, were not equipped to halt their own movements in dangerous circumstances and were unable to sense a human presence within their working range.

Many modern robots are capable of making rudimentary decisions about safety. Motion sensors, soft padding, ‘pinch point’ elimination in the mechanics, camera vision, collision avoidance protocols and new spheres of behavioural programming open up the prospect of robots capable of interacting with humans. Primitive echoes of the First Law of Robotics are being put into practice on a wide scale. However, there are robots whose specific purpose is to break that law and cause harm to humans.

Robots on the battlefield

By coincidence, the acronym for Lethal Autonomous Weapons Systems is LAWS. Munitions that steer themselves to pre-programmed targets have existed since the Second World War. Modern weapons platforms such as the Predator and Reaper Unmanned Aerial Vehicles (UAVs) are supervised by remote control, but some military institutions want to create weapons that can conduct sorties from start to finish, and even select and fire on targets, without direct human supervision.

In November 2012, US Deputy Defence Secretary Ashton Carter signed a directive for the development of “autonomous or semi-autonomous functions in weapon systems”. A subsequent solicitation for proposals called for weapons that can attack hostile targets on their own, “within specified rules of engagement” as interpreted by their on-board computers. America is not alone in this ambition. Human intervention is taken as read for the time being, but no legal framework precludes the implementation of full autonomy.

Ethical robot warriors?

In 2012 the campaigning organisation Human Rights Watch collaborated with experts at Harvard Law School on a report, ‘Losing Humanity: The Case Against Killer Robots.’ Such devices are not new to the battlefield, but the report says “their expanding role encroaches upon traditional human responsibilities more than ever before. Distinguishing between a fearful civilian and a threatening enemy combatant requires a soldier to understand the intentions behind a human’s actions.” This is likely to be “beyond anything that machine perception could possibly do”.

Robotic weapons would not be restrained by human sentiments such as compassion or empathy. Many roboticists argue that an advanced artificial intelligence (AI) could imitate some aspects of our reasoning, but Human Rights Watch insists that “even with such compliance mechanisms, autonomous weapons would lack the qualities necessary to meet the rules of international humanitarian law. Their observance often requires human judgement.”

Human Rights Watch is not alone in its concerns, but some people think that autonomous weapons would be beneficial. Professor Ronald Arkin, head of the Mobile Robot Laboratory at the Georgia Institute of Technology, has studied the relationship between robots and international combat laws.

“One of the biggest problems confronted by soldiers is the ‘fog of war’,” he says. “They often make appalling mistakes, such as firing on their own colleagues, or inadvertently killing civilians, because they don’t know what’s going on around them.” Arkin believes that autonomous systems “will have access to battlefield information greater than a human soldier is capable of managing”. He thinks dispassionate machines will make fewer lethal mistakes.

Conflicts in Iraq, Afghanistan and elsewhere have also shown that even the best soldiers can behave questionably, and sometimes savagely, when fear, prejudice or vengeful anger interferes with their judgement. Arkin thinks that robots could do better. “They need not understand the underlying moral ideas. They need only apply them. We don’t expect robots to have their own beliefs about lethal force, but simply to apply those already agreed by most of humanity.”

Arkin and his team are working on what he calls an ‘ethical governor’ for autonomous weapons, with programming based on international law and proper military practice. A killer robot can be programmed to fire only if it “satisfies all ethical constraints and minimises collateral damage in relation to the military necessity of its target”, Arkin believes. He accepts that responsibility for a robot’s use in battle must be made legally clear, “but I do not agree that it is unfeasible to use them.” Current international arguments about the deployment of Predators and other drones against supposed terrorist bases suggest that the world is not ready for robot warfare, but it seems to be coming anyway.
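
What might such a governor look like in outline? The fragment below is only a sketch of the idea (a set of hard vetoes plus a proportionality check), not Arkin's actual architecture; the constraint names and the ratio threshold are illustrative assumptions.

```python
# A minimal sketch of the *idea* behind an 'ethical governor'. The fields
# and the proportionality ratio are invented for illustration; a real system
# would encode international humanitarian law in far richer detail.

from dataclasses import dataclass

@dataclass
class Engagement:
    target_is_combatant: bool        # discrimination requirement
    inside_rules_of_engagement: bool
    expected_collateral: float       # predicted harm to non-combatants (0..1)
    military_necessity: float        # value of the target (0..1)

def governor_permits(e: Engagement, max_ratio: float = 0.5) -> bool:
    """Fire only if every hard constraint holds and predicted collateral
    damage is small relative to military necessity (proportionality)."""
    if not (e.target_is_combatant and e.inside_rules_of_engagement):
        return False                 # hard constraints act as a veto
    if e.military_necessity == 0:
        return False
    return e.expected_collateral / e.military_necessity <= max_ratio

print(governor_permits(Engagement(True, True, 0.1, 0.9)))   # True
print(governor_permits(Engagement(True, True, 0.6, 0.7)))   # False: disproportionate
print(governor_permits(Engagement(False, True, 0.0, 1.0)))  # False: not a combatant
```

The design choice Arkin argues for is that the vetoes are absolute: no amount of military necessity can buy back a broken constraint, only reduce what counts as acceptable collateral damage.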

Breaking the First Law

Noel Sharkey, professor of AI and robotics at the University of Sheffield, chairs the International Committee for Robot Arms Control (ICRAC) founded in 2009 by a group of concerned AI and robotics experts, lawyers and arms control specialists who want an outright international ban on killer robots. These are his comments:
“Asimov’s Three Laws... were a plot device. For example, the First Law is impossible for a robot. If several humans are going to come to harm at the same time, the robot has to prioritise which ones to save, and ends up burning out its positronic brain. Many of Asimov’s short stories are about how the laws lead to strange behaviour.

“Obviously, the First Law precludes all robot weapons that would cause harm to any human. However, in the 1980s Asimov devised a fourth law, the Zeroth Law, which dealt with the problem of robots killing some humans to save other humans: ‘A robot may not allow humanity to come to harm’. It can bypass the First Law and kill a human to protect humanity as a whole. So it could kill someone threatening to unleash a weapon of mass destruction.

“This is derived from a type of philosophy called Utilitarianism, in which actions should cause happiness for the majority, or be directed towards ‘the greater good’. The problem comes when you have six people fighting against five. Should the robot intervene on behalf of the six?

“The single overriding moral issue is that a machine should never be delegated with the decision to kill a human. That’s a matter strictly for human responsibility and accountability, and it is far beyond the scope of what any AI system should be handling.”
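
Sharkey's six-against-five objection can be made concrete with a deliberately naive sketch: a robot that reads ‘the greater good’ as bare arithmetic will always side with the larger group, whatever the rights and wrongs of the situation. The function below is purely illustrative.

```python
# A deliberately naive head-count rule, illustrating the objection above:
# 'the greater good' reduced to arithmetic sides with whichever group is
# larger, regardless of who is in the right. Purely illustrative.

def zeroth_law_headcount(group_a: int, group_b: int) -> str:
    """Side with the majority, the crude utilitarian reading criticised here."""
    if group_a == group_b:
        return "no intervention"
    return "protect group A" if group_a > group_b else "protect group B"

print(zeroth_law_headcount(6, 5))  # 'protect group A': six against five
```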

Killer robots: the case in favour

  1. Self-preservation is not the foremost driver of a robot’s actions. It can be used in a self-sacrificing manner, instead of committing troops to dangerous situations.
  2. Robotic sensors are better equipped for battlefield observations than human senses, minimising the risk of lethal mistakes.
  3. Robots act without emotion. Lethal Autonomous Weapons Systems could reduce the likelihood of war crimes.
  4. Robots are not obsessed with ‘scenario fulfilment’, or the lust for glory exhibited by ambitious military leaders.
  5. The presence of robots with cameras and sensors on a battlefield would serve to discourage attempts at covert unethical behaviour on the ground.
  6. Autonomous weapons are precise, and can adapt to changing circumstances mid-flight to minimise civilian casualties.

Killer robots: the case against

  1. Autonomous weapons lower the threshold for war, increasing the likelihood of conflict, while the burden of war shifts from soldiers to civilians caught in crossfire.
  2. It is unclear who could be held responsible for war crimes resulting from autonomous weapons.
  3. Delegating to machines the decision of when to fire on a target eliminates the influence of empathy, an important check on killing.
  4. Robots would be unable to relate to humans and understand their intentions. They could not distinguish between targets and surrendering combatants.
  5. A dictatorial leader with autonomous weapons could suppress his population without fear of rebellion from his own soldiers.
  6. The decision to kill a human is too important to be delegated to robots. It will always be beyond the scope of any machine entity.

Number two  The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law

Would you be brave enough to tell a robot to make you smile? Imagine if it failed to interpret a seemingly simple command to crack a joke, but instead put electrodes on your face. And if such a basic command can go so fundamentally awry, what might happen with a more complex request? If the ban on harming humans rules out face-shocking, does it also rule out the economic hardship brought about by efficiency and automation?

In 1950, the computing pioneer Alan Turing suggested that if you have a conversation with machine intelligence, and you can’t tell the difference between how it responds and what you’d expect a person to say, then the machine is probably just as smart as you are. In June 2014, the University of Reading’s School of Systems Engineering organised a Turing Test at the Royal Society in London. Vladimir Veselov was part of a team who developed ‘Eugene Goostman’, an uncanny impersonation of a 13-year-old Ukrainian boy chatting by text. It fooled 11 of the 30 judges, at least for the short time it took to run the test.

Veselov has no illusions about the state of play. “We can’t really talk about a historic step in the development of artificial intelligence,” he cautions. “It’s the robot as a literary and psychological creation that passed the test.” The fact is that robots don’t need to sail through the Turing Test with flying colours. Already they are becoming ‘good enough’ at mimicry to substitute for humans in a wide range of tasks.

If current trends are anything to go by, the encroachment of computers, robots and ‘expert systems’ into the workplace puts the Second Law in conflict with the First. Robots are extremely good at complying with the orders they’re given, and it might be argued that this is ‘causing harm’ to humans. It’s not physical injury so much as economic displacement that’s doing the damage.

In 1933 the economist John Maynard Keynes predicted that technologies designed to take over our jobs “are outrunning the pace at which we can find new uses for human labour”. Exactly 80 years later, an interdisciplinary team at Oxford University compiled a report called ‘The Future of Employment: How Susceptible Are Jobs to Computerisation?’ It reached the startling conclusion that 47 per cent of all jobs in America are under threat.

Thousands of traditionally safe, middle-class job categories, such as receptionists, clerks, book-keepers and insurance assessors, fall under the positronic brain’s shadow. Over 44 per cent of firms that cut their wages bill after the financial crisis of 2008 did so by automation.

Be careful what you wish for

We all know how frustrating it is for credit rating decisions to be assessed by dumb computers. How will we feel when they actually are smart? ‘Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter’ has been signed by hundreds of AI experts, theorists and technology entrepreneurs. Signatories include Nick Bostrom, director of the Oxford Martin Programme on the Impacts of Future Technology at Oxford University, SpaceX chief Elon Musk, Apple co-founder Steve Wozniak and renowned physicist Stephen Hawking. It states: “Everything that civilisation has to offer is a product of human intelligence. We cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide.” The key recommendation is to “focus research not only on making AI more capable, but also on maximising the societal benefit. Our AI systems must do what we want them to do”.

Nick Bostrom invites us to consider the following scenario. Suppose that intelligent robots are instructed to make us smile. The people issuing these instructions imagine that human happiness will increase as a result. At first, the robots learn to tell jokes, and people smile. But Bostrom points out a dangerous flaw. Smart robots will be ‘optimisation systems’ designed to be as efficient as possible. “Robots charged with making us smile may conclude that it is more efficient to bundle us into cages and wire electrodes to the muscles of our faces.” Bostrom urges us to avoid such problems by making sure that “the goals of a robot are aligned with ours”. This, he believes, will be best achieved by teaching, rather than pre-programming. “The best way to ensure that AI will have a beneficial impact is to endow it with philanthropic values. Its top goal should be friendliness.”
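
Bostrom's smile example is, at heart, a warning about objective functions. The sketch below is a toy illustration only: the candidate actions, scores and the 'respects_people' constraint are invented, but they show how an optimiser that maximises the literal metric picks the monstrous option, while one constrained by the designers' real values does not.

```python
# A toy illustration of the 'optimisation systems' point: if the objective
# is simply 'maximise measured smiles', the most 'efficient' action wins
# even when it is monstrous. All actions and scores here are invented.

candidate_actions = {
    "tell jokes":               {"smiles": 0.6, "respects_people": True},
    "wire electrodes to faces": {"smiles": 1.0, "respects_people": False},
}

def naive_optimiser(actions):
    # Optimises the literal objective: the smile count, nothing else.
    return max(actions, key=lambda a: actions[a]["smiles"])

def aligned_optimiser(actions):
    # Same objective, but constrained by a value the designers actually hold.
    permissible = {a: v for a, v in actions.items() if v["respects_people"]}
    return max(permissible, key=lambda a: permissible[a]["smiles"])

print(naive_optimiser(candidate_actions))    # 'wire electrodes to faces'
print(aligned_optimiser(candidate_actions))  # 'tell jokes'
```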

One concern is that the world’s leading digital entrepreneurs are perceived as a technological elite with excessive power and influence over global affairs. Bostrom cautions that anyone who develops an AI might not make it “generically philanthropic, but could instead give it the more limited goal of serving only some small group”. We can conclude that today’s digital systems are pretty good at following Asimov’s Second Law of Robotics, but we must take care in deciding what orders to give them.

Roboticists rewriting the Three Laws

Computer scientist Robin Murphy of Texas A&M University and David Woods, professor of cognitive systems at Ohio State University, have revised Asimov’s Three Laws in an attempt to deal with ambiguities in the phrasing. Their paper ‘Beyond Asimov’ suggests:

  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
    No matter how smart a robot might be, it’s a product, not a person. Its manufacturers, and the people deploying it, must take responsibility for its actions.
  2. A robot must respond to humans as appropriate for their roles.
    Mischievous instructions could cause havoc without breaking any of the Three Laws. In one interview, Asimov tweaked his wording. “A robot must obey orders given it by qualified personnel.” Blind obedience is not desirable.
  3. A robot must have sufficient autonomy to protect its own existence as long as such protection provides smooth transfer of control to other agents consistent with the First and Second Laws.
    No matter what the circumstances, humans must always be able to take over if they feel the need, a point the sketch below illustrates.
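
The ‘smooth transfer of control’ in that third revision can be pictured as a control loop in which a human request for control always, and predictably, wins. The sketch below is an assumption-laden illustration, not anything from Murphy and Woods' paper: the modes and the decelerate-then-yield behaviour are invented.

```python
# A small sketch of 'smooth transfer of control'. The states and behaviours
# are assumptions for illustration only.

class Controller:
    def __init__(self):
        self.mode = "autonomous"

    def step(self, battery_low: bool, human_requests_control: bool) -> str:
        if human_requests_control:
            # Hand over predictably: slow down, report state, then yield.
            self.mode = "human"
            return "decelerate, report status, yield control"
        if self.mode == "human":
            return "await human commands"
        if battery_low:
            # Third-law-style self-protection, allowed only while autonomous.
            return "return to charging dock"
        return "continue task"

c = Controller()
print(c.step(battery_low=True, human_requests_control=False))  # protects itself
print(c.step(battery_low=True, human_requests_control=True))   # human takes over
print(c.step(battery_low=True, human_requests_control=False))  # stays under human control
```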

Number three  The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

Can a robot protect its own existence if it doesn’t know that it exists? Machines may appear to be human and even be capable of apparent social interactions, but can they ever become self-aware?

Robin Murphy and David Woods, authors of ‘Beyond Asimov’, are surprised by how little attention is paid to a robot’s self-preservation. “Because robots are expensive, you’d think designers would be motivated to incorporate some form of the Third Law into their products,” they say. “Many commercial robots lack the means to protect their owners’ investment.” Any application of the Third Law in today’s terms would be aimed at protecting humans from financial as well as bodily harm.

What will happen if robots do become sentient one day? That’s a big ‘if’. Certain aspects of consciousness, in particular the sensations (known as qualia) that make us feel alive, such as the smell of frying bacon, the taste of coffee, the redness of a sunset or, indeed, pain, might never be replicated in a machine. Robots in the future will certainly act as if they are human, but will they ‘be aware’ of what they are doing? Many experts in artificial intelligence believe that because an existing physical object, our brain, does exhibit self-awareness, it must be only a matter of time before we discover how this happens, and how to replicate it in a machine.

Others argue that the mysteries of consciousness will not be solved any time soon. European specialists have collaborated on a project called PHRIENDS (Physical human-robot interaction: dependability and safety). Coordinator Antonio Bicchi explains: “Typically the first safety implementation has been to segregate robots and keep them away from humans. This is no longer realistic, because there is a need for robots that can coexist with people and operate in the same space as we do.” However, this is not the same as sharing the world with other, conscious intelligences. “People talk a lot about sentient robots, but to me, these are something that just aren’t here yet. We will get closer to developing them, but the gap between robots and us will always remain.”

Does lack of self-awareness mean that the idea of a robot ‘protecting its own existence’ is meaningless? The most advanced robots are capable of social interactions with humans. We are even charmed by inexpensive robot toys that ‘talk’ to us, or imitate cute animal actions. Robots with realistic human features work as receptionists and entertainment hosts. There’s much talk of interactive sex dolls, too. When machines look as if they are alive, it’s easier for us to pretend they actually are.

Robots at the top end of the social research spectrum seem so alive that even the people who build them can’t help forming emotional attachments. At the Massachusetts Institute of Technology (MIT) Dr Cynthia Breazeal and her team developed Kismet, a robot head that could display facial expressions, as if showing emotions. It could ‘read’ human gestures, and responded by smiling, or looking downcast or surprised. When Kismet was deactivated at the end of the project, the team felt a sense of loss, despite knowing that it was not capable of ‘experiencing’ any of the emotions it imitated.

Kate Darling, an intellectual property researcher at the MIT Media Lab, is the author of a paper, ‘Extending Legal Rights to Social Robots’. “Humans form attachments to social robots that go well beyond our attachments to non-robotic objects,” she asserts. Most of us are not philosophers or computer experts, and “many people have trouble understanding that socially interactive robots don’t really have feelings”. That misunderstanding could cross the species divide with worrying consequences. “One reason that people could want to prevent the ‘abuse’ of robotic companions is the protection of societal values,” Darling says. “As it becomes increasingly difficult for children to fully grasp the difference between live pets and lifelike robots, we may want to teach them to act equally considerately towards both.”

Blurring the lines

Asimov’s novella ‘Bicentennial Man’, published in 1976 (the 200th anniversary of the American Declaration of Independence), describes the merging of biological and technological systems and the blurring of any significant distinctions between ‘human’ and ‘robot’.

Andrew, a robotic servant, wins the trust and affection of the family that owns him. He helps with their business affairs and is given money for clothes and biological enhancements to his body so that he can look more human. When Andrew is threatened by people who resent his efforts to become like them, his family help him present a legal challenge to be accepted as human. One by one, Andrew’s family companions die, but he survives because his robotic systems are essentially imperishable. Saddened by this, he undergoes an operation to degrade his positronic brain, thereby subjecting himself to a limited lifespan. He wins his case and dies aged 200, missing his family.

It’s a nice story, but the prospect of a sentimental robot wanting to become human is remote, and may well remain confined to science fiction. Of more urgent concern is the number of people who want to become more like robots. The ‘transhumanist’ movement is a disparate band of entrepreneurs and theorists working towards a change in the quality of human life by integrating us with machines. Our bodies could become stronger, fitter and more durable, while our brains would be augmented by powerful electronic memories and other aids, or so these people hope.

Transhumanism sounds like nonsense, until we remember that ‘wearable’ electronics are hitting the streets right now. Artificial limbs activated by nerve impulses are already available for those who need them. Damaged hearing and even impaired vision are already subject to machine intervention. By choice, we’ve modified our bodies throughout history, and there is every prospect of incorporating robotic elements to satisfy consumer demand.

Developing an interface between brain tissue and electronic memory systems may take some time, but could be possible. Of course the military establishment loves the idea of the ‘augmented soldier’ and has been working towards this idea for many years. A blurring of the distinction between technology and human biology is looming, for good or ill. We will become more like robots, while robots become more like us.

In 1983 the Bulgarian science fiction author Nikola Kesarovski suggested a final amendment to Asimov’s laws: ‘A robot must know it is a robot.’ That might turn out to be harder than it sounds.
