
Framework for robot ethics under development

Image credit: Dreamstime

A team of University of Hertfordshire researchers have been working on Empowerment, a concept intended to help robots protect and serve humans while also preserving their own safety, without requiring them to ‘understand’ the nuances of human ethics, interactions and language.

The question of how autonomous machines could be made to behave ethically and safely without human guidance was approached by science fiction author and academic Isaac Asimov in his 1942 short story ‘Runaround’. His Three Laws of Robotics state that a robot should not harm a human – either actively or passively – and should obey orders and protect its existence, as long as this does not cause harm.

Decades later, robots and other automated systems are becoming part of our everyday lives, from robot vacuum cleaners to self-driving cars. In some situations, these machines may be required to respond to complex ethical challenges, such as how to prioritise the safety of their owners.

“Public opinion seems to swing between enthusiasm for progress [in robotics] and downplaying any risks, to outright fear,” said Daniel Polani, professor of artificial intelligence and one of the researchers involved with the work.

According to Professor Polani and his colleagues, while Asimov’s three laws are a useful starting point for discussion, they are open to misinterpretation by machines, given the ambiguity of human language: for instance, the concept of harm is context-specific.

“We realised that we could use different perspectives to create good robot behaviour, broadly in keeping with Asimov’s laws,” said Dr Christoph Salge, who is also involved with the study.

For many years, the University of Hertfordshire team has been developing a framework called Empowerment to address the difficulty of programming ethics into a robot. Rather than attempting to make a machine understand ethics, the framework has the robot maximise the number of future options available to it, a quantity that can be expressed mathematically and programmed directly.

“Empowerment means being in a state where you have the greatest potential influence on the world you can perceive,” said Dr Salge. “So, for a simple robot, this might be getting safely back to its power station, and not getting stuck, which would limit its options for movement. For a more futuristic, human-like robot this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives.”
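The option-counting idea behind Empowerment can be illustrated with a toy sketch (this is an illustrative example, not the team’s implementation, and the grid layout and function names are assumptions). In a small deterministic gridworld, a robot’s empowerment over a fixed horizon can be measured as the number of distinct states it can reach, so a robot in an open area scores higher than one hemmed in by walls, matching Dr Salge’s point about not getting stuck.

```python
from itertools import product

# Toy 5x5 grid; '#' cells are walls the robot cannot enter.
GRID = [
    "..#..",
    "..#..",
    ".....",
    "..#..",
    "..#..",
]

# Five primitive actions: four moves plus staying put.
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1),
           "right": (0, 1), "stay": (0, 0)}


def step(state, action):
    """Apply one action; bumping into a wall or the edge leaves the robot in place."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
        return (nr, nc)
    return state


def empowerment(state, horizon):
    """Count distinct states reachable by any action sequence of the given length.

    In a deterministic world this count is a simple proxy for empowerment
    (its base-2 logarithm gives the value in bits).
    """
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return len(reachable)


# A robot in the open middle row has more reachable states than one
# boxed into the short corridor at the top right.
open_cell = empowerment((2, 2), 2)   # open area
corridor = empowerment((0, 3), 2)    # hemmed in by walls and the grid edge
assert open_cell > corridor
```

Under this measure, the robot would prefer positions (and, by extension, actions) that keep its reachable set large, which is what drives it back towards charge and away from dead ends without any explicit notion of “harm”.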

“In a dangerous situation, the robot would try to keep the human alive and free from injury […] we don’t want to be oppressively protected by robots to minimise any chance of harm, we want to live in a world where robots maintain our Empowerment.”
