Humanoid robot assistant BERT2 hands something to a human [Picture: Bristol Robotics Laboratory]

Humans favour error-prone robots - as long as they look sorry

Most people are more comfortable working with a robot that has human characteristics, even if it makes mistakes and takes longer to complete tasks, and they may even perceive ‘emotions’ that the machine does not possess, British researchers have found.

Scientists from University College London and the University of Bristol ran an experiment in which a humanoid assistive robot helped users to make an omelette, in order to investigate how it could recover trust after making a mistake. They also wanted to find out how a robot can best communicate its erroneous behaviour to a human working alongside it.

The robot was responsible for passing eggs, salt and oil, but was programmed to sometimes drop one of the polystyrene eggs before attempting to make amends. Users reacted well to an apology from the communicative robot, and were particularly receptive to its sad facial expression. The researchers say this is likely to have reassured them that it ‘knew’ it had made a mistake.

At the end of the interaction, the communicative robot was programmed to ask participants whether they would give it the job of kitchen assistant, but they could only answer yes or no and were unable to qualify their answers.

Some were reluctant to answer and most looked uncomfortable. One person was even under the impression that the robot looked sad when he said ‘no’, when it had not been programmed to appear so. Another complained of emotional blackmail and a third went as far as to lie to the robot.

Adriana Hamacher, who conceived the study as part of her MSc in human-computer interaction at UCL, says it suggests that, for the majority of users, a communicative, expressive robot is preferable to a more efficient, less error-prone one, even if it takes as much as 50 per cent longer to complete a task.

“We would suggest that, having seen it display human-like emotion when the egg dropped, many participants were now pre-conditioned to expect a similar reaction and therefore hesitated to say no; they were mindful of the possibility of a display of further human-like distress,” she said.

“Human-like attributes, such as regret, can be powerful tools in negating dissatisfaction but we must identify with care which specific traits we want to focus on and replicate,” she added. “If there are no ground rules then we may end up with robots with different personalities, just like the people designing them.”

The results of the research will be presented at the IEEE International Symposium on Robot and Human Interactive Communication taking place in New York City from 26 to 31 August.
