People are more likely to bond with interactive robots if they are programmed to be less-than-perfect, a new study has revealed.
The study by University of Lincoln researchers found that robots with cognitive biases similar to humans' tend to be preferred as companions over those whose behaviour is too perfect.
Cognitive biases are faults in judgement inherent to human psychology that influence people’s behaviour and personalities. An overly rational robot could thus easily hit a barrier when trying to cooperate with a less rational human counterpart.
“Our research explores how we can make a robot’s interactive behaviour more familiar to humans, by introducing imperfections such as judgemental mistakes, wrong assumptions, expressing tiredness or boredom, or getting overexcited,” explained Mriganka Biswas, a PhD researcher at the University of Lincoln who conducted the study together with his supervisor Dr John Murray.
“By developing these cognitive biases in the robots – and in turn making them as imperfect as humans – we have shown that flaws in their ‘characters’ help humans to understand, relate to and interact with the robots more easily.”
In their study, the Lincoln researchers abandoned the approach currently common in human-robot interaction and, instead of a set of carefully structured behaviours, introduced several cognitive biases, including misattribution of memory and the empathy gap.
Two robots were first programmed to behave perfectly rationally, and then in a more human, ‘imperfect’ way.
The first of the two robots, called Erwin (for Emotional Robot with Intelligent Network), made mistakes in remembering simple facts, while the second, named Keepon, expressed excessive happiness and sadness.
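Purely as an illustration of the memory-misattribution bias described above, a minimal sketch might look like the following. The class name, parameters, and error rate are hypothetical, not taken from the study.

```python
import random

# Hypothetical sketch: a "memory" that occasionally misattributes stored facts,
# loosely mimicking the misattribution bias Erwin was given in the study.
class BiasedMemory:
    def __init__(self, error_rate=0.2, seed=None):
        self.facts = {}              # key -> remembered value
        self.error_rate = error_rate # chance of a recall mistake
        self.rng = random.Random(seed)

    def remember(self, key, value):
        self.facts[key] = value

    def recall(self, key):
        """Return the stored fact, but sometimes swap in another stored
        value, simulating a judgemental mistake a human might make."""
        if key not in self.facts:
            return None
        if len(self.facts) > 1 and self.rng.random() < self.error_rate:
            other_keys = [k for k in self.facts if k != key]
            return self.facts[self.rng.choice(other_keys)]  # misattributed
        return self.facts[key]

memory = BiasedMemory(error_rate=0.3, seed=42)
memory.remember("user_name", "Alice")
memory.remember("favourite_colour", "blue")
print(memory.recall("user_name"))  # usually correct, occasionally confused
```

The point of the sketch is that the error is systematic but plausible: the robot confuses one remembered fact with another, rather than failing at random.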
“The cognitive biases we introduced led to a more humanlike interaction process,” said Biswas. “We monitored how the participants responded to the robots and overwhelmingly found that they paid attention for longer and actually enjoyed the fact that a robot could make common mistakes, forget facts and express more extreme emotions, just as humans can.”
The research revealed that almost all human participants enjoyed interacting more with the ‘imperfect’ robots.
The research could help improve various applications of humanoid robots that are currently being explored, including those in nursing or as companions for elderly people or children with autism.
“The human perception of robots is often affected by science fiction. However, there is a very real conflict between this perception of superior and distant robots and the aim of human-robot interaction researchers,” Biswas said.
“A companion robot needs to be friendly and have the ability to recognise users’ emotions and needs and act accordingly. Despite this, robots used in previous research have lacked human characteristics, so users cannot relate to them – how can we interact with something that is more perfect than we are?”
The study was presented at the International Conference on Intelligent Robots and Systems in Hamburg. In future, the researchers will focus on adding more human-like features to the robots to see whether such characteristics further enhance the positive interaction.