Image: Sophia, a human-like robot (credit: Toxawww/Dreamstime)

Is unconscious bias behind our ‘uncanny valley’ fears?


Social robotics continues to evolve at an unprecedented speed, but a mistrust of human-looking robots could hinder the progress of this promising sector. What are the root causes of negative perceptions of humanoids?

The concept of the ‘uncanny valley’ was first introduced more than 50 years ago by robotics professor Masahiro Mori to describe the hypothesised relationship between an object’s human-likeness and people’s emotional response to it. In the decades since the concept emerged, robotics has made giant leaps in anthropomorphism and natural language processing, leading to the creation of social robots that are increasingly similar to humans in both appearance and intelligence.
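Mori never gave an equation for this relationship; he simply sketched a curve in which affinity rises with human-likeness, plunges sharply just short of full realism, then recovers. As a purely illustrative toy, with every number below invented for the sake of the sketch rather than taken from Mori, the shape can be mimicked as a rising trend minus a narrow dip:

    import math

    def affinity(human_likeness: float) -> float:
        """Invented toy response for human_likeness in [0, 1]."""
        rising = human_likeness                # general upward trend
        centre, width, depth = 0.8, 0.07, 1.5  # dip near, not at, full realism
        dip = depth * math.exp(-((human_likeness - centre) ** 2) / (2 * width ** 2))
        return rising - dip

    for x in (0.0, 0.2, 0.4, 0.6, 0.8, 0.9, 1.0):
        print(f"human-likeness {x:.1f} -> affinity {affinity(x):+.2f}")

The printed values climb steadily, turn sharply negative around 0.8, then recover towards full human-likeness. The dip itself, not any particular formula, is the whole of the hypothesis.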

As society at large prepares to overcome unprecedented challenges, from the burnout of healthcare workers to the need to care for an ageing global population, help from social robots could be a blessing. In particular, humanoid robots have proved very effective at engaging people socially, which makes them a great resource for scenarios where, on top of completing tasks efficiently, they must also provide companionship and understanding.

However, the myth that human-likeness makes robots innately unlikeable could hinder their deployment and cause us to miss out on the benefits they offer.

A quick literature search on the uncanny valley is enough to reveal the scarcity of data with which to assess this phenomenon scientifically. With an abundance of titles such as ‘The Uncanny Valley: Does it Exist?’ and ‘In Search of the Uncanny Valley’, we should wonder whether the supposed eeriness evoked by humanoids is an established fact, or an assumption based on the anxiety that all new technologies are, to a certain extent, bound to trigger.

Anecdotal evidence suggests that human-likeness might cause discomfort in some people, but efforts to test the uncanny valley experimentally have so far been fruitless.

The Godspeed questionnaire, developed by Bartneck, Kulić, Croft and Zoghbi in 2009, is one of the most frequently cited frameworks intended to provide measurable data on human-robot interactions. The test asks participants to rate robots on five dimensions: anthropomorphism, animacy, likeability, perceived intelligence and perceived safety.

However, despite the many variables assessed and the large number of studies that use the framework, the Godspeed questionnaire has ultimately proved insufficient to demonstrate the existence of the uncanny valley or to explain the mechanisms by which it arises and intensifies.
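To make concrete what such a framework measures, here is a minimal sketch of how Godspeed-style scores are conventionally aggregated. The item counts per dimension follow the published questionnaire, but the participant ratings are invented for illustration:

    from statistics import mean

    # One participant's item scores (1-5) per Godspeed dimension;
    # item counts match the published instrument, values are invented.
    ratings = {
        "anthropomorphism":       [4, 3, 4, 3, 4],
        "animacy":                [3, 4, 3, 3, 4, 3],
        "likeability":            [5, 4, 4, 5, 4],
        "perceived intelligence": [4, 4, 3, 4, 4],
        "perceived safety":       [3, 3, 4],
    }

    # Results are reported as the mean item score per dimension.
    for dimension, scores in ratings.items():
        print(f"{dimension:>22}: {mean(scores):.2f}")

Numbers like these describe how a robot is perceived, but, as the studies above show, they cannot by themselves confirm or refute the uncanny valley.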

More recent studies, recognising the futility of investigating the uncanny valley in a broad, general context, have tried to drill down into the root causes of the phenomenon in narrower fields. We now have data on likeability and acceptability in areas as diverse as zoomorphic robots, digital avatars and even technologically mediated voice.

These studies have provided some answers on the effects of human-likeness in the specific fields at hand, but offer little further insight into the phenomenon of the uncanny valley at large.

Interestingly, scholars of the so-called visual turn in the humanities and social sciences have often remarked that, as humans, we tend to project our fears and preconceptions onto the object of our observation. According to this interpretation, the gaze doesn’t just register factual evidence, but interprets it in light of our cultural biases.

This theory is very interesting in the context of human-robot interactions. As we are exposed to media portrayals of humanoid robots as dangerous, malicious and ready to overthrow humans, it’s natural for us to project these fears onto social robots regardless of their actual safety.

Just as unconscious bias can negatively impact our social interactions, bias against social robots can cause us to miss out on the benefits these robots can deliver. This is not because of a well-documented psychological trigger, as supporters of the uncanny valley theory would have us believe, but because so many of us have internalised the narrative of the killer robot.

Awakening Health, a joint venture between SingularityNET and Hanson Robotics, is changing the perception of social robots by introducing humanoid companions for healthcare, specifically designed to foster empathy through meaningful conversation and an ability to understand and react to human emotions.

Following the success of Sophia, the first robot to be granted citizenship (pictured at the top of this page), Awakening Health is now focusing on Grace, a robotic nurse assistant designed to support healthcare practitioners in elder care.

Awakening Health’s efforts are based on the idea that a human-like appearance can actually foster emotional connection and facilitate meaningful human-robot interactions. This core mission is supported by evidence from research projects like LovingAI, a ground-breaking study shedding new light on the possibility of robot-human emotional connection.

The initiative’s principal investigator, Dr Julia Mossbridge, posits that AI agents like Sophia and Grace can offer an experience of unconditional love to humans, addressing one of our deepest and most intimate needs – that of being listened to without fear of judgement.

More space should be granted to this aspect of AI research, which encourages the public to reflect on what happens when incredible discoveries are put at the service of society at large. As AI technology advances toward increasingly human-like and generally intelligent systems, we should focus on real applications that connect humans with AI and robots on a deep level while delivering practical value, rather than allowing nebulous unconscious fears to guide our relationships with technology.

Ben Goertzel is CEO of SingularityNET.

