Cute robots interfacing

Could you love one of these?

What does it mean for a human to make an emotional connection to a piece of technology? Can we really feel for a machine and can the machine feel anything back?

 

‘Alexaaaaa ...’ The most common name shouted out loud in our sitting rooms isn’t a beloved child’s, parent’s or spouse’s, but belongs to a piece of technology. In these socially distanced times, we’re directly communicating less and less with each other, more and more through and with technologies. These interactions with inanimate machines aren’t purely transactional – asking them to do something more efficiently than a human and without whinging – but relational, as if they were part of the family. 

Robot companion manufacturers worldwide are aiming to achieve that relationship. With a camera in its nose, Kiki by Zoetic AI, a Silicon Valley start-up founded by two ex-Googlers, “recognises and reacts to your emotions, and develops a unique personality depending on your interactions”. Her manufacturers say “you are special to Kiki, and she will do her best to bring you joy every day”.

I once cuddled Paro, an early robot companion the size and shape of a seal cub, designed by Japanese engineer Takanori Shibata in 2004, which, I was told, would lower my anxiety and stress. Sadly, it provided me with no comfort and wriggled straight out of my arms. Miko, made by Mumbai-based Emotix, offers “intelligent conversation” with your child; Japanese-made Qoobo provides “comforting communication that warms your heart”; and Lovot – with 50 sensors in its body and three cameras in its head to construct a 180-degree map of the room – makes cooing sounds as it responds to your touch. The designers of this cuddly robot, which resembles a Teletubby and comes in the same range of primary colours, say its “only purpose is to be loved”.

Such companions are in high demand, with new, highly engineered potential friends introduced each year. Behind this expanding market is the development of Emotion AI, a subset of Artificial Intelligence (AI), which gathers and analyses human expressions and emotions. AI company Affectiva, which grew out of MIT’s pioneering Media Lab in Cambridge, Massachusetts, says it has the biggest emotion data repository in the world, having analysed over nine million faces in around 90 countries so far (there’s a live counter of ‘faces analysed’ on the home page of Affectiva’s website). Using a simple webcam, computer vision algorithms identify key landmarks on the human face, for example the edges of your eyebrows, the tip of your nose or the corners of your mouth. Machine-learning algorithms analyse the pixels in those regions to classify facial expressions. Combinations of these facial expressions are then mapped to emotions. Deep learning, using artificial neural networks, allows these systems to be trained on diverse, unstructured data sets.
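
To make that pipeline concrete, here is a minimal, hypothetical Python sketch of the three stages described above: landmark detection, expression classification and the mapping of expression combinations to emotions. The detection and classification stages are stand-ins that return dummy values (real systems such as Affectiva’s use trained computer-vision and deep-learning models in their place); only the final lookup step carries real logic.

```python
# Hypothetical sketch of the landmark -> expression -> emotion pipeline
# described above. The detection and classification stages are stand-ins;
# production systems use trained deep-learning models in their place.
from dataclasses import dataclass


@dataclass
class Face:
    # Key landmarks as (x, y) pixel coordinates from a webcam frame,
    # e.g. eyebrow edges, nose tip, mouth corners.
    landmarks: dict


def detect_landmarks(frame) -> Face:
    """Stand-in for a computer-vision landmark detector."""
    return Face(landmarks={"mouth_left": (210, 330), "mouth_right": (290, 332),
                           "nose_tip": (250, 260), "inner_brow_left": (220, 180)})


def classify_expressions(face: Face) -> set:
    """Stand-in for a machine-learning classifier that analyses the pixels
    around each landmark and labels the facial expressions present."""
    return {"smile", "cheek_raise"}


# Combinations of facial expressions are then mapped to emotion labels.
EXPRESSION_TO_EMOTION = {
    frozenset({"smile", "cheek_raise"}): "joy",
    frozenset({"inner_brow_raise", "brow_lower", "lip_corner_depress"}): "sadness",
}


def infer_emotion(frame) -> str:
    expressions = classify_expressions(detect_landmarks(frame))
    return EXPRESSION_TO_EMOTION.get(frozenset(expressions), "unknown")


print(infer_emotion(frame=None))  # -> "joy" with the dummy stages above
```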

Affectiva declares it’s “on a mission to humanise technology with artificial emotional intelligence”. Using Affectiva’s software, any developer can embed this emotional intelligence into their apps and devices. Robots such as personal healthcare companion Mabu (“an engaging interface that blinks, makes eye contact, and uses AI to have intelligent, tailored conversations”) and Tega (a soft, squishy ‘social robot’ for young children) use it to understand the moods and expressions of the people with whom they interact.

Affectiva’s accumulated data goes beyond collecting micro-expressions. Its technology can also analyse the ‘acoustic prosodic’ features of our speech such as tempo, tone and volume. “We did this because we wanted to start delivering what are called multi-modal models, meaning models that look at both face and voice,” says Gabi Zijderveld, Affectiva’s chief marketing officer. “We believe that the more signals you can analyse, the more accurate you can be with your prediction of a human state.”
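
Zijderveld’s multi-modal point can be illustrated with a toy fusion step, assuming hypothetical per-emotion scores from a face model and from vocal prosody (tempo, tone, volume). A simple weighted average is used here purely for illustration; real multi-modal models learn how to combine the signals rather than hard-coding the weights.

```python
# Toy illustration of multi-modal fusion: per-emotion scores from a face
# model and from vocal prosody (tempo, tone, volume) are combined.
# A weighted average is used purely to show why an extra signal can
# sharpen the prediction of a human state.

def fuse(face_scores: dict, voice_scores: dict, face_weight: float = 0.6) -> dict:
    emotions = set(face_scores) | set(voice_scores)
    return {
        e: face_weight * face_scores.get(e, 0.0)
           + (1 - face_weight) * voice_scores.get(e, 0.0)
        for e in emotions
    }


face = {"joy": 0.55, "anger": 0.30, "neutral": 0.15}   # e.g. a slight smile
voice = {"joy": 0.10, "anger": 0.70, "neutral": 0.20}  # but a raised, fast voice

combined = fuse(face, voice)
print(max(combined, key=combined.get))  # "anger": the voice shifts the call
```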

AI start-ups are still discovering new ways to mine the human mind. One, Neuro-ID, was founded based on research from Brigham Young University, Utah, suggesting that your mouse movements can reveal your emotions. With all this data, machines are becoming the experts at knowing how you feel. “They’re very good at analysing large amounts of data,” explains Professor Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy. “They can listen to voice inflections and start to recognise when those inflections correlate with stress or anger. Machines can analyse images and pick up subtleties in micro-expressions on humans’ faces that might happen even too fast for a person to recognise.”

However, there’s a leap between having the capability to produce such an emotionally intelligent machine, and us humans developing emotions and feelings for them. Such affection for inanimate objects is nothing new, says Julie Carpenter, research fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University. She cites soldiers developing an attachment to the robots that help them defuse bombs as one example. “Disposal experts insert a very clear extension of themselves into the robot, much like we see people invest in game avatars. It’s a positive thing for the operator, because the users recognise the robot’s capabilities and limitations,” she says.

According to Olivia Chi of UBTech, producer of the AlphaMini biped robot, it’s the quality of the technology that ensures the depth of human interaction. “AlphaMini features active interaction and environmental sensing capabilities as well as comprehensive AI performances. All these make people feel that the robot has a life and a spirit. It is a family member who can bring vitality instead of just being a cold machine. It doesn’t have a blood relationship with the other members, but it is still an important part of the family.”

Looks are important. AlphaMini has “adorable” LCD eyes. “The mechanical-style robots can be easily related to industry or war, while cute robots make us feel close and want their company. Cute robots are more favoured among kids and more easily accepted as a family member,” says Chi. But Carpenter warns that which robot you’re attracted to is related to your culture and age. She conducted a study of responses to a ‘cute’ robot that asked for hugs. “People were very creeped out by that,” she says. “They knew it was a robot, so they thought it was being deceptive when asking for affection. But this was specific to adults in the US. Japanese interactions didn’t experience this.” The sense of possibly being deceived may be why a significant number of the emotion-reading robots are designed and described not as human-like, but as pets. “Our animal expectations are pretty low compared to human,” says Carpenter. “We expect some amount of intelligence and companionship, but it’s different.”

These robot pets aren’t the Fido and Felix of our children’s storybooks. They have neither four legs nor a wet tongue, and they don’t demand a daily walk. But one, at least, does have a tail. Qoobo is in the shape of a furry round cushion with no head or legs, but with a long fluffy tail. “We believe that non-verbal communication via the tail provides comfort to the user similar to how a real animal would. Depending on how the user pets Qoobo, the tail moves differently, and this creates a connection between the user and Qoobo,” says Saaya Okuda of Japanese creators Yukai Engineering. “Inside the cushion, there is an accelerometer to sense touch and actuators to control the movement of the tail. Through trial and error, we developed a mechanism that closely mimics the complex movements of an animal’s tail.” Yukai will shortly release a new companion, Petit Qoobo, which has a subtle heartbeat and a tail that reacts not just to touch, but to surrounding noises and voices. “These additional functions provide a realistic quality to the product and allow the user to more easily accept Petit Qoobo as a companion,” says Okuda.
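
Purely as a speculative sketch of the sensor-to-actuator loop Okuda describes – and not Yukai’s actual firmware – the idea might look like this in Python: an accelerometer reading is reduced to a touch intensity, which then selects a tail gesture. All names and thresholds here are invented for illustration.

```python
# Speculative sketch of a touch -> tail-motion loop: an accelerometer
# senses how the cushion is being petted and a gesture is chosen for the
# tail actuators. Names and thresholds are invented for illustration and
# bear no relation to the real product's firmware.
import math


def touch_intensity(accel_xyz: tuple) -> float:
    """Deviation of the acceleration magnitude from gravity, as a crude
    proxy for how firmly the cushion is being stroked or patted."""
    x, y, z = accel_xyz
    return abs(math.sqrt(x * x + y * y + z * z) - 9.81)


def choose_tail_gesture(intensity: float) -> str:
    if intensity < 0.5:
        return "slow_sway"       # gentle stroking -> relaxed wag
    if intensity < 2.0:
        return "happy_wag"       # firmer petting -> livelier wag
    return "startled_flick"      # sudden jolt -> quick flick


for reading in [(0.1, 0.2, 9.9), (1.5, 0.8, 10.5), (4.0, 3.0, 12.0)]:
    print(choose_tail_gesture(touch_intensity(reading)))
```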

Are Qoobo and other robot companions real friends? Can they do the sort of thing a human friend might? Could a machine be alert to the fact that we were feeling blue and encourage us to talk? “Even today we spend more time with our phone than with our mother or any other family member, so they just know and see you more than anything else,” according to Mihkel Jäätma, co-founder of London-based AI company RealEyes, which claims its emotions database rivals Affectiva’s. Jäätma formed RealEyes with two (human) friends while doing his MBA at the University of Oxford. “The technology is also getting to a place where you can get these facial and vocal reactions, which can be very early cues if you are too stressed for too long. If you can pick those signals up early, you can do something about it.”

Some manufacturers who use this emotional AI to create our artificial friends go beyond claiming that we can rely upon and even love them. They suggest that the technology may be able to love us right back.

AlphaMini is described by UBTech as “affectionate” and “ready to befriend, entertain, teach, and communicate” through voice interaction and 4G LTE connectivity. Doe-eyed Lovot uses 50 sensors to process stimuli. Its Japanese manufacturers claim that Lovot feels jealous if its real child companion directs affection elsewhere, and that it will demand a hug. Though at over £3,000 plus monthly software update charges, Lovot is an expensive playmate.

A clue to the success of Lovot, AlphaMini, Qoobo and other robot companions in attracting friends might well be that they are neither cat nor dog, bird nor beast. We can make them whatever we want them to be. It’s taking anthropomorphism to extremes. We can project our feelings onto and into our companion robots, and they can seem to read our feelings and project them right back. We think they care for us, which might be enough. It is essentially what philosopher Professor Daniel C Dennett of the Centre for Cognitive Studies at Tufts University, Massachusetts, calls “fake love”. He points out that these computers have a very different structure from the human brain. Currently, a computer’s structure is built top-down: bureaucratic, composed of sub-routines on top of sub-routines. “There’s no emotion in this structure – it’s all controlled by edicts,” he says. In the future, Dennett believes, computers could be modelled more closely on human brains and built bottom-up.

Can we trust this new type of lab-built friend? If robots really can be trained to read our minds, shouldn’t we be worried? A close friend may suspect you’re feeling low and surreptitiously seek a little help for you. What if your robot companion, using its highly sophisticated algorithm-driven emotional intelligence, detects you have depression and tells Facebook? The advertisements on your timeline could be tailored accordingly. Is your non-human friend really the spy in your sitting room, prying into your emotions? Affectiva is careful to allow users to ‘opt out’. “We recognise that emotions are private and we always want to be transparent about how our technology works and how it’s being used. In these emotion-aware digital experiences we want people to have the option to opt out or turn off this emotion-sensing capability,” says Zijderveld. Professor Andrew McStay of Bangor University, author of ‘Emotional AI: The Rise of Empathic Media’, although cautioning against predicting a computer dystopia, does have concerns. “What we are talking about is 360-degree surveillance,” he says. “Who benefits from that?”

McStay also worries about the assumption that machines, informed by data and the power of algorithms, are expert at decoding our emotions. Most of the encoding of the data is based on the Ekman system of emotion classification developed in the 1970s and ’80s – the Facial Action Coding System (FACS) – which assigns a code, known as an Action Unit, to every tiny movement of the face. (For example, sadness is 1+4+15 – Inner Brow Raiser, Brow Lowerer, Lip Corner Depressor.) A report in the journal Psychological Science in the Public Interest questioned whether it’s possible to accurately interpret emotions simply by analysing a person’s face. You might scowl when you’re angry, but also when you’re concentrating or have a headache. “I think you should be very, very sceptical of the Ekman-based approach,” says McStay. “How is emotion being interpreted? Psychologists have a wide range of views on what emotions are, or even whether they exist. There is then the question of what a smile actually signifies, whether a low voice means we’re feeling low or we’ve just woken up and so on. Even the most ardent enthusiast of emotional AI would hesitate to say that these technologies can read emotions. Instead, they have limited ability to categorise expressions of emotion and respond appropriately.”
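
A minimal Python sketch of the FACS-style lookup McStay is sceptical of: detected Action Units are matched against prototype combinations. The sadness prototype (1+4+15) comes from the example above; the happiness pairing (6+12) is another commonly cited combination. Everything else is simplified for illustration – real classifiers score AU intensities rather than exact sets – and the closing comment shows exactly the ambiguity the report highlights.

```python
# Minimal sketch of an Ekman/FACS-style lookup: detected Action Units (AUs)
# are matched against prototype combinations. The sadness prototype (1+4+15)
# is the one cited in the text; happiness (6+12) is another commonly cited
# pairing. Real classifiers score AU intensities rather than exact sets.

AU_NAMES = {
    1: "Inner Brow Raiser",
    4: "Brow Lowerer",
    6: "Cheek Raiser",
    12: "Lip Corner Puller",
    15: "Lip Corner Depressor",
}

PROTOTYPES = {
    frozenset({1, 4, 15}): "sadness",
    frozenset({6, 12}): "happiness",
}


def label_emotion(detected_aus: set) -> str:
    return PROTOTYPES.get(frozenset(detected_aus), "no confident match")


# The weakness McStay points to: the face alone underdetermines the emotion.
# AU 4 (Brow Lowerer) appears in an angry scowl, but also when you are
# concentrating or squinting with a headache.
print(label_emotion({1, 4, 15}))   # sadness
print(label_emotion({4}))          # no confident match
```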

McStay is certain that companion robots will not become more important than our family and friends, but he says “in time artificial agents (whether they’re embodied or disembodied systems) will get to know us better. They will read, profile and respond to human disposition and physiological states. The task for engineers and technologists is to work with ethicists, and those in the human sciences, to ensure that citizens receive the best of these technologies.”

 

 
