R2-D2 droid from Star Wars

Robot love: how to persuade humans to embrace machines

Man, meet machine. E&T investigates the tricks that make it easier for us humans to deal with robots that look like objects.

Imagine a typical reception room. As you sit down to read a magazine, you notice a cube-shaped footstool slowly but surely moving towards you on a curved path, stopping a little way in front of you. Hopping in place like an eager but polite butler, it seems to request your attention. With no response, it proceeds to wiggle to-and-fro, like a prospective dance partner shaking their hips. Lastly it sidles up to you, gently bumping your legs or feet. The message is clear: it wants you to use it.

The footstool, named the Mechanical Ottoman, is the creation of Stanford University's Interaction Design Research Lab at the Centre for Design Research (CDR), and an example of the laboratory's efforts to design non-humanoid robots that rely largely on non-verbal cues to foster social interaction with people. "The lack of anthropomorphic features is an important constraint in my work," says Wendy Ju, one of the robot's designers and executive director of the lab. "I also think this is the natural progression of the robotic technology as we start to see products in the space between industrial robotics and consumer devices."

Though they're no Chappie, the humanoid robots in existence today are nothing if not technological marvels. There are soccer-playing, joke-telling, cabaret-performing androids out in the world right now, and efforts to create robots that can teach our children, comfort the lonely and nurse the sick are rapidly progressing. Our knowledge of engineering and cognition can only increase by continuing to aim for more humanlike robotic designs. But there's no need for all our robots to look like us, and plenty of reasons why they shouldn't.

For one, we can get creeped out by them. The more humanlike a robot, the more we generally hold positive feelings towards it, up to a certain point. On the continuum between human and artifice, there's a gulf where seeing something so similar to but not quite human can be off-putting and disturbing to the observer. Whether it's the skin, the voice or the way it moves, we can tell the robot is attempting but failing to pass for human, and that gives us the heebie-jeebies. More scientifically, it means the robot has entered the Uncanny Valley, a term coined by roboticist Masahiro Mori. And if you're designing robots that routinely need to work for or with people, that's a major stumbling block.

Aside from keeping us up at night, humanlike designs are also capable of disappointing us. If and when we start regularly encountering humanoid robots, we'll expect a lot from them: a firm grasp of language, the ability to run and jump, to banter and answer complex questions on the fly. In other words, to behave like humans. And while robots have come a long way in the past few decades, most of those expectations can't be fulfilled with the technology we have available. That failure might put people off the very idea of using them at all. Luckily, people don't need robots to look humanlike in order to build lasting relationships and partnerships with them, but they will need robots that can decipher and use the unspoken rules of communication.

In addition to the Mechanical Ottoman (the name an allusion to the 'Mechanical Turk', a famed if fraudulent 18th-century chess-playing automaton), Ju and her colleagues have worked on other everyday objects like trashcans and desk drawers, creating robotic versions of both.

Using an approach they've dubbed 'embodied design improvisation', they worked from the ground up, simulating the myriad interactions people might have with one of their robots: first storyboarding possible scenarios, then creating prototypes, recruiting input from both artistic and scientific experts throughout to suggest changes in movement and design. "People are really good at communicating a lot of information with our bodies, our orientation, our movements, and we're good at reading these signals," says Ju.

In the case of the Ottoman, trained actors and improvisers were at one point asked to role-play with a version manoeuvred by assistants using wooden dowels. The team then used the lessons learned to create an ottoman that would approach volunteers and both offer and eventually relinquish its services as a footrest. One such lesson is its steady, curved movement towards a person; approaching someone too fast or too directly proved more unnerving than friendly.

Though the ottoman was remotely operated, the volunteers thought they were dealing with an autonomous robot, a clever manipulation known as the 'Wizard of Oz technique'. It's one of several cost-effective methods that allow researchers to understand how people might someday respond to such a robot. Some treated it like a pet, refusing to denigrate it with their feet, or declined its offer, but most (14 out of 20) eagerly placed their weary legs on top. "Every person understood from the get-go what the ottoman 'wanted', even if they had different levels of willingness to cooperate," Ju notes, which she believes to be a testament to their design process.

The same was true of CDR's mobile trashcan when it was sent out into the open, with most people needing little prompting to feed it litter. A young boy used trash to play a game with it, one man exclaimed in joy when he saw the robot wiggle after receiving trash, and another rushed to its rescue when it tipped over (mistakes generally endeared people to the robot and made them more likely to believe it was autonomous). Even without an instruction manual, people knew how to interact with the robot. And despite the lack of anything that could remotely be called human, they ascribed agency and desire to it. And for most, that only made them want to be more helpful towards it.

Ju's robots are intentionally tapping into a very strong human instinct. "We're hard-wired to make social attributions," says Jodi Forlizzi, Associate Professor in Design and Human-Computer Interaction at Carnegie Mellon University. "You could have a box with no design and if it oriented its gaze towards someone, they would attribute intentionality."

Our default setting is to see meaning behind an action, a movement, or pattern. The same mental quirk that makes us stare up at the clouds and swear we can see an elephant predisposes us to see a mind behind a roving trashcan or ottoman. In our own lives, we can see this at work with our pets, whose usually instinctual purrs and barks at home are more often seen as detailed commentaries on the clothing we wear, the movies we watch, and the friends we bring by. Dogs and cats are intelligent, no doubt, but they're probably not that intelligent.

When it comes to robots already in the field, there are widespread anecdotes of attachments forged between robot and man. Soldiers have held funerals for destroyed bomb disposal units; others have pleaded with a robot's manufacturer to restore the original to health rather than simply give them another.

To their handlers, these robots were more than a collection of parts; they held something irreplaceable, a robo-soul. As with Ju's robots, you would be hard-pressed to recognise anything humanlike about these machines.

But though we're certainly driven to anthropomorphise robots, it doesn't mean that human-robot interactions will always go smoothly. As Forlizzi and her team discovered, there are important considerations and adjustments that have to be made depending on how and where a robot is used.

In 2006, Forlizzi and a colleague trailed and interviewed hospital staff working with an autonomous robot called the TUG, produced by the manufacturing company Aethon. Designed to deliver supplies like linen, medicine and food between different hospital units, the TUG uses a pre-set map of the hospital and infrared and ultrasound sensors to navigate around obstacles, and it coordinates with a wireless network to open doors, operate lifts and receive requests from staff. It also stands 1.2m tall, is porcelain-coloured, and resembles a tool drawer on wheels.

Distinguishing between three main areas of the hospital, the medical unit (where surgery and cancer care occurred), the postpartum [postnatal] unit, and the support unit (where meals, linens and drugs were picked up), Forlizzi noticed a stark contrast in how the first two units treated the TUG.

While the staff working in the postpartum and support units loved the TUG for its attentiveness and time-saving capabilities, the medical staff grew to loathe it as it announced its presence or asked for supplies to be loaded onto it. Some staff even staged their own revolt, at times kicking or swearing at the hapless bot for seemingly getting in their way. This was despite the fact that the TUG performed the same actions in both units.

"Context was everything," says Forlizzi. While the postnatal staff dealt with relatively healthy patients who didn't require vigilant care, the medical staff often dealt with time-intensive emergencies, or in the case of nurses, grew especially close to their cancer patients. Nurses saw the TUG's interruptions when they checked in on them as a personal affront. The chaotic nature of the medical unit also left their floor more cluttered with equipment which slowed down the TUG and left staff even more annoyed or offended. Her later published study recommended designing TUGs that could be customised for their environment: for instance, TUGs that interrupted more subtly with visual flashes, or additional pre-recorded voices to better attach people to them.

Cars in control

Forlizzi's observations point to the need to design robots that can not only communicate their goals to people, but that can also anticipate people's own motivations. That's an essential skill needed for the next generation of automobiles, with companies like Google and Uber scrambling to create so-called smart, even self-driving, cars over the next few years.

That we hardly think of these machines as robots is a feather in the cap for the automobile industry, whose marketing prowess has acclimatised drivers to advances like cruise control and collision-avoidance systems without any of the panic that might accompany the launch of, say, a robotic nanny. As technology marches on, though, cars themselves will gradually begin to replace humans as the guiding force behind the wheel – if there's a wheel at all – and with that huge a paradigm shift these cars will have to earn acceptance and cooperation from the former human driver.

Google's prototype driverless cars are made to do exactly that, according to The Oatmeal's Matthew Inman, who wrote about his experience riding along with one last year. Everything from its cute, rounded exterior, invoking a Pixar creation, to its smooth driving style is meant to endear it to both its passengers and other human drivers, with whom it will likely share the road for years, considering how long it might take to convert all the cars on the road to self-driving ones. "By turning self-driving cars into adorable Skynet Marshmallow Bumper Bots, Google hopes to spiritually disarm other drivers," Inman wrote.

But what if a smart car gets into an accident? Would that sap all the goodwill accumulated by its cuteness? Northwestern University researcher Adam Waytz and colleagues published a study in 2014 looking at how passengers responded to different types of self-driving car, first while being driven around without incident, and then after an unavoidable accident. In a driving simulation, volunteers were struck while driving either a normally operated car, a silent self-driving car, or a self-driving car with its own name (Iris) and a female voice. The personality-filled car significantly bolstered the volunteers' trust in it, and they were less likely to blame it for the accident.

Ju has also studied people's interactions with smart cars, and is wary of too great a focus on humanising them. "I worry that if we make the car too likeable or friendly, people will hesitate to intervene when they have to because they don't want to hurt the car's feelings," she says. "So we need to be thoughtful about what the right type of interaction is, instead of going the easy route of making everything friendly and likeable."

During another simulation study, Ju and others paired drivers with a semi-autonomous car that would automatically brake. Drivers were most comfortable with the car when it explained – through voice – why it decided to stop ("obstacle ahead"), but their driving performance was best when the car explained both what action it was taking ("The car is braking") and why. In either scenario, it was important for the drivers to understand what was happening. "I don't think people will want to give up agency just because they don't have their hands on the steering wheel," Ju says. "There is some indication from our research that people feel responsible for how the vehicle behaves but also don't have any control, and this is an incredibly awkward and uncomfortable situation to be in."

Concerns like these will loom large as robots begin to move beyond the prototype stage and into our homes and driveways. And they're problems we'll have to solve if we want to realise the full potential these technologies offer. Medical robots do save time and effort, smart cars will save lives, and our feet would appreciate an attentive mobile helper, but only if people are willing to cooperate with them. We'll need to strike a balance between trustworthiness and effectiveness, between practicality and likeability.

Ju hopes to crowdsource further funding and recruit volunteers to take the place of her trained experts, in order to understand better how the average person will respond to her robots. And she hopes her prototypes inspire designers and the public to really think about what kinds of robots they'd like to see in the near future, and about how they themselves navigate the world. "I think a lot of the most interesting aspects about our work is how it shines a light on how rich and complicated our interactions are with one another before we even say a word," she says.

Image credits: The Picture Desk / Kobal / Corbis / Getty Images / Stanford University
