Real-life robot roomies
From taking part in space exploration to detonating bombs, robots seem to be everywhere. Will we find them in our living rooms next, asks E&T.
The interface between humans and machines is changing. From merely being tools that we manipulate with buttons and switches, our computers, PDAs and electronic toys will become socially responsive entities. The advent of machines with 'personality' began with a spectrum of modest innovations, from cars that welcome their owners with pretty lights and soothing sounds, through mobile phones that predict what their users want to text, to the seductive voices of satnavs guiding gullible drivers into muddy fields and axle-snapping ditches. Many devices today can learn the vocal and textual patterns of their owners, and the time is fast approaching when we will talk to machines almost as naturally as we talk to each other.
However, there's more to our conversational style than words alone. A vast amount of human signalling is non-verbal. The expressions on our faces and our physical gestures convey a great deal of information, especially about our emotional states. If machines are really going to take part in such complex exchanges, embodiment will be a key contributor. In other words, they'll have to look and behave less like machines and more like us. Welcome to the era of the 'socially interactive' robot, both as virtual screen interface and real-world physical object.
Your home: the Final Frontier
Dr Cynthia Breazeal directs the Personal Robots Group at the Massachusetts Institute of Technology (MIT) Media Lab, one of the world's leading investigators of socially interactive technology. She's been fascinated by robots ever since seeing 'Star Wars' at the age of ten.
An interest in space exploration early in her career led her initially to work on systems designed for alien terrains and dangerous environments: machines with wheels, or methods of locomotion inspired by the animal world; but it's the safe and familiar environment of our own homes that, Breazeal says, is the next and most challenging target for her research. "Robots have been into the deepest oceans. They've been to Mars. They're just starting to come into your home. You could think of your living room as their Final Frontier. It's really about bringing robots into our environment, rather than the other way around. Our world is constructed for our morphology: the fact that we walk on two legs, have two arms, and so forth."
When an underwater robot submarine or a planetary surface rover sets to work, it does so on the basis of close monitoring by a team of experts: as often as not, the people who designed and built it in the first place. For Breazeal and her colleagues, the priority is to build a new generation of robots for which "you don't have to read manuals before you interact with them". Traditionally, autonomous robots are designed to operate as independently as possible from us, often performing dangerous tasks that we are anxious to avoid, such as bomb disposal. Even when robots do enter the human workplace, it tends to be in the form of mail carts, vacuum cleaners, lawn mowers or other systems that we can easily ignore. But the time is coming when robots will have to share close proximity with us.
For instance, in Japan, where the population is evolving toward the elderly end of the spectrum, one of the main drivers of robotics research is the hope that they will assist infirm people in and out of beds and baths, and even serve as 'companions' in some sense. This may represent a huge potential market, but as Breazeal warns, "the elderly are often reticent about picking up a new technology. The robots that they come into contact with shouldn't be too confusing or esoteric".
Breazeal identifies a subtle difference between the typical technologist's ambitions for functionality in a robot and the emotional expectations of ordinary people. "Existing frameworks don't adequately consider what's involved in developing robots that can learn from people who lack technical expertise, yet who bring a lifetime of experience in learning socially with others," she says. "A guide dog for the blind performs an essential function for that person. But on the other hand, people love their dogs. It's not just a question of helping people with their physical abilities, but also to appreciate that we are social and emotional creatures. We want to experience pleasure in interacting with things."
That pleasurable experience will depend on robots that can exhibit social and emotional intelligence - or at least, enough of those qualities to convince us that they really can empathise with us. Raised eyebrows, swivelling of the eyeballs, nodding and tilting of the head and a reasonably simple set of arm and hand movements can convey what we recognise as 'emotional' signals, such as happiness, sadness, alertness, even boredom, should we wish a robot to express such a thing.
Kismet, a world-famous project of Breazeal's, initiated in 1997 for her doctoral thesis, could express itself using a surprisingly limited arsenal of gestures. Breazeal explains that, far from being a fully articulated human figure, Kismet was essentially just a mechanical face with exaggerated features, "a robotic cartoon. It had eyebrows, surgical tubing for lips so that it could smile and frown, pink ears used for expression and showing arousal".
We are some years - or perhaps many decades - away from creating an artificial intelligence that can genuinely possess self-awareness, let alone 'feel' real emotions. We are hard-pressed to understand how consciousness functions in our own minds, let alone recreate it in a computer. Even so, we can build a pretty convincing robotic model of what emotions should look like on the outside.
Researchers agree (more or less) that six basic emotions seem to have been hard-wired into us by evolution: anger, disgust, fear, joy, sorrow, and surprise. Some theorists believe that our subtlest feelings are combinations of these simpler elements. The verbal and physical expressions that we employ to express emotions are similarly based on combinations of basic gestural elements. Emotions are not mere whims; they have evolved to fulfil genuine biological requirements. Positive emotions reward us for behaviours that bring us closer to some goal that enhances our chances of survival, while negative emotions warn us of threatening situations and at the same time motivate us to change our circumstances.
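The idea that subtle feelings are combinations of basic emotions lends itself to a simple computational representation. Here is a minimal Python sketch under that assumption; the particular blend weights (and the choice of 'awe' as an example) are illustrative, not taken from any specific model:

```python
# The six basic emotions widely held to be hard-wired by evolution.
BASIC = ("anger", "disgust", "fear", "joy", "sorrow", "surprise")

def blend(**weights):
    """Represent a complex feeling as a weighted vector over the six
    basic emotions; any emotion not named gets a weight of zero."""
    return {emotion: weights.get(emotion, 0.0) for emotion in BASIC}

# Illustrative example: 'awe' sketched as mostly surprise with some fear.
awe = blend(surprise=0.7, fear=0.3)
dominant = max(awe, key=awe.get)
print(dominant)  # -> surprise
```

A robot's expression system could then map such a vector onto gestural elements - raised eyebrows, head tilts and so on - in proportion to each component.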
Emotional or 'affective' processing in a robot is based, essentially, on relevance weighting. How close, or how far, does the current environment or set of circumstances reach towards fulfilling the goal of equilibrium between external stimuli and internal responses, or 'homeostasis'? By analysing individual emotional responses in such a functional way, we bring them within reach of software design.
Kismet was 'motivated' to engage with people, to respond to toys, and occasionally to rest. Each motivation or 'drive' could occupy an under-stimulated state, an overwhelmed state, or homeostasis. For instance, at the under-stimulated extreme of the 'social' drive, an affective state akin to sadness would result. Kismet appeared lonely and predisposed to seek face-to-face contact with people. Of course, people would then respond to Kismet's sorrowful expression and try to cheer it up. This increased engagement would then 'satiate' the social drive and bring it towards homeostasis. Kismet would then appear playful and engaged.
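The three-regime drive described above can be sketched in a few lines of code. This is a minimal illustration in Python (Kismet itself was written in Lisp), and the thresholds, decay rate and state labels are assumptions chosen for clarity rather than values from the actual system:

```python
class Drive:
    """A motivational drive with an under-stimulated, homeostatic and
    overwhelmed regime, loosely modelled on Kismet's 'social' drive."""

    def __init__(self, low=0.3, high=0.7):
        self.level = 0.5                  # current stimulation, 0.0 .. 1.0
        self.low, self.high = low, high   # bounds of the homeostatic regime

    def update(self, stimulus, decay=0.05):
        # Stimulation decays over time and rises with engagement,
        # clamped to the range 0.0 .. 1.0.
        self.level = max(0.0, min(1.0, self.level - decay + stimulus))

    def affect(self):
        if self.level < self.low:
            return "sad"          # under-stimulated: seek interaction
        if self.level > self.high:
            return "overwhelmed"  # over-stimulated: withdraw and rest
        return "playful"          # homeostasis: engaged and content

social = Drive()
for _ in range(5):                # no interaction: the drive drifts low
    social.update(stimulus=0.0)
print(social.affect())            # -> sad
social.update(stimulus=0.4)       # a person engages with the robot
print(social.affect())            # -> playful
```

The feedback loop in the article falls out naturally: a 'sad' expression invites engagement, engagement raises the stimulation level, and the drive returns to homeostasis.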
Likewise, the emotion of 'fear' could be replicated with moderately simple logic. Stimuli that were closer to the robot, that moved faster than normal, or appeared suddenly larger in its field of view were regarded as more intense than stimuli that were further away, slower, or smaller. (An object waved aggressively in your face would seem just as threatening to you.) When overwhelmed by too much sudden movement or noise, Kismet's fear response would cause it to shy away.
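The logic of that fear response is simple enough to sketch directly. In this hedged Python illustration, the cue weights and the withdrawal threshold are arbitrary assumptions standing in for whatever tuning Kismet actually used; only the principle - closer, faster, larger stimuli score as more intense - comes from the description above:

```python
def threat_intensity(distance, speed, size, weights=(0.5, 0.3, 0.2)):
    """Score a stimulus from three normalised cues (each 0.0 .. 1.0).
    Closer (smaller distance), faster and larger stimuli score higher."""
    closeness = 1.0 - distance
    w_close, w_speed, w_size = weights
    return w_close * closeness + w_speed * speed + w_size * size

def respond(intensity, threshold=0.6):
    # Above the threshold, the fear response makes the robot shy away.
    return "withdraw" if intensity > threshold else "engage"

calm = threat_intensity(distance=0.9, speed=0.1, size=0.2)   # distant, slow
scary = threat_intensity(distance=0.1, speed=0.9, size=0.8)  # in-your-face
print(respond(calm))   # -> engage
print(respond(scary))  # -> withdraw
```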
Genuine human emotion
Although Kismet's human playmates were not explicitly told to treat it as a genuinely socially aware creature, they couldn't help themselves, and naturally did so; and, in the process, established a feedback loop that improved the robot's ability to learn, because so much of the vocal and gestural data that it picked up from human behaviour was genuinely emotional, and not faked or stilted. "I intentionally created Kismet to provoke the kind of interactions a human adult and a baby might have," says Breazeal. "Babies learn fast, because adults treat them as social creatures who can learn."
At the machine's 'heart' lay four Motorola 68332 microprocessors running multi-threaded Lisp (LISt Processing language), a favourite tool for artificial intelligence researchers. Kismet's visual and auditory processing was performed by nine networked PCs running QNX, a real-time Unix operating system, while speech synthesis and vocal 'affective intent recognition' (Kismet's ability to discern the emotional expressions of its interlocutors) required their own systems too. Complex it may have been, but this was still, essentially, a computing system, not a mind. Even so, Breazeal discovered that switching Kismet off disturbed some of her colleagues. "It means that I really captured something in this robot that was special. That kind of reaction was critical to the robot's design and purpose."
Robots in a social context
In the Vol 4 #2 issue of E&T, we looked at the tricky problem of consciousness in robots. Until recently, the assumption was that robots would have to possess a self-awareness broadly similar to ours before they could be of much use in a social context. Surprisingly, this turns out not to be the case, because we are willing to contribute much more to the human-machine relationship than the robot. Like a conjurer who finds it easy to direct her audiences' attention because they want to be led that way, a robot designer can exploit fairly simple social gestures to remarkable effect, because we are naturally spring-loaded to find those gestures meaningful.
Neither Kismet nor any other member of the current generation of socially interactive robots is self-aware, yet this may not matter, because, as Breazeal points out, "their purpose is to help us and please us, rather than the other way around". Their simulation of emotional responses mirrors those of biological entities in a way that seems satisfyingly plausible to us, even at today's limited level of sophistication.
For now, that's probably sufficient for the faint but discernible dawning of a gigantic new industry: one aimed at increasing the happiness that we derive from machines.