Rebranding the robot
Robotics scientists are counteracting the perceived negative fictional image of robots by re-imagining them as a new infant species. E&T investigates.
The robot was a product of fiction. A literary invention of Karel Capek and his brother Josef, robots first appeared as characters in the play 'R.U.R.' (Rossum's Universal Robots) - human in appearance but without emotions.
The term robot is taken from the Czech word robota, meaning 'compulsory labour'. However, the mechanical robot we recognise today was not the one that Capek premièred in his 1921 play - it was other artists in the 1920s and 1930s who turned the robot into a machine. Artists, industrialists and revolutionaries alike worshipped machines, seeing the machine as the means to achieve a new order of economic abundance.
It is the automata of the 17th and 18th centuries that artificial intelligence (AI) researchers see as the ancestors of their discipline, with Capek and the avant-garde artists of the period unacknowledged by the AI robotics enterprise. Yet the artists' re-representations had an unintended consequence. The obsession with machines and their qualities in the early part of the 20th century, and the reconstitution of the robot as a machine, made it possible for later industrialists and scientists to imagine not merely machines that resembled humans, as with automata, but machines that had the capacity for humanity and inhumanity.
As a social anthropologist conducting fieldwork in robotics labs at MIT, I was surprised by the constant references to robotic fictions by researchers. Around robotic labs it was not uncommon to see journalists, documentary makers and even film staff. Perhaps because the robot was created in fiction, it can never quite be separated from fiction, regardless of what form the robot appears to take.
Science and technology have an interesting relationship with fiction. In fact, there is a genre dedicated to it: science fiction. The relations between AI and popular culture are well documented.
In the 1960s, scientist and author Arthur C Clarke approached Marvin Minsky at the recently established Artificial Intelligence Lab at MIT. When Clarke asked Minsky about his predictions for intelligent machines, Minsky's vision was re-expressed on screen as HAL 9000, the disembodied 'supercomputer' of '2001: A Space Odyssey' (1968). HAL 9000 represented the pinnacle of supreme artificial intelligence - albeit one with a psychotic streak, as the machine conspires to kill the crew.
Like the robot itself, the term robotics was coined not by an engineer or industrialist but by author Isaac Asimov, who first used it in his 1942 short story 'Runaround', printed in 'Astounding Science Fiction'. In this story he also set out the three laws of robotics:
- A robot must not injure a human being or, through inaction, allow a human being to come to harm;
- A robot must obey the orders given by human beings except where those orders would conflict with the First Law;
- A robot must protect its own existence, except where such protection would conflict with the First or Second Law.
Scientists and government officials in South Korea have even used Asimov's imagined rules in drawing up a Robot Ethics Charter, aimed at preventing robots from abusing humans - and vice versa.
Science and fiction
Robot films are undoubtedly perceived as cool and interesting, and this can be to the benefit of roboticists, but there is another, darker side to the relationship between the science and the fiction. In 2003 alone, Hollywood released three major films featuring robots and intelligent machines: 'The Matrix Reloaded' and 'The Matrix Revolutions', the two sequels to 'The Matrix', which depicts a future dystopia where machines rule over humans and use them as batteries; and 'Terminator 3: Rise of the Machines', which ends in the nuclear annihilation of humanity. These continued a long line of films in which humans are in some way threatened by machines and robots.
Robotic scientists can never quite escape the presentation of their objects as threats to humans in some form. These issues may not necessarily be themes that AI roboticists directly write about in their academic journals, when describing their robots' degrees of freedom or series elastic actuators, but they are there, providing a score for the meanings they attach to and generate about their technological artefacts.
With many likely to think of robots as destroyers of humanity, how do researchers overcome this cultural imagery? By a kind of re-branding of their own - by altering cultural visions of robots from destroyers to children.
"We model our robots on children," says one researcher. "We try to imagine the robot-human relationship as similar to that of child and carer," explains another. Robotic scientists use models and theories of child development when thinking about, planning and implementing research ideas in robots.
Robotic scientists were keen to explain that Hollywood's negative depictions of robots shape cultural perception of them and the representation of robots as child-like is one strategy for reshaping the meanings assigned to robots. In the labs at MIT it is not uncommon to find children's toys and books surrounding robots. When robotic scientists want to test the visual or physical systems of their robots, they often use toys when interacting with them.
Scaffolding for a robotic upbringing
There are some technical benefits to this; for example, toys are often brightly coloured and salient for visual systems to recognise. There are other terms too that scientists use to restructure the relations between humans and machines. 'Scaffolding', a technical term used by sociable roboticists to describe the process of assisting the robot's development, is a practice in which relations between robot and human mimic that of adult and child. A person is meant to re-envisage his or her relationship with a machine in assisting it to grow and develop like a human child.
Making the robot appear child-like also alters human perceptions of it. The use and re-branding of robots as children is more than a technical matter; it is deployed in the cultural fight against negative imagery of robotics. Branding is a term more associated with business than with robotics labs, but brands work by convincing consumers that products are something they are not and cannot reasonably be (such as the equation between an expensive car and sexual attractiveness).
Roboticists, too, re-brand robots in peculiar child-like ways. Movies and literature present robots as super-abled machines, but current technology falls far short of the fantasies projected in fiction. The automatic response of many visitors to the lab was to imagine the robots to be more advanced than they actually were. This is yet another aspect of the uncomfortable relationship between robotic science and robotic fiction.
In fiction, robots are super-advanced and have extraordinary powers. Robots in labs, however, are simple, break down frequently, and are usually designed to do one or two specific things. Roboticists have to constantly steer visitors' perceptions of their machines along new lines. Robotic scientists compensate for this mismatch between what their robots can do and what people think they ought to do - they attempt to persuade visitors by presenting re-branded infantile meanings.
One such robot modelled on child-inspired philosophies is Radius, a sociable robot designed to interact with people in public spaces around MIT. Radius is a disembodied head that is moved from place to place around MIT on a non-motorised mobile vehicle. It has a grey face, large eyes, eyebrows and a mouth. Radius's large eyes are designed to give it a sympathetic and child-like façade. Its creator wants passers-by to interact and play with Radius using the toys that are scattered near it. And many do as they pass, cooing at and attempting to interact with the creature as they might with a young child or animal.
When near the robots, I too interact with them, talking to them or playing with them using toys to get their attention and make them move their facial features. In these ways robotic scientists begin to culturally reconstruct visitors' perceptions of their robotic machines.
As well as getting people to lower their expectations of robots' capabilities (and, in effect, getting the human interlocutors to do all the work, as they might when interacting with a child), there is another, more insidious effect created by the re-branding of robots as children: humans are encouraged to apply familial terms and relations to robots.
The infantilisation of robots has an unintended cultural effect. In assigning kinship designations to machines, machines are socialised and drawn into familial relations - the human and nonhuman are woven together in a relation. The kinship terms of infant and child are ascribed to robotic machines with the intention of communicating and restructuring relations between humans and nonhumans. If the making of sociable robots is a particular type of research activity in which humans and robots engage in 'social relations', the infantilisation of robots is a way of extending sociality to machines.
The double effect of these practices is to convince humans that they are in a relationship and, by extending human relations to machines, to transform cultural perceptions of robots from threats into new friends and kin.
Dr Kathleen Richardson is a departmental research associate at the Department of Social Anthropology, Cambridge University.