The next step in human evolution is to reverse engineer intelligence and to improve on it, explains E&T.
It may be an old fantasy, but the basic premise that we will one day engineer machines that are at least as smart as us and whose behaviour is indistinguishable from ours is, according to many roboticists, closer to reality than we might like to think.
Our understanding of the human brain, and our ability to 'reverse engineer' it - to analyse how it works and replicate its processes - is increasing dramatically. Within a couple of decades we should know all about the mechanics of human intelligence and, crucially, of learning, and be able to apply this knowledge to machines.
Ray Kurzweil, an inventor and writer and one of the leading thinkers on artificial intelligence (AI), believes a profound shift in our capacity to engineer intelligence is not far off. "Extending our intelligence by reverse engineering it, modelling it, simulating it... and modifying and extending it is the next step in [human] evolution," he states in his 2005 book 'The Singularity is Near'.
He predicted a vastly accelerated pace of technological change that, in a few decades, would lead to machines that would "encompass all human knowledge and proficiency, ultimately including the pattern-recognition powers, problem-solving skills and emotional and moral intelligence of the human brain itself" - a merger between technology and human biology that would enable us to transcend our biological limits.
Four years on, Kurzweil is just as optimistic. Computing speeds and memory are continuing to improve exponentially, he says, and at the same time "we are making very rapid gains in reverse-engineering the brain, which will be a key source of the software for human intelligence".
The prospect of super-intelligent machine-human hybrids will seem fantastical to some. Yet a few decades ago, the notion of AI of any kind seemed unlikely; we are now surrounded by it - in the cars we drive, the computers we use, the video games and toys our children play with.
Devices that respond to their environment and learn from it - the basis of intelligence - are key to everything from speech and text recognition software and spam filters to medical diagnostics and financial trading systems.
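The learning at the heart of such systems can be surprisingly simple. As a rough illustration in the naive Bayes style often used for spam filtering - a sketch only, with all function names and example messages invented rather than taken from any real product - a filter 'learns' by counting word frequencies in labelled messages and then scores new ones:

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam) pairs.
    Count how often each word appears in spam and in ham."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
            totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Sum log-ratios of each word's likelihood under spam vs. ham.
    Laplace smoothing (+1) keeps unseen words from zeroing the score."""
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (totals[True] + 2)
        p_ham = (counts[False][word] + 1) / (totals[False] + 2)
        score += math.log(p_spam / p_ham)
    return score  # positive: leans spam; negative: leans ham

# Tiny invented training set
examples = [
    ("win free prize now", True),
    ("free money win big", True),
    ("meeting agenda for monday", False),
    ("lunch on friday perhaps", False),
]
counts, totals = train(examples)
print(spam_score("win free money", counts, totals) > 0)   # leans spam
print(spam_score("monday agenda", counts, totals) > 0)    # leans ham
```

Real filters add many refinements, but the principle is the one the article describes: the system's behaviour is shaped by the data it has seen, not hand-coded rule by rule.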
We take it all for granted and it seems quite benign. But things look different and a little more unsettling when you apply the thinking behind this AI revolution to the field of robotics. A robot may look like a collection of steel and wires, but watch how people cannot help but respond with empathy when one speaks to them or casts sad-looking eyes in their direction and you'll realise the line between human and non-human can seem distinctly blurred.
The most graphic example of this is Geminoid HI-1, the android robot created by Hiroshi Ishiguro at ATR Intelligent Robotics and Communication Laboratories near Kyoto, Japan. Geminoid HI-1 is Ishiguro's 'twin' - a near-perfect replica of himself that can mimic his movements and expressions using 46 small, air-powered pistons and air bladders. The air bladders expand and contract to emulate his breathing, fidgeting and other movements, such as turning or nodding of the head, all of which Ishiguro can control from a remote computer.
When he speaks, a system of infrared sensors transmits his lip movements to the robot, while a speaker broadcasts his voice. A built-in camera allows him to 'see' through its eyes.
Ishiguro uses his twin to lecture to students while he sits at the controls in his office at the other side of town. People often treat it as real, he says, even when they know it isn't. This is not just a gimmick: he wants to discover the essential human-like social cues that an android robot must possess for people to communicate with it naturally. The idea is to pave the way for the development of robots with more human-like qualities, to enable them to "integrate into human society as partners and have natural social relations with humans". Along the way, he hopes to discover more about what it is to be human.
Geminoid HI-1 is considered a ground-breaking development by many roboticists because it appears to have overcome a problem known as the 'uncanny valley': we become increasingly comfortable with robots the more they resemble humans, but when they look almost - yet not quite - human, the absence of particular movements or behaviour makes them resemble a moving corpse, and comfort turns to revulsion. Ishiguro's twin is sufficiently realistic in its mannerisms that people are not repulsed by it. Yet those who predict a near future in which people interact with robots daily point out that a robot does not have to look much like a human for people instinctively to behave empathetically towards it.
Roboticists have had plenty of chances to observe this. Over the past 15 years, a whole family of 'social robots' has emerged in laboratories in Japan, the US and Europe. One of the earliest, Cog, was developed by a team led by Rodney Brooks at the Humanoid Robotics Group at the Massachusetts Institute of Technology (MIT). Cog, now retired, was programmed to follow movement and to respond to its sensory inputs - for example, it could 'learn' to manipulate objects such as a Slinky toy by adjusting the raising and lowering of its motor arms in response to the weight of the object.
Cynthia Breazeal of MIT's Media Lab, who helped design Cog, went on to develop a head-robot called Kismet, which could express basic emotions through judicious manipulation of its eyebrows, eyes, lips and ears.
Kismet had built-in video cameras, microphone and speech recognition software that enabled it to interact with a person at a level similar to that of a six-month-old child. Breazeal's latest project is Leonardo, a nondescript furry creature capable of articulated facial expression and with the apparent interactive sophistication of a five-year-old.
You might ask whether there's anything remarkable about the way people behave towards robots. Their expressiveness is superficial, even though people respond as if it were something deeper. "Sociable robots inspire feelings of connection not because of their intelligence or consciousness, but because of their ability to push Darwinian buttons in people - making eye contact, for example - that cause people to respond as though they were in relationship with them," says Sherry Turkle, director of MIT's Initiative on Technology and Self.
People often respond to any mechanical toy with empathy and emotion. Consider the commercial success of robot pets such as the Tamagotchi, My Real Baby, Sony's AIBO dog and the Furby, whose owners - adults as well as children - develop genuine attachment to them, and grieve if they break.
Even on the battlefield, a place not known for sentiment, soldiers have been known to humanise the autonomous devices they use for reconnaissance or mine clearance. US troops in Afghanistan and Iraq often treat them as fellow soldiers, naming them, giving them rank or awarding them medals after successful missions.
Given our propensity for bonding with just about anything that moves, is the special allure of Leonardo, Kismet and their ilk all down to sophisticated aesthetics and creative programming? Is there anything going on in there that would have Čapek, Asimov, Dick and other AI fantasists rubbing their hands in anticipation? Are modern robots still anything more than machines? Their creators certainly think so.