Research into machine consciousness is leading engineers to re-evaluate the relevance of philosophy.
Inventor Ray Kurzweil has a dream. And he intends to live to see it. He claims he takes a daily cocktail of pills to help make sure he is around to witness the creation of the first artificial brain: something he reckons will happen by the end of the 2020s.
One of Kurzweil's hopes is that this will make it possible to cheat death: we could upload our consciousness into the machine and remain conscious just as long as the computer receives power and maintenance. But the prospect raises a conundrum: would that machine actually be conscious? Would it think? How would we know? Even if it told us it could see and feel, would we believe it? Or would we consider it no more than a simulation of consciousness, where the lights are on but nobody is home?
Questions like these are leading engineers to consider whether it is time for the discipline to merge with philosophy, as engineering by itself will be unable to provide the answers. Philosophy as such still struggles with the questions. We don't really know what intelligence is, or whether consciousness is effectively synonymous with it. But the quest to uncover consciousness in an artificial entity may provide clues that classic philosophical introspection has not managed to uncover.
The two schools of thought on whether machine consciousness is possible were summed up by cognitive scientist Marvin Minsky and physicist Roger Penrose. Minsky's argument runs that, if we assume the nervous system obeys the laws of physics and chemistry, then it should be possible to reproduce its behaviour with some physical device.
In his book 'The Emperor's New Mind', Penrose took the opposite view: that the laws of physics are, at present, insufficient to explain consciousness. Something else is needed. For him, that may mean only biological entities have all the components needed to produce consciousness. It would be denied to machines that attempted to replicate the conditions inside a brain with silicon and wires.
"The philosophical questions that drew me into trying to model intelligence led me into artificial intelligence," claimed Professor Nigel Shadbolt of the University of Southampton, at a recent seminar organised by the Royal Academy of Engineering. He is currently working on links between philosophy and the mechanics of the World Wide Web. "AI is the pursuit of philosophy by other means."
Shadbolt explained how science became separated from philosophy: "We should remember that not many centuries ago, with human knowledge, much of what was gathered was through methods of introspection." It was only later, beginning with the work of Copernicus, that science came to be practised by testing hypotheses in experiments, Shadbolt noted. "However, Copernicus' empirical methods were somewhat imprecise. He was rather more of a reflective theoretician than some like to think."
Dr Ron Chrisley of the University of Sussex adds: "Philosophy is about trying to solve what are mainly conceptual problems: the more abstract questions. And people do make fun of philosophers for having struggled with simple issues for thousands of years and not having come up with an answer."
At the same time, science can find it has gaps. Chrisley jokes: "Take the question of what happens when a tree falls in a forest: does it make a noise if no-one hears? Put that question to a scientist, and they will come back and say they have worked it out for elm and birch but are having some trouble with the general case.
"Not all limitations on our scientific understanding are a matter of not having enough data," Chrisley suggests, pointing to the problem with consciousness. "Philosophers and non-philosophers alike recognise that even if we know a lot about the nervous system, some questions remain."
The problem is that work by cognitive and computer scientists has shown how to emulate intelligence. What we don't know is whether such systems, suitably extended, could become conscious, although some believe it is possible.
Professor Igor Aleksander of Imperial College London says: "In the mid-1990s, Bernard Baars conceived how consciousness might work in a mechanistic way."
The result of that work, which was taken forward by Stan Franklin at the University of Memphis, was a software agent dubbed IDA - the Intelligent Distribution Agent. It was tested by the US Navy on a billeting problem: communicating with sailors about where they would go after a tour of duty. Franklin did not consider IDA to be conscious: it simply emulated the humans who typically did the job. But that did not matter to the sailors.
"When tests were done with the IDA, there was a general feeling that the billeter had become more caring and sympathetic than was the case previously [with human workers]," says Aleksander, but that does not automatically lead to consciousness.
"It is possible to make a system that seems to understand the world. But it is unlikely to tell us anything about our own consciousness," says Aleksander. "Franklin decided that to demonstrate machine consciousness it had to capture emotional concepts."
The zombie hunch
People may come to like what an intelligent machine does for them but, even if the machine itself is conscious, humans may find it hard to accept that this is the case. The 'zombie hunch' is something that lies purely in our own perception and affects our view of what consciousness is, explains Chrisley.
"It is possible for a creature to be just like you, but nobody is home: it is a zombie. To some people in the field that is not inconceivable," says Chrisley. "One way you might respond to this is to say that our notion of consciousness has to be at fault. If our notion of consciousness has that paradox, maybe we should look again at the concept of consciousness itself."
This is where, Chrisley argues, the traffic should run the other way: rather than engineering becoming more philosophical, philosophers should embrace science and engineering.
"It is difficult to see a change in our approach to consciousness coming about with simple concepts, such as acquiring more beliefs and philosophical argument. They need to be supplemented with something else," says Chrisley. "Some philosophical breakthroughs can only be brought about by people attempting to design and actually interact with the artefacts they build. I depart from a lot of philosophers who say there is no room for empirical thoughts, especially not something as empirical as building systems. But I think that is a modern misunderstanding of what philosophy is. I think [Immanuel] Kant would be receptive to this. As well as [Giambattista] Vico, who thought you could only truly understand something if you build it."
There is room for a two-way exchange of ideas. "For some complex systems it might be necessary to incorporate the theorist or the philosopher into the design," says Chrisley.
Aleksander has included ideas from phenomenology into his own work on artificial intelligence. He suggests that ideas from people such as Husserl could prove useful in an engineering context: "When the technology hits a dead end, why not try glancing at philosophy?"
But, if the development of a thinking machine makes us question the nature of consciousness, where will it lead? Professor Owen Holland of the University of Essex has been working not just on machine consciousness in a project funded by the UK's Engineering and Physical Sciences Research Council, but also on bio-inspired robotic movement. It has come at a cost.
"A consequence of this is that I now feel more mechanical," says Holland. "I see less unity and become more distributed. And with more faults."
Chrisley agrees: "You could end up downplaying your own consciousness. Maybe the way to Zen enlightenment is to become no-one.
"However, the process will not be monotonic. In order to get a better concept, you might have to take some steps backwards: lose your grip on consciousness and only see yourself as a physical being. Then maybe you will realise that is not possible."
Some believe that the architecture of the Internet could lead to a form of machine consciousness. Maps of the Internet resemble the delicate interweaving of synapses and neurons in the brain. With millions of nodes added to the Internet every year, it is tempting to think that the World Wide Web could spawn a worldwide mind.
But in the same way that the Blue Brain project is unlikely to function like a mammalian brain, many researchers into consciousness doubt that simply combining enough computing power will make something accidentally self-aware.
"It is a fairly convenient concept that you can throw millions of neurons together and get consciousness," says Aleksander. But he believes that the best results in the field of consciousness will probably come from more detailed work with relatively small neural structures.
Chrisley agrees: "I don't place much faith in the complexity approach."