How closely do we need electronics to impersonate the brain before they can pass the Turing Test?
When he suggested a test based on a written conversation to determine whether a computer is intelligent, Alan Turing was confident a machine would pass the exam within 50 years. By that time, in his view, an average interrogator would have no more than a 70 per cent chance of correctly identifying human or computer after five minutes of questioning. After almost 65 years of work, most artificial intelligence (AI) researchers remain unconvinced by any of the challengers' performances so far, including the efforts this year of one team's chatbot to emulate the text and email characteristics of a teenager from Ukraine.
So why has passing this test proved so difficult? When Turing suggested the game he believed it would simply be a matter of increasing the storage capacity and speed of computers. Both have increased dramatically but not in any way that suggests the machines are about to pass the test convincingly. Turing's judgement was probably hindered by prevailing attitudes to language and intelligence in the 1950s.
"People thought language had this inner structure and formal grammar to it," says Professor Bruce Edmonds of Manchester Metropolitan University, a computational social scientist with an interest in AI. "If you scan a vast corpus of text to check if there are any universal rules it turns out there are none in any of the major languages. It's a lot less logical than people of Turing's era might have thought.
"Turing tended to think of the brain as a kind of computer," Prof Edmonds adds. "It's true its organisation, with effort, can mimic computation, but actually it does something reasonably different to a computer."
In a paper written in collaboration with Mexican computer scientist Carlos Gershenson, Prof Edmonds argued that the key feature of human intelligence is its adaptability or its capacity to continuously learn. This ability is key to passing the Turing Test, where the player needs to constantly adapt to a conversational partner.
Based on Turing's idea of a 'universal computing machine', today's computers excel at sequentially churning through logical processes. But the kind of agility required for on-the-fly adaptation does not come naturally to them. "Reasoning and learning turn out to be two very different kinds of activities and one can learn about things that one can't necessarily perfectly reason about," says Prof Edmonds.
Starting from scratch
This inherent weakness in the prevailing architecture has prompted some researchers to ask whether there is a need to revisit the very foundations of modern computing. The human brain remains one of the most mysterious subjects in modern science. But huge strides have been made in the field of neuroscience in recent years and computer scientists are beginning to see potential in applying the lessons learned to their own field.
This urge is one of the driving forces behind the Human Brain Project, a €1bn flagship project funded by the EU directed at simulating the human brain in silicon within ten years. This infrastructure consists of several conventional supercomputers running digital models of rodent brain circuits, but the aim is to scale this up to a human brain. Running in parallel is a more extreme goal of using the lessons learned from the neuroscience arm of the project to build radically different brain-inspired computer technologies.
Two parallel efforts to build so-called neuromorphic computers are underway as part of the project. The first builds on the work of the SpiNNaker project led by University of Manchester's Professor Steve Furber, co-creator of the original ARM microprocessor. This project is building a massively parallel computing platform using thousands of multicore system-on-chips (SoCs). At the end of the first year of the project the team has constructed a 100,000 processor machine designed to mimic the behaviour of more than 25 million neurons in real-time simulations.
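Platforms such as SpiNNaker run vast numbers of simplified spiking-neuron models in parallel rather than biophysically detailed ones. As a purely illustrative sketch – the model and parameters below are generic textbook values, not SpiNNaker's actual implementation – a single leaky integrate-and-fire neuron can be simulated in a few lines of Python:

```python
# A leaky integrate-and-fire (LIF) neuron - a generic, simplified spiking
# model of the kind large-scale platforms simulate in huge numbers.
# All parameters are illustrative textbook values, not SpiNNaker's.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
    """Integrate an input current trace (one value per ms); return spike times."""
    v = v_rest
    spike_times = []
    for n, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and is driven by the input.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_thresh:              # threshold crossed: emit a spike
            spike_times.append(n * dt)
            v = v_reset                # reset the membrane after spiking
    return spike_times

spikes = simulate_lif([2.0] * 200)     # 200 ms of constant drive
print(f"{len(spikes)} spikes at t = {spikes} ms")
```

Driving the neuron harder makes it cross threshold sooner and fire more often – the basic behaviour that real-time simulators reproduce millions of times over.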
A less conventional approach is being taken by a project building on the work of the FACETS and BrainScaleS projects run by Professor Karlheinz Meier at the University of Heidelberg. The neuromorphic physical model (NM-PM) uses analogue circuitry rather than digital processors to build a biologically realistic silicon replica of a neuronal network. The team has two silicon wafers operating out of the 20 that will be needed to model one billion brain synapses and four million neurons. The high ratio of synapses to neurons will vastly improve connectivity compared with other neuromorphic computing (NMC) systems.
"There are no microprocessors, there is no unit that executes programs, there are no algorithms running on our system. It's a physical model," says Prof Meier. "The memory and compute units are the same, like the human brain or any other brain. Both functions are fulfilled by the cells."
The most important milestone for the sub-project has been the successful development of an NMC platform, Prof Meier claims. This provides remote access to both hardware systems and software tools for their configuration and operation, which make them accessible to non-experts.
"So far NMC chips have mostly been used by those who built them," says Prof Meier. "The NMC platform means uniform and well-documented user access to this completely new architecture."
The Human Brain Project is not the only major NMC project. In August, IBM Research announced a stamp-sized multi-core chip called TrueNorth featuring digital emulations of one million neurons and 256 million synapses. The chips can be tiled to create what IBM calls 'neurosynaptic supercomputers'.
TrueNorth uses a simpler model of neuronal behaviour than SpiNNaker's to reduce the processing load. It also has considerably fewer synaptic inputs per neuron than a real brain. Both simplifications restrict the architecture's biological realism. But for a commercial entity like IBM the goal is different: to reverse-engineer attractive features of human cognition that are applicable to real-world problems.
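The scale of that simplification is easy to see from the figures quoted in this article, set against a commonly cited (approximate) estimate for the human brain of roughly 86 billion neurons and on the order of 10¹⁵ synapses:

```python
# Synaptic inputs per neuron, from the figures quoted in this article for
# the NM-PM wafers and IBM's TrueNorth chip, alongside a commonly cited
# (approximate) estimate for the human brain.

systems = {
    "NM-PM (20-wafer target)": (4e6, 1e9),      # (neurons, synapses)
    "TrueNorth chip":          (1e6, 256e6),
    "Human brain (estimate)":  (86e9, 1e15),
}

ratios = {name: syn / neu for name, (neu, syn) in systems.items()}
for name, ratio in ratios.items():
    print(f"{name}: ~{ratio:,.0f} synaptic inputs per neuron")
```

On these figures both machines sit at around 250 inputs per neuron, roughly two orders of magnitude short of the brain's average.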
"It's very important to have this spectrum of approaches, because there's no ideal solution to NMC," says Prof Meier. The Human Brain Project is not wedded to any particular model of NMC, he adds, and the two strands of their research are likely to intersect in the coming years.
Flying before we can walk
Newly appointed head of the US National Science Foundation's biological sciences directorate and leading neuroscientist Professor James Olds thinks the work on NMC is interesting, but he's not convinced our understanding of the brain is sophisticated enough to properly mimic it.
"Without a doubt the complexity of the brain itself is probably central to its ability to be such a good learner. There are 10¹⁵ computational elements or connections in the brain and that makes it a 'machine' that has a level of complexity you don't see elsewhere in the universe," Olds contends. "As neuroscientists we are only just beginning to uncover these levels of complexity."
Prof Olds believes the proponents of NMC underestimate the potential efficacy of deep-learning applications on massively parallel supercomputers such as IBM's Watson, and of artificial neural networks such as Google Brain – more loosely brain-inspired systems that run on von Neumann supercomputers.
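Artificial neural networks of this kind are 'brain-inspired' only loosely: weighted connections and error-driven learning, not biological realism. A minimal, purely illustrative example is the classic perceptron, here trained in plain Python to compute logical AND:

```python
# A single perceptron - the basic unit of artificial neural networks -
# trained with the classic perceptron learning rule to compute logical AND.
# Purely illustrative; real deep-learning systems use many layers and
# gradient-based training.

def step(x):
    """Hard threshold activation: fire (1) if the weighted input >= 0."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=10, lr=1):
    w = [0, 0]   # connection weights
    b = 0        # bias (in effect, the firing threshold)
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = step(w[0] * x0 + w[1] * x1 + b)
            err = target - out
            # Strengthen or weaken connections in proportion to the error.
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w, b = train_perceptron(data)
preds = [step(w[0] * x0 + w[1] * x1 + b) for (x0, x1), _ in data]
print("weights:", w, "bias:", b, "predictions:", preds)
```

Systems such as Google Brain stack many layers of units like this and train them with more sophisticated gradient-based rules, but the error-driven adjustment of connection weights is the same in spirit.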
"I can imagine an artificial intelligence passing the Turing Test, but not on the basis of having achieved the architecture of brains, rather on the basis of clever engineering," Prof Olds says. "We studied birds for a very long time and when we finally got round to building airplanes on a massive scale they didn't flap their wings."
Prof Meier is keen to stress that the Human Brain Project is not trying to build a like-for-like silicon replica of the human brain; indeed this is something for which the project has been criticised by neuroscientists who say the simulations are too coarse. They are using the simulations on supercomputers to isolate what Prof Meier calls "biological overhead" – brain features superfluous to the essential computational principles they are trying to emulate – and cull them.
They are not "building birds", he says, but with only one working example of a higher intelligence to hand, the human brain makes sense as a starting point. "People had to fail with flapping wings before they built real planes," Meier says. "No-one saw birds and said 'that doesn't work, let's build an Airbus A380'."
Disagreement over the most effective method of mimicking the brain hints at a more essential question: what exactly is it about the brain we are trying to mimic? The Turing Test has long fascinated computer scientists and philosophers and is certainly the best-known AI test, but is it possible its popularity is in fact a form of self-flattery by cerebral academics who think their ability to reason is the pinnacle of human intelligence?
The game is a subtle test of social intelligence, says Manchester Metropolitan University's Prof Edmonds, a trait key to the evolutionary success of our species, but not necessarily a reliable marker for general intelligence. "When I say the Turing Test is not a good test for a general intelligence that's because there's no such thing," he says. "I suspect what we think of as general intelligence is really social intelligence."
For Prof Olds there are a whole host of behaviours exhibited by humans that are far more impressive than our social skills – on-the-fly abilities such as picking a familiar face out of a crowd or the ability to select a detour to save time. "Human brains exhibit fluid intelligence in addition to the social intelligence of the Turing Test," he says. "I might even argue that type of intelligence is a better key to what our human brains do when acting at their highest level."
For cognitive roboticist Professor Murray Shanahan, of Imperial College London, the Turing Test is not even a great test of our social abilities. "A huge number of social cues are to do with what the person in front of you is doing, whether you're talking emotional cues or any kind of cooperative activity," he says. For many AI researchers the disembodied nature of the Turing Test is a major flaw.
This line of thinking was thrust into the limelight in the 1980s by Australian roboticist Rodney Brooks, who went on to found leading robotics companies iRobot and Rethink Robotics. He championed the idea that AI researchers should do away with the abstract representations that enable systems to reason about the world, and instead build from the bottom up, focusing on the coupling of sensory and motor systems that allows animals and humans alike to interact with their environment.
"He really threw the cat among the pigeons," says Prof Shanahan, who adds that while many, including Brooks, have back-pedalled somewhat and representations have crept back in, the mark of this thinking is still firmly embedded in both AI and robotics.
Much like the path that evolution took, for Prof Shanahan developing the ability to learn about and interact with the environment has to come before any higher cognitive abilities. "I think we should start looking at things like animal behaviour and animal cognition, or even infant behaviour. There is all sorts of non-linguistic behaviour that demonstrates intelligence," he says. "These other kinds of capabilities that are grounded in interaction with the world are a prerequisite."
This is a strand of thinking that has certainly been taken on board by the Human Brain Project. Its neuro-robotics sub-project is on course to release its software platform to members of the consortium in mid-2015 and the first closed-loop simulations of a robot moving inside a virtual room controlled by a point neuron network have been run on the development system.
So where does this leave the Turing Test? Whether or not it's a good way to judge a putative general intelligence, is there any merit to it? "It's actually very productive in terms of getting people to focus on solutions that work," says Prof Edmonds. "Too often something difficult like AI retreats back into formalism and in essence, for a long time, AI did that."
At a press conference on the opening day of the Human Brain Project summit, director and neuroscientist Henry Markram said that one of the biggest challenges for the project was explaining to the public and press that it was not about some sci-fi conception of an artificial brain that one could have a conversation with.
Machines based on brain architecture will often be used to handle more abstract data, the University of Heidelberg's Prof Meier argues. Practical applications of cognitive computing lie in other information-rich areas such as weather prediction, logistics and finance. "What humans can do is look at complex data and see relationships between data and infer what will happen next. The prediction-making capability of the brain is a very powerful thing even when the data is incomplete or very noisy or we're in a situation we've never been in before," he explains.
Prof Shanahan says the test is "the icing on the cake" once all the other aspects of intelligence have been ticked. When pressed for a timescale he says: "I have revised my personal timescales down over the last few years. If you'd asked me 10 years ago I would have said 'I couldn't imagine, maybe 100 years'. Now I wouldn't be surprised if it happened in the next 20 years."
Prof Edmonds reluctantly puts his money on 20 to 30 years. Prof Meier declines: IT predictions are rarely right, he says.
The E&T podcast: modelling the nose's neural network
Edd Gent talks to Dr Michael Schmuker, whose research straddles chemistry, informatics and neuroscience, about using the neuromorphic physical model to implement a model of the neural network in the olfactory system, which is responsible for our sense of smell.