The machinations of the mind

Modelling the mind may provide a radical new direction in computer design.

If you wanted a reason why electronics engineers should look more closely at how the brain works, you only have to examine the near-term future of silicon-based electronics. The raw material chipmakers favour has reached such densities that its scaling is approaching the limits of traditional physics.

The problem is not just that semiconductor engineers are running out of ways to squeeze logic transistors into an ever-decreasing space; even the ones they do make do not work as well as their predecessors: switches that you expect to work end up too far removed from their nominal ratings to be usable. Not all of them: the failed transistors may account for less than 1 per cent of the total on a chip. But that tiny number is enough to kill the chip stone dead.

Chip designers have been living with variability for years; it is just that the digital engineers did not notice. Analogue circuits, in particular, have demanded multiple simulation runs using Monte Carlo techniques to harden them against changes in the process - changes that might not be visible in the digital domain, but which would kill an analogue part.
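
To make the idea concrete, a Monte Carlo variability check can be as simple as sampling a device parameter many thousands of times and counting how often it falls outside a usable window. The sketch below is illustrative only: the threshold-voltage figures and the pass window are assumed values, not data from any real process.

```python
import random

# Illustrative Monte Carlo sketch: sample a transistor's threshold voltage
# from a normal distribution and count how often it lands outside an assumed
# usable window. All numbers are assumptions, not real process data.
NOMINAL_VT = 0.45            # nominal threshold voltage in volts (assumed)
SIGMA_VT = 0.03              # standard deviation of process variation (assumed)
VT_MIN, VT_MAX = 0.35, 0.55  # window within which the switch still works (assumed)

def estimate_failure_rate(trials: int = 100_000) -> float:
    """Fraction of sampled devices that fall outside the usable window."""
    failures = sum(
        1 for _ in range(trials)
        if not VT_MIN <= random.gauss(NOMINAL_VT, SIGMA_VT) <= VT_MAX
    )
    return failures / trials

if __name__ == "__main__":
    print(f"Estimated out-of-spec devices: {estimate_failure_rate():.3%}")
```

Even a rate well below 1 per cent becomes significant once a chip carries hundreds of millions of such devices.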

As process engineers start work on the 32nm node, expected to go into production in 2010, and on its 22nm successor due in 2012, they are finding the variations are getting so bad that even digital transistors suffer.

"It is one of the most critical issues for the industry and it is arriving very quickly," says Professor Asen Asenov of the University of Glasgow, who has been modelling variability in semiconductors for many years and is one of the world's foremost experts on the subject.

"With the processes we have today, it is possible to control transistors to the level needed by the design processes used today. But the devices are getting less controllable. On large-scale chips of the future, many components will fail," says Professor Steve Furber of the University of Manchester.

Dealing with failure

Furber believes we need to look at the way the brain works to deal with the problem of circuits that may fail intermittently or completely. After all, the brain deals with failure every second of every day. Neurons are continually dying in the brain, and a night out on the town helps some extra neurons on their way. Yet we continue to function, because the synapses reroute around the failures. We might forget some things but, overall, the brain keeps thinking.

Furber's team is investigating how neurons - which die at the rate of one per second in adults - co-operate in the brain to provide reasonably reliable results. It is a long-term project that, initially, revolves around building a neuron simulator using 20 ARM processors on a single piece of silicon. Each processor will host 1,000 simulations of neurons running in real time.
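
As a rough illustration of that arrangement - a toy sketch, not the SpiNNaker software itself - the simulated population can be partitioned across cores simply by neuron ID, with each spike delivered to the core that simulates its target neuron.

```python
# Toy partitioning sketch, loosely modelled on the arrangement described
# above: 20 cores, each simulating 1,000 neurons. Not the SpiNNaker code.
CORES = 20
NEURONS_PER_CORE = 1_000

def owning_core(neuron_id: int) -> int:
    """Return the core responsible for simulating a given neuron."""
    return neuron_id // NEURONS_PER_CORE

def route_spike(target_neuron: int) -> str:
    core = owning_core(target_neuron)
    assert 0 <= core < CORES, "target neuron outside the modelled population"
    return f"deliver spike for neuron {target_neuron} to core {core}"

print(route_spike(13_500))   # neuron 13,500 is simulated on core 13
```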

The SpiNNaker project, which got underway in late 2006, will use the simulations to work out different ways of representing data and logic in an environment where cells do not just die frequently and spontaneously, but have quite different speeds of response.

"What we want to do is capture biology's fault tolerance. The first thing I would suggest is that we throw binary numbers out," Furber argues. "Variability can be an advantage. If you take population of neurons and give them diverse characteristics and get them to represent the same parameter, you can get a form of population coding instead."

Another advantage of the brain's operation, say researchers such as Furber, is its energy efficiency. "It is hard to talk about power efficiency when we don't understand how the brain operates. But every way we look at it you come to the conclusion that the brain is far more power efficient than the CMOS technology we have," Furber says. "Now we get onto the surprising stuff: all of these components are really rather low performance. The key to the power efficiency is that you keep performance low and use massive parallelism. That is the thing we don't know how to do now."
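
The trade-off Furber describes can be sketched with the standard dynamic-power relation for CMOS, in which the energy of each switching event scales roughly with CV squared. The numbers below are illustrative assumptions, not measurements of any real chip.

```python
# Back-of-envelope sketch: dynamic switching energy in CMOS scales roughly
# with C * V^2, and supply voltage can usually be lowered when the clock is
# slowed. Capacitance and voltages here are illustrative assumptions.
def energy_per_op(cap_farads: float, vdd_volts: float) -> float:
    return cap_farads * vdd_volts ** 2    # joules per switched operation (approx.)

one_fast_core   = energy_per_op(1e-15, 1.0)   # single core at full voltage
many_slow_cores = energy_per_op(1e-15, 0.6)   # same work spread over slower cores

print(f"energy per operation, slow/fast: {many_slow_cores / one_fast_core:.2f}")  # ~0.36
```

The same throughput delivered by many slower, lower-voltage processors therefore costs markedly less energy per operation, which is one reading of 'keep performance low and use massive parallelism'.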

Maybe by copying aspects of the brain's operation, we can get massive increases in compute power and come up with something far more energy efficient in the process.

What nobody yet knows is the most effective level at which to model the brain's processes. Jeff Hawkins of Numenta (see page 40) has taken a high-level approach. But, for people such as Furber, the neuron is the favourite level today. It is relatively straightforward to implement electronic circuits that mimic many neural functions, although no-one has built a detailed model except in software for simulation on a supercomputer. And even those models have gaps.
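
One common neuron-level abstraction is the leaky integrate-and-fire model, which captures the integrate-leak-spike-reset cycle without any of the underlying chemistry. The discrete-time sketch below uses illustrative parameter values and is only one of many possible formulations.

```python
# Minimal discrete-time leaky integrate-and-fire neuron, one common
# neuron-level abstraction. Parameter values are illustrative assumptions.
def simulate_lif(input_current, dt=0.001, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Return spike times (seconds) for a list of input-current samples."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)      # leak towards rest, integrate the input
        if v >= v_thresh:                # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset                  # reset the membrane potential
    return spikes

if __name__ == "__main__":
    print(simulate_lif([60.0] * 100))    # constant drive produces regular spiking
```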

Neural simulation has a long history. The first generation of neural networks - as used by researchers such as Prof Igor Aleksander at Imperial College London - concentrated on how different weights could be applied to an input signal and passed on to other neurons. However, these comparatively simple models of neuron behaviour have gradually been supplanted by systems that mimic ever more closely the behaviour of actual neurons.
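
In that first-generation style, a single artificial neuron is little more than a weighted sum pushed through a threshold; the weights and inputs below are arbitrary illustrative values.

```python
# First-generation artificial neuron in the sense described above: a weighted
# sum of inputs passed through a hard threshold. Values are illustrative.
def artificial_neuron(inputs, weights, bias=0.0):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0    # fire, or stay silent

print(artificial_neuron([1, 0, 1], [0.5, -0.4, 0.3], bias=-0.6))   # -> 1
```

Learning, in such networks, amounts to adjusting the weights; the richer models that followed add spike timing and membrane dynamics on top of this basic weighted sum.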

It was the co-creator of modern digital circuit design, Caltech's Professor Carver Mead, who coined the term 'neuromorphic devices' to describe these in silico counterparts to biological neurons. Having worked to automate digital-logic design, Mead switched his attention to analogue processing and how neural structure could be mimicked in silicon.

However, Professor Leslie Smith of the University of Stirling pointed out at a 2007 lecture at the Institute for System-Level Integration in Scotland that other levels of modelling could be as effective. It may be that the structure and behaviour of the ion channels that drive electrochemical reactions inside each neuron are important - detail that would be lost at the level of a neuron emulation. Or researchers may be looking at the trees when they should concentrate on the forest and take into account the large-scale organisation of neurons inside the brain.

Neuroscientists have found that the brain contains many subsystems, each with subtly different arrangements of neurons and synapses. It is not all just grey matter in there.

Grey matter

It is easy to think of the brain as undifferentiated grey matter, but there are subtle distinctions between regions of the cerebrum that seem to play a role in how it processes information. Some of this superstructure may come from learning; some will come from genetics. But, with the tools available today, it remains difficult to probe how that superstructure affects thought processes. The most precise information comes from brain slices arrayed on a microelectrode array, and that provides only a view of what happens in a two-dimensional network of neurons taken from a three-dimensional portion of the brain.

However, it may not be necessary to understand everything about the brain to build more robust and intelligent computers. There are two possible outcomes from this kind of research. One is that computer scientists boil down the essential components of biological cognition to create a 'silicon brain' that has less evolutionary baggage than the human version.

The other is that functions in the brain could be used to drive the development of computers better suited to implementation in future generations of silicon - or whatever follows it. That either path is possible encourages researchers to keep probing the mysteries of the brain.
