Jeff Hawkins

The man with two brains

Jeff Hawkins reckons he has the key to a thinking machine.

Jeff Hawkins fathered the personal digital assistant and today's smartphones as the founder of Palm Computing. But, frankly, that was never much of a priority for him. Even as a young engineer at Intel, he was nagging microprocessor pioneer Ted Hoff to let him investigate parallels between computing and the human brain.

Back in the 1970s, Hoff said no - a decision Hawkins now agrees with - but as science and theory advanced, he returned to his obsession. In 2002, Hawkins set up the Redwood Neuroscience Institute, concentrating on brain theory, and in 2004 launched the start-up Numenta, which is developing a computing architecture called Hierarchical Temporal Memory (HTM). At this year's International Solid-State Circuits Conference (ISSCC), Hawkins described prototype HTM visual recognition systems, addressing a task that remains hugely challenging for computers but one the animal brain finds quite straightforward. That gap is one reason why a growing number of people, Hawkins among them, are looking more closely at the structure of the brain.

'Reverse engineering the brain' is one of the 14 'Grand Challenges' of the 21st Century identified by the US National Academy of Engineering (NAE). At the NAE launch, inventor Ray Kurzweil described the brain as "one important source of the algorithms and methods of intelligence [we will need in the future]". His enthusiasm for brain research reflects his overarching theory, the Law of Accelerating Returns. This cousin of Moore's Law states that once any branch of endeavour becomes an 'information technology', innovation advances exponentially. One of Kurzweil's favourite examples is the mapping of the human genome, and how it progressed in step with computing power.

"When it was announced in 1990, it wasn't a mainstream project - it was a renegade project. Mainstream sceptics said, 'There's no way you're gonna do this. You just had our best students and our best equipment and you've collected one ten-thousandth of the genome'. Halfway through the project, the sceptics were saying, 'I told you this wasn't gonna work. Here you are, seven-and-a-half years into this 15-year project and you've finished 1 per cent'," Kurzweil recalled.

"But that was actually right on schedule in terms of exponential progression: double 1 per cent seven more times, and you get 100 per cent."

One parallel between the human genome project and reverse engineering the brain is that, whereas many researchers are focusing on the detailed structure of neurons and synapses, Hawkins is pursuing a simpler, possibly quicker route to success. It is not dissimilar to the position that J Craig Venter took with gene sequencing, at odds with the slower, more methodical approach favoured by most other researchers.

Intelligent design

Much of Hawkins' unified brain theory - unveiled in his 2004 book 'On Intelligence' - springs from a 1978 paper by physiologist Vernon Mountcastle on the neocortex, the napkin-sized sheet of folded layers that makes up about 60 per cent of the brain's volume and covers its other regions.

"Mountcastle proposed that each region of the cortex performs the same function," he told ISSCC. "What makes a visual area visual and a motor area motor is due solely to what that region is connected to. This was an incredible insight. It meant that we could look for a common algorithm underlying all the things the cortex does. The cortex is not 100 algorithms solving 100 problems, but one algorithm applied to many problems."

Hawkins has taken that hypothesis and developed it into his 'memory-prediction framework'. This is based on a hierarchical structure of neurons and connections within the layers of the neocortex, which responds to any stimulus with both feedforward and feedback signalling.

In very simplified terms, when something 'new' is encountered, there is traffic back and forth between the various levels until it is classified, and this information is stored at a high level. Over time, a pattern is identified and stored through parallel interactions between neurons at multiple levels of the neocortex.

As learning progresses, stimuli are compared against these accumulated 'memories' - from images to the sequence of notes in a song. In this context, brains constantly predict what they expect to perceive and respond to anything out of place. The idea is that the brain performs feedback-feedforward operations based on stored 'invariant memories'.

In this model, memory and processing tend to merge as functions - a contrast with today's chips where the two are separate. And within this parallel hierarchy, information can move up and down, although the stored information tends to be more stable at the top. Raw data from the senses is much more fluid.
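A toy Python sketch may help fix the idea, though it caricatures only the feedforward half of the story - none of its class names or tuple 'patterns' come from Numenta. Each level memorises the signals it has seen; anything unrecognised is passed upward, and whatever reaches the top still unrecognised is stored as a new high-level memory:

    class Node:
        def __init__(self):
            self.memory = []   # patterns this node has seen before

        def process(self, pattern):
            """Return (name, is_novel): a stable index for the pattern,
            and whether it had to be stored as something new."""
            if pattern in self.memory:
                return self.memory.index(pattern), False
            self.memory.append(pattern)
            return len(self.memory) - 1, True


    class Hierarchy:
        def __init__(self, levels=3):
            self.nodes = [Node() for _ in range(levels)]

        def perceive(self, raw):
            signal = raw
            for depth, node in enumerate(self.nodes):
                signal, novel = node.process(signal)
                if not novel:
                    # Recognised here: levels above are left undisturbed.
                    return f"recognised at level {depth}"
            return "novel at every level - stored as a new high-level memory"


    h = Hierarchy()
    print(h.perceive(("edge", "vertical")))   # novel the first time it is seen
    print(h.perceive(("edge", "vertical")))   # recognised at level 0 thereafter

Note how, once a pattern is familiar, it is resolved low in the hierarchy and only genuinely new input climbs to the top - the stable-at-the-top, fluid-at-the-bottom behaviour described above.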

In his book, Hawkins identifies four properties that lie behind the structure of the neocortex: it stores sequences of patterns; it recalls patterns auto-associatively, meaning a complete pattern can be retrieved from a fragment or distorted version of itself; it stores patterns in an invariant form; and it stores them in a hierarchy.

The sequential element is crucial: patterns are accumulated and recalled in time order - the 'temporal' in HTM.
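The auto-associative property is the easiest to illustrate. The hypothetical Python sketch below stores sequences and retrieves a whole pattern from a partial cue; real cortical recall is vastly more robust, but the input-output behaviour is the same in kind:

    class SequenceMemory:
        def __init__(self):
            self.sequences = []   # stored patterns, kept as tuples

        def store(self, seq):
            self.sequences.append(tuple(seq))

        def recall(self, fragment):
            """Auto-associative recall: a partial cue retrieves the full pattern."""
            frag = tuple(fragment)
            for seq in self.sequences:
                for i in range(len(seq) - len(frag) + 1):
                    if seq[i:i + len(frag)] == frag:
                        return seq
            return None


    memory = SequenceMemory()
    memory.store(["C", "D", "E", "C"])   # the opening notes of a tune
    print(memory.recall(["D", "E"]))     # -> ('C', 'D', 'E', 'C')
    print(memory.recall(["F", "G"]))     # -> None: nothing matches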

Hawkins acknowledges that he is making a few leaps of intuition in coming up with his structure of the neocortex, but claims there is evidence to support his view. His main argument is that his theory takes a top-down approach to understanding intelligence, rather than attempting a 'Herculean' bottom-up investigation of the brain's components. It is an approach that could get us somewhere more quickly. It could also lead nowhere: greater levels of detail may prove necessary to produce the kind of results that Hawkins wants.

Hawkins' recipe for intelligent machines differs radically from those proposed by traditional AI, or even from architectures derived from models of neural networks. However, he does share the belief of those working in brain modelling that, for it to work, the artificial mind needs to be 'embodied' in some way.

The first requirement is to give the artificial cortex a set of senses from which to extract patterns. Those senses feed into a hierarchical memory system based on Hawkins' model of the cortex. The next step, Hawkins writes, is to "train the system as we teach children so that it builds its model of its world through its senses. It can then make analogies and predictions, and learn through observation".
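As a rough illustration of that train-then-predict loop - with simple first-order transition counts standing in for the hierarchy's far richer learned model, and every name in the code invented for the purpose - a Python sketch might look like this:

    from collections import defaultdict, Counter

    class Learner:
        def __init__(self):
            self.transitions = defaultdict(Counter)  # pattern -> next-pattern counts
            self.previous = None

        def observe(self, pattern):
            """Learn from a stream of sensory patterns, one at a time."""
            if self.previous is not None:
                self.transitions[self.previous][pattern] += 1
            self.previous = pattern

        def predict(self, pattern):
            """Predict the most likely successor of `pattern`, if one was learned."""
            successors = self.transitions.get(pattern)
            return successors.most_common(1)[0][0] if successors else None


    learner = Learner()
    for note in ["C", "D", "E", "C"] * 50:   # 'teach' it a simple melody
        learner.observe(note)

    print(learner.predict("D"))   # -> 'E': the expected next note
    print(learner.predict("F"))   # -> None: nothing learned, a 'surprise'

After training, the learner expects what usually follows and flags what it has never seen - a crude stand-in for the constant prediction Hawkins attributes to the cortex.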

At ISSCC, Hawkins described how this works in practice, in the shape of an HTM vision system that recognises 48 line figures.

"Although not designed to be otherwise useful, this is a difficult problem, one that we believe would be insolvable using other methods," Hawkins says. "[We have] early results from a more sophisticated vision system under development. This one uses higher resolution grey-scale images. The system is trained by exposing a hierarchical memory system to moving images. Showing the images moving through time is essential for training."

Separate goals

Hawkins diverges from Ray Kurzweil in one important way: he does not think intelligent machines spawned through HTM need to converge on human-like intelligence. Their senses could be quite different - tuned, for example, to the particular tasks we want them to accomplish.

Brain-derived computer architectures could assume huge importance. The brain is massively parallel - a property that programmers find notoriously difficult to exploit in conventional computer designs. With parallelism seen as fundamental to future gains in processing power, copying aspects of the brain may provide a way forward.

Perhaps more importantly, how the brain stores, accesses and recognises information leaves supercomputers for dead: Deep Blue can beat you at chess, but it can't discuss tactics.
