Mind reading


Reading someone's dreams and knowing what your interlocutor will say before they speak are no longer purely the stuff of dystopian fiction.

"Imagine a general brain-reading device that could reconstruct a picture of a person's visual experience at any moment in time," says Dr Jack Gallant, head of the research team and assistant professor in the Department of Psychology at the University of California, Berkeley.

In March, the Gallant Lab at Berkeley released groundbreaking research on the interpretation of brain activity, a giant scientific and technological leap forward. A team of four researchers produced an analytical tool that can accurately identify images from brain activity patterns.

The study, which has created a buzz in the scientific community, presents a new way of doing functional magnetic resonance imaging (fMRI) research and a new way of decoding the brain. And, while the researchers cannot yet reconstruct a photograph seen by a test subject without knowing the set of images it was drawn from, they expect to in the future.

"Our data suggest that there might potentially be enough information in brain activity signals, measured using fMRI, to do this in the future," says Kendrick Kay, a graduate student in Berkeley's psychology PhD program, one of the study's test subjects and first author of the study. "But given that we can't reconstruct a picture that the person saw, it should be obvious that we also can't reconstruct dreams or visual imagery at this time."

However, neuroscientists generally assume that all mental processes have a concrete neurobiological basis. So long as researchers have good measurements of brain activity and good computational models of the brain, it should therefore be possible to decode the visual content of mental processes like dreams, memory, and imagery.

"The computational models in our study provide a functional account of visual perception," Kay says. "It is currently unknown whether processes like dreaming and imagination are realised in the brain in a way that is functionally similar to perception. If they are, then the techniques developed in our study should be directly applicable."

Mechanisms of vision

The research was carried out at the University of California by Kendrick Kay, Ryan Prenger, Thomas Naselaris and Jack Gallant. Gallant's lab has studied the neural mechanisms underlying vision and visual perception, particularly perception of natural scenes and selective attention, for more than ten years. The team uses four approaches to address these topics: fMRI; studies of neurological patients with specific brain lesions; neurophysiology; and quantitative computational modeling.

The Gallant Lab focuses on several specific aspects of visual perception. First, they examine the role of an important human extrastriate visual area, V4, in scene analysis. These experiments involve studies of neurological V4 lesion patients and functional neuroimaging.

"We also use neurophysiological methods to investigate the function of early and intermediate cortical visual areas during natural vision," Gallant says. "These studies address both how the visual system responds to complex scenes during natural vision, and how it is affected by top-down processes, such as scene segmentation and grouping."

Another line of research uses quantitative nonlinear system-identification methods to objectively characterise the way natural scenes and complex objects are represented in intermediate visual areas, and to determine how these representations are modulated by attention. These experiments involve functional neuroimaging, neurophysiology, and computational modeling of visual processing.

Brain activity measurement

The two-year study, which began in November 2005, was originally intended as an effort to build computational models of processing in the visual system of the brain. Predictive models are the gold standard of computational neuroscience and are critical for the long-term advancement of brain science and medicine.

It was during this process that researchers realised that this kind of modeling is closely related to efforts to decode the meaning of brain activity patterns. The study began with two of the scientists viewing a total of 1,750 images, chosen at random, while undergoing an fMRI brain scan. Each subject participated in five MR scanning sessions in which they were required to lie still in an MR scanner while viewing a series of photographs. Each session lasted about two hours.

The data obtained were used to construct mathematical descriptions of voxels in the brain - three-dimensional pixels that represent the blood-flow response in a small region of the brain. This encoding model, implemented in computer code, predicts each voxel's response to an image; fitting it to the data was computationally intensive, since there were thousands of individual voxels.
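To make the recipe concrete, the sketch below shows one common way such an encoding model can be fitted: an independent linear model per voxel, estimated by ridge regression. This is a minimal, hypothetical illustration of the general idea, not the study's actual implementation (which used a Gabor-wavelet feature space); all function and variable names here are illustrative.

```python
# Minimal sketch of per-voxel encoding-model fitting (illustrative only;
# the study itself used a Gabor-wavelet feature space and its own code).
import numpy as np

def fit_encoding_models(features, responses, alpha=1.0):
    """Fit one ridge-regression encoding model per voxel.

    features  : (n_images, n_features) features of the training images
    responses : (n_images, n_voxels) measured fMRI response per voxel
    returns   : (n_features, n_voxels) weight matrix, one column per voxel
    """
    n_features = features.shape[1]
    # Closed-form ridge solution, solved for all voxels at once.
    gram = features.T @ features + alpha * np.eye(n_features)
    return np.linalg.solve(gram, features.T @ responses)

def predict_activity(weights, features):
    """Predict the brain-activity pattern for any new image's features."""
    return features @ weights
```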

"The computational model can predict the pattern of brain activity that would be elicited by any arbitrary photograph, including photos that were not in the initial set used to build the model," Gallant explains. Then each of the research subjects viewed a new set of 120 images - which were random and completely distinct from the first set of photos - while having their brains scanned. Combining the new data with the computational model created earlier, the scientists were able to identify, from brain activity alone, which images had been seen at each point in time.

Researchers picked out which of the 120 images the two volunteers had been viewing with 92 per cent and 72 per cent accuracy respectively, compared with the 0.8 per cent (one in 120) expected by chance.

"We simply look through the list of predicted activities to see which one is most similar to the observed activity, and that's our guess," Gallant says. "We were all pretty amazed at how well this works. It implies that there should be enough information in these brain activity signals to do much more."

Previous research had shown that fMRI can pick out brain activity associated with viewing different images, but only at a basic level (the category of the image, such as 'face' or 'house') and only if the brain activity associated with those categories had been measured beforehand. The new study shows that brain imaging can identify complex, arbitrary images.

Perceptual experiences

To test how feasible it might be for a machine to respond to brain activity triggered by just one event, the experiment was conducted again using data from single brain scans. The results - 51 per cent and 32 per cent accuracy for each of the volunteers - were good enough to suggest that it may be possible to decode perceptual experiences in real-time.

According to Kay, the accuracy levels are much higher than other researchers might have expected to achieve using fMRI. The team tested a third subject shortly after completing data collection for the first two; the third subject's results were similar to those of the second.

Expanding the decoding system to more images would lower the accuracy but still produce a relatively high success rate. The team estimates that with one billion images (roughly the number of images on Google), it would still succeed about 20 per cent of the time. With that many images, Gallant says, the software comes close to true image reconstruction - working out what you are seeing from scratch.
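The 20 per cent figure is the team's own extrapolation, but the underlying effect - identification getting harder as the candidate set grows - is easy to see in a toy Monte Carlo simulation. The sketch below uses entirely made-up patterns and noise levels, so only the downward trend, not the numbers, is meaningful:

```python
import numpy as np

def toy_identification_accuracy(n_candidates, n_voxels=200, noise=3.0,
                                n_trials=500, seed=0):
    """Toy simulation: how often a noisy observation of one pattern is
    still matched back to itself among n_candidates alternatives.
    All parameter values are arbitrary illustrations, not study values."""
    rng = np.random.default_rng(seed)
    # One fixed set of "predicted" activity patterns for all trials.
    patterns = rng.standard_normal((n_candidates, n_voxels))
    targets = rng.integers(n_candidates, size=n_trials)
    hits = 0
    for t in targets:
        observed = patterns[t] + noise * rng.standard_normal(n_voxels)
        # Nearest neighbour by inner product, as in the matching step.
        hits += int(np.argmax(patterns @ observed) == t)
    return hits / n_trials

# Accuracy falls as the candidate set grows - mirroring Kay's point
# below that larger sets contain more similar, confusable images.
for n in (120, 1_000, 10_000):
    print(n, toy_identification_accuracy(n))
```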

The model isn't perfect, so the decoder does make mistakes. When the decoder selects the wrong image, it tends to choose one that is structurally similar to what the person actually saw. For example, if the person actually saw a picture of a man standing in front of a house, the decoder might choose a picture of a statue placed in front of a museum.

"In general, as the set of potential images grows larger, more of them will be structurally similar to one another, so there are more chances for mistakes," Kay says. But researchers say that with enhanced computational models of the visual system and improved measurements of brain activity (such as higher signal-to-noise ratio and higher spatial resolution), accuracy will improve.

"There is no reason we shouldn't be able to solve this problem," Gallant says. "That's what we are working on now."

Some useful applications

This groundbreaking research opens the door to many future applications. In addition to their value as a basic research tool, brain decoders could be used to aid the diagnosis of diseases such as stroke and dementia, to assess the effects of therapeutic interventions including drug and stem-cell therapies, or to serve as the computational heart of a neural prosthesis. They could also be used to build a brain-machine interface (BMI).

"Decoding visual content is conceptually related to the neural-motor prosthesis BMI work that many labs are doing these days," Gallant says. "The main goal in that work is to build a decoder that can be used to drive a prosthetic arm or other device from brain activity."

As Gallant points out, there are significant differences between sensory and motor systems that affect the way a BMI system would be implemented. But ultimately the statistical frameworks used for decoding in the sensory and motor domains are very similar, suggesting that a visual BMI might be feasible.

"This is an ongoing effort which will require many more studies," Gallant says.
