MRI images combined with an elegant mathematical model have allowed researchers to determine which letter a subject is looking at.
Researchers from Radboud University in the Netherlands used data from a functional MRI scanner to determine what a test subject is looking at by ‘teaching’ a mathematical model how small 2×2×2 mm volumes of the brain scans – known as voxels – respond to individual pixels.
By combining the information about the pixels from all the voxels, it became possible to reconstruct the image viewed by the subject. The result was not a clear image but a somewhat fuzzy speckle pattern. In this study, the subjects viewed hand-written letters.
“After this we did something new,” said lead researcher Marcel van Gerven at the Donders Institute for Brain, Cognition and Behaviour at Radboud. “We gave the model prior knowledge: we taught it what letters look like. This improved the recognition of the letters enormously.
“The model compares the letters to determine which one corresponds most closely with the speckle image, and then pushes the reconstructed image towards that letter. The result was the actual letter, a true reconstruction.”
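The matching-and-pushing idea van Gerven describes can be sketched in a few lines. This is a toy stand-in for the paper's Bayesian prior: the templates below are hypothetical 3×3 letter shapes, and the blending weight is an arbitrary illustrative choice, not a parameter from the study:

```python
import numpy as np

# Hypothetical letter templates (the model's prior knowledge), flattened to vectors.
templates = {
    "T": np.array([1, 1, 1, 0, 1, 0, 0, 1, 0], dtype=float),
    "L": np.array([1, 0, 0, 1, 0, 0, 1, 1, 1], dtype=float),
    "C": np.array([1, 1, 1, 1, 0, 0, 1, 1, 1], dtype=float),
}

def refine(speckle, templates, weight=0.7):
    """Pick the template most similar to the speckle image (cosine similarity)
    and blend the speckle towards it, sharpening the reconstruction."""
    best = max(
        templates,
        key=lambda k: np.dot(speckle, templates[k])
        / (np.linalg.norm(speckle) * np.linalg.norm(templates[k])),
    )
    return best, (1 - weight) * speckle + weight * templates[best]

# A fuzzy "speckle" reconstruction that loosely resembles the letter T.
speckle = np.array([0.9, 0.8, 1.0, 0.2, 0.7, 0.1, 0.1, 0.9, 0.2])
letter, image = refine(speckle, templates)
```

In the actual model the prior is combined probabilistically with the voxel evidence rather than blended by a fixed weight, but the effect is the same: the output is pulled from a speckle pattern towards a recognisable letter.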
The journal Neuroimage has accepted the article, which will be published soon. A preliminary version of the article is available online.
“Our approach is similar to how we believe the brain itself combines prior knowledge with sensory information. For example, you can recognise the lines and curves in this article as letters only after you have learned to read,” said van Gerven.
“And this is exactly what we are looking for: models that show what is happening in the brain in a realistic fashion. We hope to improve the models to such an extent that we can also apply them to the working memory or to subjective experiences such as dreams or visualisations. Reconstructions indicate whether the model you have created approaches reality.”
“In our further research we will be working with a more powerful MRI scanner,” said Sanne Schoenmakers, who is working on a thesis about decoding thoughts.
“Due to the higher resolution of the scanner, we hope to be able to link the model to more detailed images. We are currently linking images of letters to 1200 voxels in the brain; with the more powerful scanner we will link images of faces to 15,000 voxels.”