Neurology: thinking outside the box
How does the brain work? It is the most complex computer in the known universe, yet its secrets have remained elusive and fascinating. Now techniques are being developed that are helping us to unravel its mysteries.
“The human brain has 100 billion neurons, each neuron connected to 10,000 other neurons. Sitting on your shoulders is the most complicated object in the known universe,” said American theoretical physicist Dr Michio Kaku during an interview in 2014 about his book ‘The Future of the Mind’, which explores what might be in store for our minds in the future and advancements in the field.
As long as humans have been roaming the Earth, people have sought to comprehend the brain despite its complexity, and although scholars and scientists have tried to decipher its codes for centuries, they have only scratched the surface. The quest to decode the human brain is more apparent than ever, with many initiatives in place to better understand, enhance and empower the mind.
In essence, the idea is to learn from the brain’s structure and learning processes to identify patterns. Technology should then be able to use those patterns to aid the human brain with practical applications.
Professor Alois Knoll leads the Neurorobotics Platform of the Human Brain Project and is its software director and vice-chair of its Science and Infrastructure Board. He is also a professor of Computer Science at the Department of Informatics of the Technische Universität München.
“Brain technology encompasses a broad range of approaches and purposes,” he says. “These include medically-oriented research, where advanced technology is used for innovative clinical applications, and technology developments in areas like AI, brain-derived (‘neuromorphic’) computing, and – in a most comprehensive way – neurorobotics, i.e., the combination of a body with a brain model.”
The EU’s flagship Human Brain Project is looking at all of the above, aiming to change neuroscience and computing forever by bringing diverse communities together and offering them a common platform on which to understand the greatest computer of them all: the human brain.
Some of its questions include: Can we create a thinking machine? Can we build a model of the human brain inside a computer? How do our minds really work?
Brain prosthetics can be split into two areas. The first is therapeutic neuroprostheses, which typically connect the nervous system to a device; the most widely used example is the cochlear implant, or hearing aid.
Brain-computer interfaces (BCIs), meanwhile, usually connect the brain (or nervous system) with a computer system.
Both seek the same outcomes, however: restoring sight, hearing, movement, the ability to communicate, and even cognitive function. Both also use similar experimental methods and surgical techniques.
Brain prosthetics is an area likely to see progress as a result of advanced stimulation techniques. In simple terms this is all about linking the human nervous system to computers, providing unprecedented control of artificial limbs and restoring lost sensory function in the case of stroke, blindness or deafness, for example.
“It’s a technique where typically electrodes are used to stimulate neurons, for example to better understand brain function or to create neuroprosthetic devices,” says Professor Alois Knoll. “Some examples of such neuroprostheses in the Human Brain Project include research by Professor Gregoire Courtine in Geneva which has contributed to a new prosthesis that allows paralysed patients to walk again through targeted stimulation.
“A brain prosthesis for the blind based on targeted stimulation of the neurons in the visual areas is under development by Professor Pieter Roelfsema in Amsterdam,” he adds. “The prosthesis is connected to a camera that the person is wearing, and the image is created because stimulation of the right parts of the visual area in the brain can create so-called phosphenes, little dots in the person’s visual field.”
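The actual prosthesis is far more sophisticated, but the basic idea of turning a camera image into a pattern of electrode stimulation can be illustrated in a few lines. The following is a hypothetical, simplified sketch: the grid size, threshold and function name are illustrative, not taken from the real device.

```python
import numpy as np

def frame_to_stimulation(frame, grid=(16, 16), threshold=0.5):
    """Downsample a grayscale camera frame to a coarse electrode grid
    and threshold it into an on/off stimulation pattern.

    Each 'on' electrode would, in principle, evoke one phosphene
    (a little dot of light) at the matching spot in the visual field.
    """
    h, w = frame.shape
    gh, gw = grid
    # Average the pixels that fall into each electrode's patch.
    trimmed = frame[: h - h % gh, : w - w % gw]
    patches = trimmed.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Stimulate only where the patch is bright enough.
    return patches > threshold

# Example: a bright vertical bar in a 160x160 camera frame.
frame = np.zeros((160, 160))
frame[:, 70:90] = 1.0
pattern = frame_to_stimulation(frame)
print(pattern.shape, int(pattern.sum()))  # the bar lights two grid columns
```

The point of the sketch is the drastic compression involved: a 25,600-pixel frame is reduced to 256 on/off electrodes, which is why current visual prostheses convey shapes and outlines rather than detailed images.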
In addition, in early 2019 researchers from the University of California demonstrated a BCI that had the potential to help patients with speech impairment caused by neurological disorders.
It used high-density electrocorticography to tap neural activity from a patient’s brain and used deep-learning methods to synthesize speech. Other recent examples of note include working towards functional wrist and finger movements in a human with quadriplegia, neurorehabilitation of chronic stroke, individual finger control of the modular prosthetic limb and intracortical microstimulation to replace and augment vision. All of these have been the recipient of the Annual BCI Research Award from the BCI Award Foundation.
However, the biggest impediment to BCI technology at present is the lack of a sensor modality that provides safe, accurate and robust access to brain signals. Such a sensor would greatly expand the range of functions that can be reliably carried out to a minimum standard and would help move the potential use cases forward and into real-world deployment.
One of the Human Brain Project’s basic premises is that to understand the brain – to see the cellular details, the morphology of cells, their connections or axons – there is a need to slice open brains and have a look.
This is done at the Jülich Forschungszentrum institute by preserving then slicing the brain into 7,000 slices. It’s detailed work, but it generates huge amounts of data. The goal is then to piece the scanned images back together inside a computer and make a 3D ‘atlas’ of the brain.
“The idea is to establish a common platform for neuroscience research, medicine and advanced information technologies (high-performance computing, chip design, robotics),” Knoll says. “By linking these fields of research on an unprecedented scale, we aim to unlock synergy potential between them, leading to deeper insights into the complexity of the brain, new technology-driven approaches in neuromedicine, and innovations in computing, robotics and AI.”
The project looks at six technology-driven platform sub-projects: neuroinformatics, simulation, high-performance analytics and computing, neuromorphic computing, neurorobotics and medical informatics.
How can the data gleaned be interpreted and used? The key issue here is the limitation of the supercomputers. In particular, if the aim is to simulate models with as many neurons as our brains contain, then the computer needs to be able to absorb that amount of information. This is a big ask: the 7,000 initial brain slices are scanned in even finer detail, to arrive at a depth resolution of 1 micrometre. Each of those 1-micrometre scans becomes an image of 10 to 15 gigabytes in size. This kind of data needs the biggest computers in the world just to simulate a tiny fraction of a brain.
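The scale of that claim is easy to check with back-of-envelope arithmetic, taking the midpoint of the quoted 10-15 GB per scan:

```python
# Rough estimate of the brain-atlas data volume for a single brain.
slices = 7_000        # histological sections per brain
gb_per_scan = 12.5    # midpoint of the quoted 10-15 GB per 1-micrometre scan
total_gb = slices * gb_per_scan
total_tb = total_gb / 1_000   # decimal units: 1 TB = 1,000 GB
print(f"{total_gb:,.0f} GB ≈ {total_tb:.1f} TB for one brain")
# → 87,500 GB ≈ 87.5 TB for one brain
```

That is tens of terabytes for the raw imagery of one brain alone, before any simulation work begins, which is why the project leans on Europe’s largest high-performance computing centres.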
Practical uses are, however, already in action. “In the medical context, you need a lot of neuroscience data as a basis to get the correct targets for brain stimulation,” says Knoll. “In neurorobotics, we connect brain models designed by neuroscientists with virtual and physical robot bodies – to test and judge the models but also to draw inspiration for new robotic control systems.”
One area that is developing quickly is improving methods to read our brain activity and adjust or control it with brain stimulation techniques. Researchers at the Netherlands Institute for Neuroscience believe that these advances will lead to many new possibilities for restoring brain functions that have been lost or impaired as a result of an accident or disease. In the more distant future, similar technologies might also be applied to the healthy brain.
Human Brain Project scientist Professor Pieter Roelfsema focuses on this area, and the technology has already reached a stage in which electrodes on electronic chips implanted into the brain can read the ongoing activity of specific brain areas.
During a Human Brain Project open day last year, he said: “This technique allows, for instance, that patients with paralysis control a robotic arm or computer cursor with their ‘thoughts’. The reverse is also possible: with similar electronic chips, information can be directly transmitted to either the peripheral nerves or the brain. Cochlear prostheses, for instance, use this principle to let people with impaired hearing hear again, while stimulation of deep brain areas can relieve symptoms of Parkinson’s disease.”
As knowledge of more complex brain mechanisms increases and techniques to read from and write to the brain become more sophisticated, the possibilities for more advanced neurotechnology will rapidly increase as well. Progress and improvement can already be seen in the efficiency of therapeutic neuroprostheses.
Predictive brain technology is another area that is seeing advancements. In particular, mind mapping looks at the connections within the brain to investigate how they change when diseases such as Alzheimer’s and schizophrenia are present. From there, the idea is to work backwards and find out the early indications of such change.
Dr Peter Wagget, emerging technology director at IBM, worked on one such project. “With Alzheimer’s we identified a peptide that could tell us, years in advance, whether someone is more likely to get Alzheimer’s or not,” he says. “This works basically by identifying a biological marker. With Alzheimer’s, the analytics of someone’s body give a much better indication of the later development of the disease and the downstream risk of that.”
Related to this is Computational Brain Medicine, an emerging discipline that could be transformative in its quest to use computers to understand, diagnose, develop treatment, and monitor brain health.
Computational Brain Medicine (CBM)
Computational Brain Medicine (CBM) is a third example of brain tech in practice. It gives the ability to continuously monitor electrical activity in the brain for early signs of a seizure, for example, and delivers brief electrical pulses to reduce the risk of a fit in patients with refractory epilepsy.
“An example of computational brain medicine within the Human Brain Project is the work on personalized patient brain models using the simulation engine ‘The Virtual Brain’,” says Knoll. “It has led to a new method to improve the success rate of epilepsy surgeries and is being used to investigate network dynamics in other diseases as well.”
Other examples include monitoring how an athlete’s brain responds to an injury or how a particular soldier has reacted to a given combat situation. For example, helmets and neck patches can measure the location, frequency and severity of concussions, helping to reduce long-term brain injuries.
The flip side of replicating the brain in order to improve things for humans is using the brain to boost computational technology. This means building computers with neuromorphic architecture. This replicates the way that the brain works rather than being set up conventionally – purely to process ever greater volumes of data.
The idea is to take the physics of neurons, copy it into silicon circuits, and come out with an array of 200,000 neurons that behave like neurons in the brain, but that are 10,000 times faster.
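Neuromorphic chips implement spiking neuron models directly in silicon rather than as software. As a purely illustrative software sketch (not the chips’ actual implementation, and with textbook-style parameter values), the classic leaky integrate-and-fire model those circuits are based on can be simulated in a few lines:

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks
    back toward rest, integrates the input current, and emits a spike
    (then resets) whenever it crosses threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(current):
        # dv/dt = (-(v - v_rest) + R*I) / tau, integrated with Euler steps
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant 2 nA input drives regular, repeated spiking.
spikes = simulate_lif(np.full(5000, 2e-9))   # 0.5 s of simulated input
print(len(spikes), "spikes")
```

A silicon implementation of the same equation runs the physics directly in analogue or digital circuitry, which is where the quoted speed-up over biological neurons comes from.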
“With a traditional computer we would be looking to interrogate data to find out what a picture was, for example, but with neuromorphic architecture the processing is done more or less at the same time as the data comes on board,” Wagget explains.
Wagget was involved in a proof of concept project where chips were put on a satellite and the data was processed on board. “What this means in practice is that you can get a real-time image from the satellite because the analytics have been done on the chip already. This is in contrast to the somewhat time-consuming way in which images have been processed thus far,” he says.
“It has a plethora of uses where a real-time reaction is needed,” Wagget adds. “It’s about the access to real-time processing that does not require a lot of power. You get access to information and context rather than just data.”
This kind of computing is still in its infancy, and the devices are no smarter than a fruit fly. However, scale is inevitable, and the intention is to replicate the computational ability of the brain. “We understand the architecture of the brain and if we can copy that to a computer then we will have cognitive agents that operate thousands of times faster than the flesh and blood cognitive agents – humans!” he says.
Clearly there is a lot to do in terms of figuring out mechanisms that govern information processing in the brain and in terms of the practicalities on the technology side. There is also an ethical aspect about how and when the human brain should be boosted or altered as well as what a computer should and should not be able to do.
“Ethical issues arising from these technologies need careful consideration,” Knoll agrees. “In the Human Brain Project we were the first to establish a dedicated Neuroethics Project, and we follow the RRI (Responsible Research and Innovation) approach in our work. Globally, our project and the worldwide neuroscience initiatives have begun to align on this within the Forum of the International Brain Initiative, of which we are a founding member.”
“We all know that with technology performance improves over time and this is no different. This is such an interesting area and we are reaching a point where the way that the brain functions can be crossed over with uses in real life,” Wagget concludes.
Deep-brain stimulators (DBS) can be used, among other things, to relieve the tremors of Parkinson’s disease. Essentially the connections in the brain circuits are similar to the electrical wiring in a house or car. If one circuit malfunctions, it can disrupt the entire system.
DBS delivers its electrical currents to precise brain locations responsible for movement, regulating the abnormal brain cell activity that causes symptoms such as tremor and gait issues. This has the effect of disrupting the disruption, restoring order and improving disabling symptoms.
Michael Okun, MD, national medical director at the Parkinson’s Foundation, says: “DBS is a therapy which uses a thin lead inserted into the brain to apply electrical stimulation to a specific circuit which has the net effect of improving symptoms for well selected patients with Parkinson’s disease.”
The technology is already in use and is FDA approved for Parkinson’s, epilepsy and essential tremor. “It has an FDA Humanitarian approval for OCD and dystonia. And there are a myriad of other neuropsychiatric diseases under study,” Okun adds.
The technology is still at the development stage, and there are a great many known unknowns and unknown unknowns. “We do not know exactly how DBS works,” Okun says. “We know a lot about the biological changes in the brain in response to the electricity, but we know less about why it works. There are neurophysiological changes, neuropathological changes, neurovascular changes, neurochemical changes and changes in neuro-oscillations.”
DBS is also being used to target chronic neuropathic pain in cancer patients. The idea is to target the specific parts of the brain that are involved in pain perception, with the aim of masking the pain by producing other sensations such as buzzing or warmth.
There are many neurological issues that could be helped. For example, targeting other areas of the brain could help with symptoms that do not currently respond well to standard DBS targets: walking, balance, talking, and thinking.
The potential to look at different areas of the brain for different reasons and responses also gives rise to potential ethical issues. Prominently, there are concerns that DBS could become like plastic surgery. “We have proposed that DBS should only be performed to alleviate human suffering,” says Okun.