The power of AI in helping prevent blindness
Image credit: University of Leeds
While drug therapies take the credit for treating conditions such as macular degeneration, emerging screening techniques – such as AI-based systems – will also play a critical role.
Age-related macular degeneration (AMD) is the most common cause of sight loss in the UK, affecting more than 600,000 people, according to the Macular Society, a UK vision-loss charity.
The ability to control AMD and other causes of blindness, such as diabetic retinopathy, has improved hugely over the years. While fundamental research and new drug therapies are crucial, emerging techniques that scan and assess the eye also play an important role. (Eye scans can even help to diagnose conditions beyond the eye, such as ADHD and heart disease.)
Despite these advances, not all sight conditions can be treated. For instance, there are two forms of AMD, called ‘wet’ and ‘dry’. Wet AMD, caused by the growth of extra blood vessels, can be controlled by drug injections into the eye. Dry AMD – caused by the loss of nerve cells – cannot be treated.
“Right now, there’s nothing we can offer for late-stage dry AMD other than visual aids,” says Konstantinos Balaskas, a consultant ophthalmologist at Moorfields Eye Hospital in London. “There is currently no cure.”
However, emerging treatments for dry AMD are undergoing clinical trials. At least one is on the verge of receiving a licence from the US Food and Drug Administration (FDA), says Balaskas. While a treatment would come as a great relief for sufferers, it would create a massive increase in workload for healthcare providers, because measuring the progress of dry AMD by assessing eye scans is very time-consuming.
“It takes a human expert 43 minutes to assess a dry AMD retinal scan,” says Balaskas. “This would put a huge strain on clinical resources.”
Partly as a way to overcome this, Balaskas has been involved in a study that could make any potential treatment for dry AMD a practical option. He and his colleagues – from organisations including the NHS and University College London – have devised an AI-based ‘deep learning’ system that recognises the signs of dry AMD. The system was ‘trained’ on around 5,000 retinal scans that had earlier been classified by human experts. Once trained to recognise the signs of dry AMD, the system was validated on around 900 scans from a different data set. This so-called ‘external validation’ helps make the system more likely to work in practice, he says.
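The train-then-externally-validate pattern Balaskas describes can be illustrated with a minimal sketch. This is not the Moorfields system (which is a deep-learning model working on raw scans); it is a toy logistic-regression classifier on two invented summary features, with all names and numbers hypothetical, showing only the workflow: fit on one expert-graded dataset, then measure performance on a separate, unseen dataset.

```python
import random
import math

random.seed(0)

def make_scans(n):
    """Synthetic stand-in for labelled retinal scans: two invented
    summary features per scan and a label (1 = signs of dry AMD)."""
    data = []
    for _ in range(n):
        amd = random.random() < 0.5
        lesion = random.gauss(2.0 if amd else 0.5, 0.5)  # hypothetical lesion-area score
        thinning = random.gauss(1.5 if amd else 0.3, 0.5)  # hypothetical thinning score
        data.append(((lesion, thinning), 1 if amd else 0))
    return data

def train(data, epochs=200, lr=0.1):
    """Fit a logistic-regression classifier by stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            g = p - y  # gradient of the log-loss
            w[0] -= lr * g * x1
            w[1] -= lr * g * x2
            b -= lr * g
    return w, b

def accuracy(model, data):
    w, b = model
    hits = sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
               for (x1, x2), y in data)
    return hits / len(data)

training_set = make_scans(5000)  # scans graded earlier by human experts
external_set = make_scans(900)   # a different dataset, never seen in training
model = train(training_set)
print(f"external validation accuracy: {accuracy(model, external_set):.2f}")
```

The key point is the separation: the 900-scan external set plays no part in fitting the model, so its accuracy figure is a more honest estimate of how the system would behave in practice.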
Not only did the AI-based system outperform human experts in assessing the scans, but it was far quicker – taking two seconds rather than 43 minutes.
An emerging drug therapy for dry AMD, called a complement inhibitor, is undergoing several clinical trials. It works by arresting the disease’s progress. Assessing its effects quickly, using AI, could be vital in rolling out an eventual treatment to sufferers.
“In a busy clinic it would be impossible to do manually,” says Balaskas. “This is the value of using AI.”
The potential emergence of a treatment for dry AMD makes the need for this kind of system more urgent, he adds. “The main benefit of the AI system would be in enabling this treatment to be delivered in a meaningful way,” he says. “It would allow clinicians to assess how well a treatment is working. Otherwise, it would be impossible.” A typical ‘wet AMD’ clinic might see 120 patients per day. As the prevalence of dry AMD is about the same, this could potentially overwhelm surgeries, he says.
As well as helping surgeries, the AI system would be useful for pharmaceutical companies that are developing potential treatments for dry AMD. (In a Lancet paper summarising the project, Balaskas declares interests in a number of pharmaceutical companies, including Apellis – which is developing a dry AMD treatment.)
Automated assessment of retinal scans could also help to treat wet AMD, he says. Here, it could help high-street opticians – who can administer retinal scans but may not have the expertise to interpret them. This could usher in a true ‘screening’ process for wet AMD (which does not currently exist) and identify signs of the disease at a very early stage, which is vital in preventing long-term sight damage.
In terms of rolling out the AI system, Balaskas says: “It could be quite quick – especially if it has immediate application. It should at least align with the availability of any treatment [for dry AMD].”
Speed is also of the essence at Duke University in the US, where engineers have developed a robotic imaging tool to make the job of taking eye scans much easier and faster.
The system, which combines an imaging scanner with a robotic arm, can take an optical coherence tomography (OCT) image of both eyes in about one minute. This task is usually performed by a trained technician using a large tabletop system, with the scanner placed very close to the eye. Here, the patient simply stands at about arm’s length from the scanner. 3D cameras help the robot to locate the patient, while smaller cameras in the robot arm position the scanner precisely on the eye.
The system can scan the macula (the central part of the retina) and cornea (the clear front part of the eye) – both sites where many eye diseases occur. OCT is routinely used to diagnose eye diseases such as glaucoma, diabetic retinopathy and AMD.
“You wouldn’t need advanced training to use the technology,” says Ryan McNabb, a research scientist in the Department of Ophthalmology at Duke University Medical Center. He is optimistic that something similar could be used in places such as optometrist offices, primary-care clinics (such as GP practices) and A&E departments.
The team has started to image the eyes of volunteers, to help refine the robot’s targeting ability. Next, they plan to image patients with retinal or corneal diseases, to assess how well the system can capture abnormalities.
“While this is a solution for image collection issues, we think it will pair well with advances in machine learning for OCT image interpretation,” says McNabb. “We’re bringing OCT to patients rather than limiting these tools to specialised clinics. I think it will make it much easier to help more people.” The research was presented in a paper in Nature Biomedical Engineering last year.
While tabletop scanners will remain the norm for carrying out eye scans, US-based researchers at Southern Methodist University, the Retina Foundation of the Southwest (a research institute) and AI company Balanced Media have created an unconventional way of analysing OCT data: by using a video game.
Balanced Media created the game, called ‘Eye in the Sky’, which has OCT retinal images embedded in its environment. In the course of predicting the path of alien forces in the game, players unconsciously trace lines used to perform diagnostic measurements of OCT retinal scans – creating new datasets. These were integrated with Balanced’s AI platform and passed to the Retina Foundation and SMU, helping them to train a machine-learning algorithm that can analyse OCT images more accurately and precisely.
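The idea of turning many noisy player traces into one reliable label can be sketched simply. This is an illustration, not Balanced Media's actual pipeline: it assumes (hypothetically) that each player's in-game path approximates a retinal-layer boundary, and takes the per-column median across players as the consensus label used for training.

```python
import random
import statistics

random.seed(1)

# Hypothetical ground-truth layer boundary in an OCT image (pixel rows).
TRUE_BOUNDARY = [50 + x // 4 for x in range(32)]

def player_trace(truth):
    """One player's in-game path: the true boundary plus human error."""
    return [y + random.gauss(0, 3) for y in truth]

# Many players trace the same embedded OCT image while playing.
traces = [player_trace(TRUE_BOUNDARY) for _ in range(200)]

# The per-column median of all traces becomes the consensus training label.
consensus = [statistics.median(col) for col in zip(*traces)]

mean_err = sum(abs(c - t) for c, t in zip(consensus, TRUE_BOUNDARY)) / len(consensus)
print(f"mean consensus error: {mean_err:.2f} pixels")
```

With enough players, the median washes out individual mistakes, which is why crowdsourced gameplay can yield labels clean enough for training.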
AI makes it possible to analyse millions of individual retinal images, detecting patterns and pathologies that would previously have been impossible or impractical to find.
“We are seeing substantial improvements to image analysis,” explains Karl Csaky, CEO of the Retina Foundation of the Southwest. “This technology decreases costs and increases the number of images processed – as well as the accuracy and precision of image processing.”
While it makes sense to use eye scans to analyse eye health, these techniques can also reveal other conditions. Research led by the University of Leeds has used automated analysis of retinal scans to predict the risk of a heart attack with the help of an AI-based system that detects changes to tiny blood vessels in the retina. The system has made predictions about heart health with 70-80 per cent accuracy.
“If the eyes are translucent windows into our vascular system, maybe we can find information there that relates to other parts of the body,” says Alex Frangi, professor of computational medicine at the university and corresponding author of a paper in Nature Machine Intelligence describing the research.
He says that changes in retinal blood vessels are likely to be mirrored by other parts of the vasculature, such as capillaries in the brain and in the heart muscle. “Especially in early phases of cardiovascular disease, the changes in micro-vasculature are more or less common in many parts of the body,” he adds. “That underpins what we’ve done.”
The researchers trained an AI system using deep learning, to analyse both retinal scans and cardiac scans from 5,000 patients sourced from the UK Biobank database. The system identified links between retinal blood vessels and changes in the patient’s heart. After the training phase, it could estimate the size of the heart’s left ventricle using only information from the retinal scan.
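The Leeds work uses deep learning on paired scans, but the underlying idea – learning a mapping from retinal measurements to a cardiac measurement, then predicting from the retina alone – can be shown with a toy regression. Everything here is invented for illustration: the vessel-calibre score, the assumed linear link to left-ventricle mass, and the numbers.

```python
import random

random.seed(2)

def paired_record():
    """Synthetic stand-in for one study participant: a retinal-vessel
    measurement paired with left-ventricle mass from a cardiac scan."""
    vessel = random.uniform(0.5, 2.0)                 # hypothetical vessel-calibre score
    lv_mass = 60 + 25 * vessel + random.gauss(0, 4)   # grams; assumed linear link
    return vessel, lv_mass

# Training phase: 5,000 participants with both retinal and cardiac scans.
pairs = [paired_record() for _ in range(5000)]

# Ordinary least-squares fit of ventricle mass against the retinal score.
n = len(pairs)
mx = sum(v for v, _ in pairs) / n
my = sum(m for _, m in pairs) / n
slope = (sum((v - mx) * (m - my) for v, m in pairs)
         / sum((v - mx) ** 2 for v, _ in pairs))
intercept = my - slope * mx

def predict_lv_mass(vessel_score):
    """After training, estimate ventricle size from the retinal scan alone."""
    return slope * vessel_score + intercept

print(f"predicted LV mass at vessel score 1.5: {predict_lv_mass(1.5):.1f} g")
```

Once the link is learned from paired data, the cardiac scan is no longer needed at prediction time – which is exactly what makes an eye scan useful as a cheap front-end test.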
An enlarged left ventricle is linked to a higher risk of heart disease and is typically identified through diagnostic tests such as echocardiography or magnetic resonance imaging of the heart.
The ‘trained’ system was externally validated using a different dataset, called AREDS, covering around 3,000 people. Correlating its predictions with actual cases of myocardial infarction that occurred within 12 months of the original retinal scan gave a prediction accuracy of more than 70 per cent.
Frangi says the team has filed for a patent on the system. The next stage is to seek industry partners for possible licensing, as well as looking to incorporate it into hardware, “as it’s mainly software”.
While the system is designed to work using only the retinal scan data, it could be augmented to make a more accurate prediction. In a setting such as a retail optician’s, this might be the patient’s sex and age; in an eye clinic, it might add information such as body mass index.
Frangi says this kind of system could have a radical effect on health by identifying potential heart disease in eye clinics and at high-street opticians. However, he warns it could be difficult to implement, as opticians may be uncomfortable warning customers that they are at risk of a heart attack. If this can be overcome, though, it could help to identify heart problems more simply and cheaply. “In future, a system like an eye scanner could be at the front end of the whole process,” he says.
Chemists, drug developers and medics are rightly praised for the therapies they are developing to overcome blindness. However, their efforts would largely be for nothing without the hardware, such as AI-based scanning systems, that will make these emerging therapies a practical reality.
Australian researchers have used retinal scans to help diagnose two behavioural conditions, autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD). The conditions can be difficult to distinguish, yet distinguishing them matters because their treatments differ. (ADHD is treated using medication; autism typically is not.)
“ASD and ADHD often share similar traits, so diagnosing them can be lengthy and complicated,” explains Paul Constable, a research optometrist at Flinders University. “Our research aims to improve this.”
Using an electroretinogram (ERG), a diagnostic test that measures the electrical activity of the retina in response to light, the researchers found that children with ADHD showed higher overall ERG energy, while a technique called discrete wavelet transform (DWT) analysis created separate, identifiable peaks for each condition.
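The two measurements mentioned – overall signal energy and a discrete wavelet transform – can be demonstrated on toy data. This sketch is not the Flinders analysis: the ERG-like waveforms are invented, and the 'ADHD-like' trace simply has a larger amplitude, mirroring the higher overall energy the researchers report. The single-level Haar transform shown is the simplest DWT.

```python
import math

def haar_dwt(signal):
    """One level of the discrete wavelet transform with the Haar wavelet:
    splits a signal into smooth (approximation) and fluctuation (detail) parts."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def energy(coeffs):
    """Signal energy: the sum of squared coefficients."""
    return sum(c * c for c in coeffs)

# Toy ERG-like responses to a light flash (hypothetical shapes and units):
# the 'ADHD-like' waveform simply has a larger response amplitude.
t = [i / 16 for i in range(64)]
control = [100 * math.exp(-((x - 1.5) ** 2)) for x in t]
adhd_like = [140 * math.exp(-((x - 1.5) ** 2)) for x in t]

for name, wave in (("control", control), ("ADHD-like", adhd_like)):
    approx, detail = haar_dwt(wave)
    print(f"{name}: total ERG energy = {energy(approx) + energy(detail):.0f}")
```

Because the Haar transform is orthonormal, the energy of the wavelet coefficients equals the energy of the raw signal; the advantage of the DWT is that it also splits that energy across frequency bands, which is what lets separate, condition-specific peaks emerge.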
Constable’s co-researcher, Fernando Marmolejo-Ramos, says the work could be extended to other neurological conditions.
“Ultimately, we’re looking at how the eyes can help us understand the brain,” he says.
One in the eye
As well as pushing advances in retinal scanning, engineers have also developed vision-enhancing technology, including a ‘bionic chip’ to overcome blindness. Prima, an eye implant developed by French company Pixium Vision, is undergoing clinical trials in patients with dry AMD, including an 88-year-old British woman.
“This device offers the hope of restoring sight to people suffering vision loss due to dry AMD,” says Mahi Muqit, consultant vitreoretinal surgeon at Moorfields Eye Hospital. “The success of this operation – and evidence gathered through this clinical study – will provide evidence to determine the true potential of this treatment.”
A 2mm-wide microchip is inserted underneath a surgically created ‘trapdoor’ in the patient’s retina. The patient wears special glasses that capture video images, which are sent to a computer on their waistband. After AI processing, the image is projected as an infrared beam through the eye to the chip, which converts it into an electrical signal. The signal passes through the retinal cells and along the optic nerve into the brain, where it is interpreted as if it were natural vision.