
Can digital doctors save the NHS?
AI will play an important role in healthcare, but virtual doctors are still a long way off.
Britain’s treasured National Health Service (NHS) is at breaking point and healthcare chiefs are desperate to find new technologies and processes that can help them streamline operations and improve patient care.
A study compiled by the King’s Fund, the Health Foundation and the Nuffield Trust estimates that NHS staff shortages in England reached 100,000 by the end of 2019 and could deteriorate further to hit almost 250,000 by 2030. The cost of care is also spiralling, compounded by an increasingly elderly population and hikes in the price of new drugs.
The use of artificial intelligence (AI) – computerised algorithms that can take over some of the tasks currently performed by human staff – is being widely explored within multiple scenarios.
In a survey of 5,000 people conducted by YouGov on behalf of Microsoft UK, 46 per cent of healthcare leaders reported that their organisation used the technology in some capacity in 2019, 8 per cent more than in 2018. While medical organisations scored comparatively highly on using robotic process automation, general automation and voice-recognition technology to improve the efficiency of back-end administrative processes, most use cases within actual healthcare remain restricted to experimental applications and small, localised pilot studies.
Healthcare is a knowledge industry like any other, but one that is not yet making the most of the unstructured data at its disposal to digitise and streamline its processes and workflows, making care faster, more efficient and higher quality.
“Most other industries are further ahead than healthcare so it’s about taking proven technologies – not cutting-edge research projects but established impacts and benefits – and thinking about where we apply them in a healthcare setting,” says Dr Mark Davies, chief medical officer at IBM’s Watson Health Data and Analytics business in Europe. “About 80 per cent of the data is unstructured and we need technologies like AI, cloud and blockchain to make sense of it and turn it into actionable insights.”
One area is the optimisation of patient flows and procedures – using AI to process some of the repetitive background administration tasks, thus freeing up clinicians’ time to communicate directly with patients. Another application is helping doctors sift through the vast libraries of medical research and best practice guidelines to help identify the best method of treating individual patients.
“AI is quite helpful in coming up with new treatments and diagnostic methods because of the vast quantities of data that needs to be analysed to see correlations and reach conclusions,” adds hospital consultant Dr Tom Dolphin. “For humans that is quite laborious but for AI it is quite simple.”
The East Suffolk and North Essex NHS Foundation Trust (ESNEFT) has used robotic process automation (RPA) to accelerate invoice processing within its finance team, a project which is estimated to have released about 300 work hours in the first month of operation and 4,500 a month by the end of its first year.
“This is the NHS, so things do not tend to move fast!” comments ESNEFT chief technology officer Darren Atkins. “It is not about doing massive transformational change, it is about automating existing processes to make them a lot faster and free up staff from the parts of their roles they enjoy the least.”
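Automating an existing process in this way often means nothing more exotic than rules-based extraction: pulling structured fields out of documents so a finance system can be populated without retyping. The sketch below illustrates the idea; the invoice layout, field names and supplier are all invented for the example, not taken from ESNEFT's actual system.

```python
# Hedged sketch of rule-based field extraction, the kind of task an RPA
# invoice bot performs. The invoice format here is entirely made up.
import re

INVOICE = """Invoice No: INV-2024-0173
Supplier: Anglia Medical Supplies Ltd
Total Due: GBP 1,248.50"""

def extract_invoice_fields(text):
    """Pull the invoice number and total out of free text."""
    fields = {}
    m = re.search(r"Invoice No:\s*(\S+)", text)
    if m:
        fields["invoice_no"] = m.group(1)
    m = re.search(r"Total Due:\s*GBP\s*([\d,]+\.\d{2})", text)
    if m:
        fields["total"] = float(m.group(1).replace(",", ""))
    return fields

print(extract_invoice_fields(INVOICE))
# {'invoice_no': 'INV-2024-0173', 'total': 1248.5}
```

Real RPA deployments wrap rules like these in screen-level automation, but the time saving comes from the same principle: a machine does the repetitive reading and keying.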
“At the moment the IT infrastructure in the NHS is really poor,” Dolphin agrees. “A BMA survey found a quarter of participants lost more than four hours a week of productivity because of inefficient hardware and software, where sometimes it took more than four minutes just to log in.”
Davies says AI can also be used to personalise and create bespoke patient treatments, which can have an enormous impact on the outcome and targeting of clinical interventions while simultaneously driving down the cost of care by sharpening its focus.
“It can empower patients to take a more active involvement in their care, using AI to bring different data sources together to provide tools and apps that enable them to get a much more accurate, predictive picture of their lifestyle and condition,” he says.
IBM Watson for Clinical Trial Matching, for example, is already helping clinicians recruit patients for medical trials by making it easier to process associated inclusion and exclusion criteria. The Mayo Clinic in the US used AI to navigate large, complex data sets to increase recruitment for a breast cancer trial by over 80 per cent.
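At its core, trial matching is a filtering problem: every inclusion criterion must hold and no exclusion criterion may trigger. The sketch below shows that logic on toy data; the patient records, criteria and field names are invented for illustration and do not reflect how IBM's product actually represents them.

```python
# Illustrative rules-based trial matching: each criterion is a predicate
# over a structured patient record. All data here is hypothetical.

def matches_trial(patient, inclusion, exclusion):
    """True if every inclusion rule holds and no exclusion rule fires."""
    return (all(rule(patient) for rule in inclusion)
            and not any(rule(patient) for rule in exclusion))

patients = [
    {"id": "P1", "age": 54, "diagnosis": "breast cancer", "prior_chemo": False},
    {"id": "P2", "age": 71, "diagnosis": "breast cancer", "prior_chemo": True},
    {"id": "P3", "age": 48, "diagnosis": "lung cancer",   "prior_chemo": False},
]

inclusion = [
    lambda p: p["diagnosis"] == "breast cancer",
    lambda p: 18 <= p["age"] <= 70,
]
exclusion = [
    lambda p: p["prior_chemo"],  # previously treated patients excluded
]

eligible = [p["id"] for p in patients
            if matches_trial(p, inclusion, exclusion)]
print(eligible)  # ['P1']
```

The hard part in practice, and where NLP earns its keep, is turning free-text trial protocols and clinical notes into structured records like these in the first place.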
Healthcare scientists are also working alongside IT companies to build computational models of DNA sequences in order to describe interactions between cells that are too complex for humans to analyse using PCs and spreadsheets.
Station B – a partnership between Microsoft, Princeton University, gene and cell therapy specialist Oxford Biomedica and digital bioscience software developer Synthace – is a research system that enables scientists to engineer living cells using machine learning and data analysis. The aim is to build a platform hosted in Microsoft’s Azure cloud which is able to interrogate large volumes of biomedical data and advise scientists how best to proceed with research: for example, suggesting how best to edit DNA to make genes function in a particular way.
For the moment though, the idea of the ‘virtual doctor’ – a computerised physician or GP able to diagnose patients based on its understanding of how they express their symptoms using unstructured text analysis, voice recognition or natural language processing (NLP) – seems a long way from becoming a reality. In the short to medium term, AI will supplement rather than replace clinicians.
‘There is a lot of intuition in medicine – we don’t know how we are doing it ourselves so we cannot program software to do it either.’
“I wish AI stood for augmented intelligence – it’s not about computers taking over from doctors!” Davies exclaims. “We [IBM] see this as a clinical partner that sits beside the clinician to support and prompt their decision rather than taking over.”
Dolphin adds: “The problem with things like Babylon [a digital health service] is that it is limited to text input and is only an algorithm. The software does not understand what is happening because it has no access to contextual information, like the way a patient looks or smells, or how they interact with their partners for example. There is a lot of intuition in medicine – we don’t know how we are doing it ourselves so we cannot program software to do it either.”
IBM believes there is a place for AI-powered chatbots in healthcare, but perhaps not in the way that people may envisage. And any patient-facing system must be designed in a way which makes it clear that people are dealing with a computer, or a digital entity, rather than a human doctor.
“I have an issue with the term ‘virtual doctor’... but we are certainly thinking about which bits of the clinical consultation process can be automated to take some of the legwork away and provide value before people attend surgery,” says Davies.
IBM has deployed a virtual assistant (VA) at Alder Hey Children’s Hospital in Liverpool to provide a digital experience for younger patients that allows them to ask questions about their treatment plan in their own time and without the pressure of talking to a doctor – a project which has reduced anxiety in children and their parents.
Elsewhere, Moorfields Eye Hospital in London has developed a ‘digital front door’ that uses a virtual assistant (the Oriel Assistant) to field enquiries from staff, patients and the general public and process feedback about a new hospital in Kings Cross using NLP to make information more accessible to those with reduced vision.
A national AI lab
The NHS announced plans in August 2019 to set up a National Artificial Intelligence Lab to conduct research and enhance patient care through personalised screening and treatment.
The lab comes under the auspices of NHSX, which brings together teams from government and the health service to implement a digital transformation of the health and care system.
While the NHS is conscious of the potential advantages, it is also wary of trying to run before it can walk, and has called for regulation to make sure AI is done safely in a way that protects patient privacy, particularly when it comes to data usage.
Some of these issues were discussed in an NHSX report called ‘Artificial Intelligence: How to get it right’ published in October 2019.
Matthew Gould, chief executive of NHSX, convened a round table of regulators this February, concerned both that AI could be deployed unsafely and that adoption could be delayed while clinicians and healthcare organisations hold back until a regulatory framework gives them the confidence to use it.
“We haven’t worked out yet how to regulate machine learning – systems that are constantly iterating their algorithms, often at huge speed and for reasons that are not always transparent, even to their creators,” Gould wrote in a recent blog.
Dr Nicola Strickland, president of the Royal College of Radiologists, is also cautious. “I expect radiologists to be leaders in using AI algorithms to assist them, provided they can see evidence that these AI algorithms have been developed using large enough, properly curated data and rigorously validated and tested,” she says.
Talking to clinicians about using AI tends to elicit different responses depending on their field of speciality, but most appear to see both opportunities and challenges, particularly when it comes to solving current issues around staff shortages and time constraints that make it hard for healthcare professionals to engage with patients as effectively as they could.
“There is a level of enthusiasm in the radiology community which is really heartening,” says Davies. “Reading an image is a safety-critical event – if you make a mistake it has important implications for patients. In most other industries there are safety nets behind them and we think AI has the potential to ensure more consistency and better performance in the radiology process.”
Intel is working with various UK healthcare organisations to improve imaging processes, for example. One trial involves using AI to ensure the correct placement of naso-gastric (NG) tubes that carry food and medicine to a patient’s stomach through their nasal cavity. Currently, doctors and nurses use manual processes to assess whether they have fed the tube correctly into the oesophagus rather than into the windpipe by mistake, the consequences of which can be life-threatening.
Checks can be made using X-rays, but it can be hard for staff inexperienced in radiology to tell the two organs apart by looking at the images obtained, due to their close proximity. The Intel trial aims to solve the problem by training an AI model on thousands of stored chest X-ray images, so the computer can tell, currently with 90 per cent accuracy, whether the NG tube has been put in the right place.
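The underlying task is binary image classification: map features extracted from an X-ray to one of two anatomical placements. As a stand-in for the deep network such a trial would actually use, the toy sketch below classifies hand-made two-number feature vectors by nearest class centroid; the features, values and labels are all invented for illustration.

```python
# Toy nearest-centroid classifier standing in for the image model
# described above. The real trial would train a deep network on
# thousands of X-rays; every number here is made up.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Return the label whose centroid is nearest to x."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Hypothetical features, e.g. encoding tube-tip position and curvature.
training = {
    "oesophagus": [[0.8, 0.2], [0.9, 0.1], [0.85, 0.15]],
    "trachea":    [[0.2, 0.9], [0.1, 0.8], [0.15, 0.85]],
}
centroids = {label: centroid(vs) for label, vs in training.items()}

print(classify([0.82, 0.18], centroids))  # oesophagus
```

The clinical value comes less from the classifier itself than from the safety net: a second, tireless check on a placement error whose consequences can be life-threatening.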
Denmark-headquartered CorporateHealth International (CHI) is another healthcare organisation making advances with AI. The company delivers a managed colon-capsule endoscopy service in the UK that relies on a tiny video camera, no bigger than a vitamin pill, which is swallowed by the patient and takes up to 400,000 images as it travels through the digestive system to help spot symptoms of gastrointestinal diseases. The footage is then analysed by a team of nurses for signs of anything abnormal.
“For that AI can provide an extremely valuable tool, so we are training a neural network with data from previous procedures that our team has already and that neural network is now being used to help nurses highlight all images that are suspicious,” explains CHI managing director Hagen Wenzek.
Meanwhile in Wales, Cardiff University is using machine learning to enable the automated detection of lesions in positron emission tomography (PET) scans. The system will train and validate an AI model by analysing a large database of pre-contoured images contained in historical data sets to identify what existing lesions look like in order to diagnose them automatically with a high degree of accuracy.
Dolphin says it’s far too early to write off radiology as a profession though, given that there will always be a need for a human to process the contextual information that helps doctors come to a diagnosis.
“AI is very good at a specific modality like X-rays because that is what it is trained to do,” he explains. “But a radiologist doesn’t just suggest the best image for you – you have to give them context and history to get a much better diagnosis as to what image to provide.”
All agree that whatever the potential, the only way to head off any scepticism around the use of AI in healthcare is to conduct rigorous tests and evaluations on the technology developed.
“AI is coming out of the hype curve now but the way to have a mature discourse with healthcare professionals and the public around its value is by rooting it in scientific evaluation,” says Davies, with IBM having already submitted around 80 publications explaining where Watson Health can be used in the industry.
Meanwhile, Dolphin points out the importance of understanding how any specific AI model is reaching its conclusions – whether it is learning by itself and making up its own rules, and what rules it is using. He cites a case involving chest X-ray interpretations where the ML algorithm had become very good at spotting signs of pneumonia. But when users dug deeper to understand how, they found the AI model was focusing on the presence of a marker in the corner of the image which indicated that the patient was having their X-ray in the ward rather than coming down to the X-ray department, and therefore was more likely to be ill.
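The pneumonia case Dolphin describes is a textbook example of shortcut learning: if a spurious feature predicts the label better than the genuine signal does in the training data, a model optimised purely for training accuracy will latch onto it. The toy sketch below makes the failure concrete; the dataset, feature names and the perfect correlation between ward marker and illness are invented to mirror the anecdote, not taken from the actual study.

```python
# Toy illustration of shortcut learning: a "model" that simply picks the
# single most predictive feature on the training set latches onto the
# portable-X-ray ward marker, because ward patients in this (invented)
# data are always the ill ones.

def best_single_feature(rows, features):
    """Pick the feature whose value most often equals the label."""
    def accuracy(f):
        return sum(r[f] == r["pneumonia"] for r in rows) / len(rows)
    return max(features, key=accuracy)

train = [
    # lung_opacity is the genuine signal; ward_marker is the confound.
    {"lung_opacity": 1, "ward_marker": 1, "pneumonia": 1},
    {"lung_opacity": 0, "ward_marker": 1, "pneumonia": 1},
    {"lung_opacity": 1, "ward_marker": 1, "pneumonia": 1},
    {"lung_opacity": 0, "ward_marker": 0, "pneumonia": 0},
    {"lung_opacity": 1, "ward_marker": 0, "pneumonia": 0},
    {"lung_opacity": 0, "ward_marker": 0, "pneumonia": 0},
]
chosen = best_single_feature(train, ["lung_opacity", "ward_marker"])
print(chosen)  # ward_marker: the confound predicts the label perfectly
```

Deployed on images without that marker, such a model would fail, which is exactly why Dolphin argues that understanding how a model reaches its conclusions matters as much as its headline accuracy.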
The medical profession will need to make sure that AI is subjected to the same appraisals and testing as other technologies used in healthcare, such as pharmaceuticals, which are subject to strict assessments and transparent trials for everybody to see.
“All these things are unregulated and need to be properly tested,” says Dolphin. “Babylon is being championed by [Health Secretary] Matt Hancock, but we need to ask what tests have been done and where can we see it is safe. People won’t start using these AI models until they have demonstrated they are safe and the results have been published in a reputed medical journal.”
That is exactly what is happening now at Papworth Hospital in Cambridgeshire (see box) where 80 patients with cystic fibrosis (CF) have been recruited and have agreed to share their data for analysis, with plans to expand the trial to other national and international CF centres by 2021. The objective for the first year of Project Breathe is to confirm that home monitoring and virtual clinics in an adult CF population are safe and to verify the cost savings of the approach.
Doctors are always looking for new ways to improve patients’ health and to streamline the processes and treatments they employ for those patients’ benefit, but AI still has some way to go before it radically transforms healthcare.
The ESNEFT’s Atkins knows that the hype and opportunity of AI must be balanced against a sense of realism, but has no doubt that the technology can help medical professionals improve and verify the decisions they are making. As he says, “we know AI fits somewhere in the future, but I do not think we can predict right now exactly where it is going to take us”.
Project Breathe
When combined with the Internet of Things (IoT), AI can also improve patient care by collecting and processing medical data submitted by remote healthcare devices to avoid unnecessary hospital visits.
The Royal Papworth Hospital is trialling such a system for people with cystic fibrosis, as part of a consortium also involving the Cystic Fibrosis Trust, the University of Cambridge, Microsoft and social enterprise Magic Bullet.
Currently a hospital visit is the only way for a clinical team to monitor the health of cystic fibrosis sufferers, but this can put undue pressure on both patients and medical staff. “Patients can come in for a routine check-up, but their results are so bad that we end up having to admit them to hospital as an inpatient,” explains Samantha Henman, lead cystic fibrosis nurse at Royal Papworth Hospital. “That has a huge knock-on effect on their daily lives; they can’t go back home; they have to take unplanned time off work and their independence is suddenly gone. That creates a situation where CF patients get incredibly nervous and anxious before an appointment.”
Project Breathe aims to evaluate whether equipment and monitoring software can change the way that care for CF patients is managed. Suitable candidates are given lung function monitors, saturation probes (which measure oxygen in the blood), scales, a thermometer and a smartwatch that measures steps, resting heart rate and sleep. The data from these sensors is downloaded onto their phones and sent to a patient dashboard to create a picture of their health over time, which can be analysed by machine learning experts to develop algorithms able to detect the start of clinical deterioration.
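One simple form such a deterioration alert could take is comparing a patient's most recent lung-function readings against their own rolling baseline. The sketch below is a minimal illustration of that idea only: the windows, the 10 per cent threshold and the FEV1 values are invented, and the project's actual algorithms are developed by its machine-learning team, not published here.

```python
# Minimal sketch of one possible home-monitoring deterioration rule.
# All thresholds and readings are invented for illustration.

def flag_deterioration(fev1_readings, baseline_days=14, recent_days=3,
                       drop_fraction=0.10):
    """Flag if the mean of the last few daily FEV1 readings falls more
    than drop_fraction below the patient's own rolling baseline."""
    if len(fev1_readings) < baseline_days + recent_days:
        return False  # not enough history to establish a baseline yet
    window = fev1_readings[-(baseline_days + recent_days):-recent_days]
    baseline = sum(window) / baseline_days
    recent = sum(fev1_readings[-recent_days:]) / recent_days
    return recent < baseline * (1 - drop_fraction)

stable    = [3.1] * 14 + [3.0, 3.1, 3.05]   # litres, hypothetical
declining = [3.1] * 14 + [2.6, 2.5, 2.4]

print(flag_deterioration(stable), flag_deterioration(declining))
# False True
```

Comparing each patient against their own baseline, rather than a population norm, matters for a condition like CF where "normal" lung function varies widely between individuals.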