Speech-monitoring AI could help doctors keep patients safe
Image credit: Newscast/Time to Change
Machine learning systems could assist doctors in monitoring the mental health of patients through speech-based tests, US researchers have suggested.
A pair of neuroscientists were motivated to develop the AI system by the lack of access to mental health professionals – such as psychiatrists and therapists – in the US. While approximately one in five adults in the US lives with a mental illness, many are unable to reach, or cannot afford to see, a healthcare professional about their condition.
Psychiatrists primarily base their diagnoses and treatment plans on discussions with their patients; the researchers point out that this process is subjective and not entirely reliable. “Humans are not perfect. They can get distracted and sometimes miss out on subtle speech cues and warning signs. Unfortunately, there is no objective blood test for mental health,” said Professor Brita Elvevåg, a neuroscientist at the University of Tromsø.
Elvevåg teamed up with Professor Peter Foltz, from the Institute of Cognitive Science, and computing graduate student Chelsea Chandler, from the University of Colorado Boulder, to develop an AI system capable of assisting healthcare professionals in monitoring their patients’ mental condition. They produced a system which detects day-to-day changes in speech patterns which could indicate a decline in mental health.
The patient first answers a five-to-ten-minute series of questions by talking into their phone: they are asked about their emotional state, asked to tell a story and to listen to and repeat a story, and given a series of basic motor-skills tests. The collected speech samples are assessed by a machine learning system, which compares them with previous samples from the same patient and with samples from the broader population to ‘rate’ the patient’s mental state.
According to the researchers, sentences which do not follow a logical pattern are a symptom of schizophrenia; memory loss could indicate both cognitive and psychiatric complications, and shifts in tone and pace could hint at manic or depressive episodes.
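The comparison against a patient’s own history that the researchers describe could, in its simplest form, work along these lines. The sketch below is purely illustrative – the feature names (`words_per_sentence`, `coherence`), the word-overlap coherence proxy and the z-score threshold are assumptions for demonstration, not the team’s published method.

```python
# Illustrative sketch: flagging day-to-day deviation in crude speech features.
# All feature definitions and thresholds here are hypothetical assumptions.
from statistics import mean, stdev


def extract_features(transcript: str) -> dict:
    """Compute rough proxies for the cues described in the article."""
    sentences = [s.split() for s in transcript.split(".") if s.strip()]
    words = [w for s in sentences for w in s]
    # Coherence proxy: word overlap between adjacent sentences (low overlap
    # loosely mirrors sentences that "do not follow a logical pattern").
    overlaps = [
        len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
        for a, b in zip(sentences, sentences[1:])
    ]
    return {
        "words_per_sentence": len(words) / max(len(sentences), 1),
        "coherence": mean(overlaps) if overlaps else 1.0,
    }


def flag_deviation(history: list, today: dict, z_cut: float = 2.0) -> list:
    """Flag any feature whose value today lies more than z_cut standard
    deviations from this patient's own baseline samples."""
    flags = []
    for name, value in today.items():
        baseline = [day[name] for day in history]
        if len(baseline) < 2:
            continue  # not enough history for a baseline yet
        sd = stdev(baseline)
        if sd > 0 and abs(value - mean(baseline)) / sd > z_cut:
            flags.append(name)
    return flags
```

A real system would of course use far richer acoustic and linguistic features, and would combine the per-patient baseline with population-level samples, as the article describes; the point of the sketch is only the per-patient deviation check that could trigger an alert to a care provider.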
“Language is a critical pathway to detecting patient mental states,” said Foltz. “Using mobile devices and AI, we are able to track patients daily and monitor these subtle changes.”
In a test of the system, the researchers asked clinicians to assess 225 participants (half of whom had severe psychiatric conditions) based on speech samples. These assessments were then compared with the system’s output, and the researchers found that – in these limited circumstances – the AI model was at least as successful as the human clinicians.
“We are not in any way trying to replace clinicians,” said Foltz, co-author of the paper, “but we do believe we can create tools that will allow them to better monitor their patients.”
Foltz envisions a future scenario in which AI systems such as these could be used as a tool by clinicians to provide extra insights, as well as serving as a remote-monitoring system for patients with the most serious conditions. If a system like this detected a worrying change – a responsibility currently undertaken by trained professionals through regular interviews with the patient – it could notify the patient’s care provider to check on them.
Foltz and his collaborators have called for further studies to “prove efficacy and earn public trust” before AI systems like this can be introduced into clinical practice to assist psychiatrists and other mental health professionals.
“The mystery around AI does not nurture trustworthiness, which is critical when applying medical technology,” the authors of the paper wrote. “Rather than looking for machine learning models to become the ultimate decision-maker in medicine, we should leverage the things that machines do well that are distinct from what humans do well.”