Smart speakers could be trained to detect cardiac arrests
Virtual assistants like Amazon’s Alexa and Google Home could be used to detect when a user is having a cardiac arrest, according to University of Washington researchers.
The devices are trained to detect the gasping sound of agonal breathing – an abnormal breathing pattern typically caused by a severe medical emergency such as cardiac arrest – and then call for help.
On average, the proof-of-concept tool, which was developed using real agonal breathing instances captured from 911 calls, detected agonal breathing events 97 per cent of the time from up to 6m away.
In the UK there are over 30,000 cardiac arrests a year outside of hospital – in homes and communities – where the emergency medical services attempt resuscitation.
However, fewer than one in ten victims of cardiac arrest survive to be discharged from hospital; early intervention can improve these chances.
“A lot of people have smart speakers in their homes, and these devices have amazing capabilities that we can take advantage of,” said Shyam Gollakota, who was on the research team.
“We envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event, and alerts anyone nearby to come provide CPR. And then if there’s no response, the device can automatically call 911.”
Agonal breathing is present for about 50 per cent of people who experience cardiac arrests, according to 911 call data, and patients who take agonal breaths often have a better chance of surviving.
“This kind of breathing happens when a patient experiences really low oxygen levels,” said Dr Jacob Sunshine, who also worked on the project. “It’s sort of a guttural gasping noise, and its uniqueness makes it a good audio biomarker to use to identify if someone is experiencing a cardiac arrest.”
The researchers gathered sounds of agonal breathing from real 911 calls to Seattle’s Emergency Medical Services.
Because cardiac arrest patients are often unconscious, bystanders recorded the agonal breathing sounds by putting their phones up to the patient’s mouth so that the dispatcher could determine whether the patient needed immediate CPR.
The team collected 162 calls between 2009 and 2017 and extracted 2.5 seconds of audio at the start of each agonal breath to come up with a total of 236 clips.
The team captured the recordings on different smart devices – an Amazon Alexa, an iPhone 5s and a Samsung Galaxy S4 – and used various machine-learning techniques to boost the dataset to 7,316 positive clips.
“We played these examples at different distances to simulate what it would sound like if the patient was at different places in the bedroom,” said researcher Justin Chan.
“We also added different interfering sounds such as sounds of cats and dogs, cars honking, air conditioning, things that you might normally hear in a home.”
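The augmentation described above – replaying clips at different distances and layering in household interference – can be sketched roughly as follows. This is an illustrative reconstruction, not the team's actual pipeline: the sample rate, distances, signal-to-noise ratios and function names are all assumptions.

```python
# Illustrative sketch of the augmentation described: each 2.5 s positive
# clip is attenuated to simulate distance across a bedroom and mixed with
# interference (pets, traffic, air conditioning) at varying levels.
# All parameters here are assumptions, not taken from the paper.
import numpy as np

SAMPLE_RATE = 16_000  # assumed; a common rate for speech-audio models


def simulate_distance(clip: np.ndarray, distance_m: float) -> np.ndarray:
    """Crudely attenuate amplitude with distance (inverse-distance law)."""
    return clip / max(distance_m, 1.0)


def mix_with_interference(clip: np.ndarray, noise: np.ndarray,
                          snr_db: float) -> np.ndarray:
    """Mix a clip with background noise at a target signal-to-noise ratio."""
    noise = noise[: len(clip)]
    clip_power = np.mean(clip ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clip_power / (noise_power * 10 ** (snr_db / 10)))
    return clip + scale * noise


def augment(clip: np.ndarray, noises: list) -> list:
    """Generate several augmented variants of one positive clip."""
    variants = []
    for d in (1.0, 3.0, 6.0):              # distances across a bedroom
        for noise in noises:
            for snr in (5.0, 10.0, 20.0):  # interference levels in dB
                variants.append(
                    mix_with_interference(simulate_distance(clip, d),
                                          noise, snr))
    return variants
```

Each original clip thus yields many variants, which is how a few hundred recordings can be expanded into thousands of training examples.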
For the negative dataset, the team used 83 hours of audio data collected during sleep studies, yielding 7,305 sound samples. These clips contained typical sounds that people make in their sleep, such as snoring or obstructive sleep apnoea.
From these datasets, the team used machine learning to create a tool that could detect agonal breathing 97 per cent of the time when the smart device was placed up to 6m away from a speaker generating the sounds.
The team envisions this algorithm could function like an app, or a skill for Alexa that runs passively on a smart speaker or smartphone while people sleep.
“This could run locally on the processors contained in the Alexa. It’s running in real time, so you don’t need to store anything or send anything to the cloud,” Gollakota said.
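The continuous, on-device monitoring Gollakota describes amounts to sliding a short analysis window over the incoming audio and running the detector on each window, with no audio stored or uploaded. A hedged sketch, assuming a 2.5 s window (matching the training clips) and a one-second hop – both assumptions, as is the `detector` callable:

```python
# Sketch of streaming, on-device detection: slide a short window over
# incoming audio and run a detector on each window. Window and hop
# lengths are assumed, not taken from the paper.
import numpy as np

SAMPLE_RATE = 16_000
WINDOW_S = 2.5   # matches the 2.5 s training clips
HOP_S = 1.0      # assumed overlap between successive windows


def stream_detect(audio: np.ndarray, detector) -> list:
    """Return the start times (seconds) of windows the detector flags."""
    win = int(WINDOW_S * SAMPLE_RATE)
    hop = int(HOP_S * SAMPLE_RATE)
    flagged = []
    for start in range(0, len(audio) - win + 1, hop):
        if detector(audio[start:start + win]):
            flagged.append(start / SAMPLE_RATE)
    return flagged
```

A real deployment would also debounce repeated detections and escalate (alert a bystander, then dial emergency services) as the quote describes.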
“Right now, this is a good proof of concept using the 911 calls in the Seattle metropolitan area,” he said. “But we need to get access to more 911 calls related to cardiac arrest so that we can improve the accuracy of the algorithm further and ensure that it generalises across a larger population.”