Mobile app detects Covid-19 from the sound of someone's voice
Maastricht University scientists have developed a mobile phone application that can detect Covid-19 infections "more accurately than lateral flow tests".
Artificial intelligence (AI) can be used to detect Covid-19 infection in people’s voices by means of a mobile phone app, according to research to be presented on Monday at the European Respiratory Society International Congress in Barcelona, Spain.
One of the main symptoms of Covid-19 is inflammation of the upper respiratory tract and vocal cords, which usually leads to changes in a patient's voice. Maastricht University scientists therefore decided to investigate whether these changes could be used as an accurate method for diagnosing the disease, particularly in low-income countries where PCR tests are expensive or difficult to distribute.
With an 89 per cent accuracy rate, the AI model was able to diagnose Covid-19 more accurately than rapid antigen tests, according to Wafaa Aljbawi, a researcher at the Institute of Data Science, Maastricht University.
“These promising results suggest that simple voice recordings and fine-tuned AI algorithms can potentially achieve high precision in determining which patients have Covid-19 infection,” she said. “Such tests can be provided at no cost and are simple to interpret. Moreover, they enable remote, virtual testing and have a turnaround time of less than a minute. They could be used, for example, at the entry points for large gatherings, enabling rapid screening of the population.”
In order to train the algorithm, Aljbawi and her team used data from the University of Cambridge's crowd-sourced Covid-19 Sounds App, which contains 893 audio samples from 4,352 healthy and non-healthy participants, 308 of whom had tested positive for Covid-19.
The app they developed can be installed on the user’s mobile phone. To make use of it, the participants report some basic information about demographics, medical history and smoking status, and then are asked to record some respiratory sounds. These include coughing three times, breathing deeply through their mouth three to five times, and reading a short sentence on the screen three times.
To make the diagnosis, the researchers used a voice analysis technique called Mel-spectrogram analysis, which identifies different voice features such as loudness, power and variation over time.
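The article does not describe the team's preprocessing code, but the general technique can be sketched in plain NumPy: frame the waveform, take the power spectrum of each windowed frame, and project it onto triangular filters evenly spaced on the mel scale. All function names and parameter values below are illustrative choices, not taken from the study.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters with centres evenly spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, centre):          # rising slope
            fb[i - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):         # falling slope
            fb[i - 1, k] = (right - k) / max(right - centre, 1)
    return fb

def mel_spectrogram(signal, sr, n_fft=512, hop=256, n_mels=40):
    # Slide a Hann window over the signal, take each frame's power
    # spectrum, then project onto the mel filterbank
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)) ** 2)
    power_spec = np.array(frames).T            # (n_fft//2 + 1, n_frames)
    return mel_filterbank(n_mels, n_fft, sr) @ power_spec

# One second of a synthetic 440Hz tone stands in for a voice recording
sr = 8000
t = np.arange(sr) / sr
spec = mel_spectrogram(np.sin(2 * np.pi * 440 * t), sr)
print(spec.shape)                              # (mel bands, time frames)
```

The result is a small image-like matrix (mel bands by time frames), which is a convenient input for the sequence models discussed below.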
“In order to distinguish the voice of Covid-19 patients from those who did not have the disease, we built different artificial intelligence models and evaluated which one worked best at classifying the Covid-19 cases,” said Aljbawi.
The best-performing model was one known as Long Short-Term Memory (LSTM). LSTM is based on neural networks, which mimic the way the human brain operates by recognising the underlying relationships in data. It works with sequences, which makes it suitable for modelling signals collected over time, such as the voice, because of its ability to store data in its memory.
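The authors' exact architecture is not described in the article, but the "memory" idea behind an LSTM can be shown with a single cell written in NumPy: at each time step, learned gates decide what to forget from, add to, and read out of an internal cell state. The shapes and random weights below are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b, n):
    # One LSTM step: four gates computed from the input and previous state
    z = W @ x + U @ h + b
    i = sigmoid(z[:n])            # input gate: what to write to memory
    f = sigmoid(z[n:2 * n])       # forget gate: what to erase from memory
    o = sigmoid(z[2 * n:3 * n])   # output gate: what to read out
    g = np.tanh(z[3 * n:])        # candidate values to store
    c = f * c + i * g             # update the cell "memory"
    h = o * np.tanh(c)            # hidden state passed to the next step
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 40, 16              # e.g. 40 mel bands per time step
W = rng.normal(0.0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0.0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

# Feed a sequence of 30 spectrogram-like frames one step at a time
h = np.zeros(n_hid)
c = np.zeros(n_hid)
for frame in rng.normal(size=(30, n_in)):
    h, c = lstm_step(frame, h, c, W, U, b, n_hid)

# A final logistic layer on the last hidden state would give a
# Covid-19 probability for the whole recording
prob = sigmoid(rng.normal(0.0, 0.1, n_hid) @ h)
print(float(prob))
```

Because the cell state `c` persists across steps, the network can accumulate evidence over the whole recording rather than judging each frame in isolation, which is what makes LSTMs a natural fit for voice signals.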
The app's overall accuracy was 89 per cent, the same as its ability to correctly detect positive cases (the true positive rate or 'sensitivity'). Its ability to correctly identify negative cases (the true negative rate or 'specificity') was 83 per cent.
In contrast, lateral flow tests have a sensitivity of just 56 per cent, but a higher specificity of 99.5 per cent. This means that lateral flow tests misclassify infected people as Covid-19 negative more often than the AI model does.
“These results show a significant improvement in the accuracy of diagnosing Covid-19 compared to state-of-the-art tests such as the lateral flow test,” said Aljbawi. “In other words, with the AI LSTM model, we could miss 11 out of 100 cases who would go on to spread the infection, while the lateral flow test would miss 44 out of 100 cases."
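The figures quoted above follow directly from the definition of sensitivity: the miss rate is simply one minus sensitivity. A short sketch using the numbers reported in the article:

```python
# Sensitivity = true positives / all infected people tested
# Specificity = true negatives / all uninfected people tested
def miss_rate(sensitivity):
    # Fraction of infected people a test wrongly clears as negative
    return 1.0 - sensitivity

# Figures reported in the study
ai_sensitivity, ai_specificity = 0.89, 0.83
lft_sensitivity, lft_specificity = 0.56, 0.995

print(f"AI model misses {miss_rate(ai_sensitivity) * 100:.0f} in 100 infected people")
print(f"Lateral flow misses {miss_rate(lft_sensitivity) * 100:.0f} in 100 infected people")
```

The trade-off runs the other way on specificity: the lateral flow test's 99.5 per cent means it almost never flags a healthy person, while the AI model's 83 per cent would wrongly flag about 17 in 100 uninfected people.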
The researchers stressed that their results need to be validated on much larger numbers of participants. For this purpose, they have collected 53,449 audio samples from 36,116 participants, which they plan to use to improve and validate the model's accuracy. They are also carrying out further analysis to understand which features of the voice influence the AI model.