Can you tell whether patients have COVID-19 by their voice? That is the challenge several research teams have set themselves. Since the beginning of the pandemic, a number of initiatives have sought to design artificial intelligence programs that use voice recognition to diagnose whether or not a person has the disease. Yet despite these research projects, none of the resulting models is fully reliable.
Voice analysis to diagnose COVID-19
The voice is essential to speech. It is the set of sounds produced by the vibration of the vocal cords as air passes through them. From this definition it follows that the quality of a person’s voice depends on their breathing, and dyspnea (shortness of breath) is one of the symptoms a person with COVID-19 may experience.
As dyspnea develops, it can also affect the voice, especially when combined with a severe cough that strains the vocal cords. The deterioration may present as “hoarseness”, with “jerky”, “rough” or “dry” tones, or as a drop in voice pitch.
With this in mind, a team of researchers investigated the voice quality of patients with COVID-19. This acoustic analysis was the subject of a publication by Maral Asiaee, Amir Vahedian-azimi, Seyed Shahab Atashi, Abdalsamad Keramatfar and Mandana Nourbakhsh, five Iranian researchers.
The results showed significant differences between healthy participants and COVID-19 patients, and also between the male and female participants tested. This is because a symptomatic person expels less air when speaking, and the movements of the larynx and of the other respiratory muscles also change.
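To give a concrete sense of what such an acoustic analysis involves, the sketch below extracts a few common voice-quality measures from a recording: mean fundamental frequency plus rough jitter and shimmer proxies. It is only an illustration of this kind of feature extraction, not the Iranian team’s protocol; the file name and the simplified proxy formulas are assumptions.

```python
# Hypothetical sketch of voice-quality feature extraction.
# The jitter/shimmer formulas below are crude proxies, not Praat-grade measures.
import numpy as np
import librosa

def voice_features(path, fmin=65.0, fmax=400.0):
    y, sr = librosa.load(path, sr=None, mono=True)
    # Frame-wise fundamental frequency (F0) with the pYIN tracker.
    f0, _, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0 = f0[~np.isnan(f0)]                 # keep voiced frames only
    periods = 1.0 / f0                     # pitch periods in seconds
    # Crude jitter proxy: relative cycle-to-cycle variation of the pitch period.
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    # Crude shimmer proxy: relative frame-to-frame variation of RMS amplitude.
    rms = librosa.feature.rms(y=y)[0]
    rms = rms[rms > 1e-6]
    shimmer = np.mean(np.abs(np.diff(rms))) / np.mean(rms)
    return {"mean_f0_hz": float(np.mean(f0)),
            "jitter_proxy": float(jitter),
            "shimmer_proxy": float(shimmer)}

# Example call (hypothetical file name):
# print(voice_features("patient_sustained_vowel.wav"))
```

Comparing such measures across groups is, in broad strokes, how differences between healthy and symptomatic speakers can be quantified.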
Artificial intelligence programs don’t work in 100% of cases
Building on these voice analyses, several research teams set out to design AI programs that detect COVID-19 through voice recognition. In theory, the researchers had enough data to build a model capable of giving an effective diagnosis.
A paper published by researchers Mahmoud Al Ismail, Soham Deshmukh and Rita Singh of Carnegie Mellon University explains how they designed an algorithm called ADLES, which fits a dynamical model of vocal fold oscillation to recorded speech. The samples used to test the program came from symptomatic people diagnosed with COVID-19 and from others who were not infected. Unfortunately, the system was not tested on patients with COVID-19 who were asymptomatic.
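To give a flavour of what a dynamical model of vocal fold oscillation looks like, the toy simulation below couples two van der Pol-type oscillators standing in for the left and right folds. It is a simplified illustration of this class of model, not the ADLES algorithm or the exact equations from the Carnegie Mellon paper; the parameters alpha (coupling), beta (damping) and delta (asymmetry) are illustrative assumptions.

```python
# Illustrative toy model only: two coupled van der Pol-type oscillators
# representing left/right vocal fold displacement. NOT the authors' method.
import numpy as np
from scipy.integrate import solve_ivp

def vocal_fold_ode(t, state, alpha=0.5, beta=0.3, delta=0.1):
    xr, vr, xl, vl = state            # displacement and velocity, right/left fold
    coupling = alpha * (vr + vl)      # shared coupling term (e.g. airflow pressure)
    ar = coupling - beta * (1 + xr**2) * vr - (1 - delta / 2) * xr
    al = coupling - beta * (1 + xl**2) * vl - (1 + delta / 2) * xl
    return [vr, ar, vl, al]

# Simulate a short stretch of oscillation from a small initial displacement.
sol = solve_ivp(vocal_fold_ode, (0.0, 100.0), [0.1, 0.0, 0.1, 0.0], max_step=0.05)
print("final right-fold displacement:", sol.y[0, -1])
```

The broad idea in this line of work is to adjust such model parameters until the simulated oscillation matches a patient’s recorded speech, and then use the fitted parameters as diagnostic features.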
The researchers themselves point out that these vocal changes can only be measured in symptomatic patients, and that the program is therefore limited in its ability to distinguish whether the symptoms are caused by COVID-19 or by another condition.
In other research, by Tong Xia, Jing Han, Lorena Qendro, Ting Dang and Cecilia Mascolo of the University of Cambridge, the researchers acknowledge that their model is not perfect. Their machine learning models can pick up some nuances in the sounds made by people with COVID-19, but recognition accuracy varies and remains limited.
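As a rough illustration of the general recipe behind such machine learning models (again, not the Cambridge team’s actual architecture), the sketch below summarizes each recording with MFCC statistics and trains a simple classifier; the file names, labels and dataset size are placeholders.

```python
# Minimal sketch of a sound-based COVID-19 classifier. Placeholder data only.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def mfcc_summary(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Mean and standard deviation of each coefficient over time.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical dataset: (wav_path, label) pairs, label 1 = COVID-positive.
dataset = [("cough_001.wav", 1), ("cough_002.wav", 0)]  # ... many more recordings
X = np.vstack([mfcc_summary(path) for path, _ in dataset])
y = np.array([label for _, label in dataset])

clf = LogisticRegression(max_iter=1000)
# With a real dataset (dozens of recordings per class), cross-validation gives
# a first estimate of how well the features separate the two groups.
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```

Even a pipeline like this makes the limitation visible: accuracy depends heavily on how varied and representative the recordings are, which is precisely where the published models fall short of 100% reliability.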
Translated from La reconnaissance vocale pour diagnostiquer le COVID-19 n’est pas 100% fiable