Audio-based AI classifiers show no evidence of improved COVID-19
screening over simple symptoms checkers
- URL: http://arxiv.org/abs/2212.08570v1
- Date: Thu, 15 Dec 2022 15:44:02 GMT
- Title: Audio-based AI classifiers show no evidence of improved COVID-19
screening over simple symptoms checkers
- Authors: Harry Coppock, George Nicholson, Ivan Kiskin, Vasiliki Koutra, Kieran
Baker, Jobie Budd, Richard Payne, Emma Karoune, David Hurley, Alexander
Titcomb, Sabrina Egglestone, Ana Tendero Cañadas, Lorraine Butler, Radka
Jersakova, Jonathon Mellor, Selina Patel, Tracey Thornley, Peter Diggle,
Sylvia Richardson, Josef Packham, Björn W. Schuller, Davide Pigoli, Steven
Gilmour, Stephen Roberts, Chris Holmes
- Abstract summary: We collect and analyse a dataset of audio recordings from 67,842 individuals with linked metadata.
Subjects were recruited via the UK government's National Health Service Test-and-Trace programme and the REal-time Assessment of Community Transmission survey.
In an unadjusted analysis of our dataset, AI classifiers predict SARS-CoV-2 infection status with high accuracy.
However, after matching on measured confounders, such as age, gender, and self-reported symptoms, our classifiers' performance is much weaker.
- Score: 37.085063562292845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has reported that AI classifiers trained on audio recordings can
accurately predict severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)
infection status. Here, we undertake a large-scale study of audio-based deep
learning classifiers, as part of the UK government's pandemic response. We
collect and analyse a dataset of audio recordings from 67,842 individuals with
linked metadata, including reverse transcription polymerase chain reaction
(PCR) test outcomes, of whom 23,514 tested positive for SARS-CoV-2. Subjects
were recruited via the UK government's National Health Service Test-and-Trace
programme and the REal-time Assessment of Community Transmission (REACT)
randomised surveillance survey. In an unadjusted analysis of our dataset, AI
classifiers predict SARS-CoV-2 infection status with high accuracy (Receiver
Operating Characteristic Area Under the Curve (ROC-AUC) 0.846 [0.838, 0.854]),
consistent with the findings of previous studies. However, after matching on
measured confounders, such as age, gender, and self-reported symptoms, our
classifiers' performance is much weaker (ROC-AUC 0.619 [0.594, 0.644]). Upon
quantifying the utility of audio-based classifiers in practical settings, we
find them to be outperformed by simple predictive scores based on user-reported
symptoms.
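The gap between the unadjusted and matched results can be illustrated with a small self-contained sketch. Everything below is synthetic and illustrative (the cohort, the "classifier" scores, and the simple one-variable stratification are assumptions for demonstration, not the study's actual data or matching pipeline): a score that mostly detects a confounder (symptomatic status) looks highly accurate in an unadjusted evaluation, but much weaker once evaluation is stratified on that confounder.

```python
import random

def roc_auc(labels, scores):
    """ROC-AUC as the probability that a random positive outranks a
    random negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)

# Synthetic cohort: symptomatic status is a confounder correlated
# with infection status.
n = 2000
symptomatic = [random.random() < 0.5 for _ in range(n)]
infected = [1 if random.random() < (0.7 if s else 0.2) else 0
            for s in symptomatic]

# A hypothetical classifier score that mostly picks up the confounder
# (weight 0.8) plus only a weak true infection signal (weight 0.1).
score = [0.8 * s + 0.1 * y + random.gauss(0, 0.3)
         for s, y in zip(symptomatic, infected)]

# Unadjusted analysis: evaluate on the whole cohort at once.
auc_unadjusted = roc_auc(infected, score)

# Matched analysis (sketch): evaluate within strata of the confounder,
# then average the per-stratum AUCs.
strata = [[(y, sc) for s2, y, sc in zip(symptomatic, infected, score)
           if s2 == v] for v in (True, False)]
auc_matched = sum(roc_auc([y for y, _ in st], [sc for _, sc in st])
                  for st in strata) / len(strata)

print(auc_unadjusted, auc_matched)  # matched AUC is noticeably lower
```

The qualitative pattern mirrors the paper's headline result: the unadjusted AUC is inflated by the confounder, and the within-stratum AUC reflects only the weak residual audio signal.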
Related papers
- Automatically measuring speech fluency in people with aphasia: first
achievements using read-speech data [55.84746218227712]
This study aims at assessing the relevance of a signal processing algorithm, initially developed in the field of language acquisition, for the automatic measurement of speech fluency.
arXiv Detail & Related papers (2023-08-09T07:51:40Z) - Coswara: A respiratory sounds and symptoms dataset for remote screening
of SARS-CoV-2 infection [23.789227109218118]
This paper presents the Coswara dataset, a dataset containing diverse set of respiratory sounds and rich meta-data.
The respiratory sounds comprise nine sound categories associated with variants of breathing, cough and speech.
The paper summarizes the data collection procedure and the demographic, symptom, and audio data information.
arXiv Detail & Related papers (2023-05-22T06:09:10Z) - Learning to diagnose cirrhosis from radiological and histological labels
with joint self and weakly-supervised pretraining strategies [62.840338941861134]
We propose to leverage transfer learning from large datasets annotated by radiologists, to predict the histological score available on a small annex dataset.
We compare different pretraining methods, namely weakly-supervised and self-supervised ones, to improve the prediction of cirrhosis.
This method outperforms the baseline classification of the METAVIR score, reaching an AUC of 0.84 and a balanced accuracy of 0.75.
arXiv Detail & Related papers (2023-02-16T17:06:23Z) - A large-scale and PCR-referenced vocal audio dataset for COVID-19 [29.40538927182366]
The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022.
Audio recordings of volitional coughs, exhalations, and speech were collected in the 'Speak up to help beat coronavirus' digital survey.
This dataset has additional potential uses for bioacoustics research, with 11.30% participants reporting asthma.
arXiv Detail & Related papers (2022-12-15T11:40:40Z) - Developing a multi-variate prediction model for the detection of
COVID-19 from Crowd-sourced Respiratory Voice Data [0.0]
The novelty of this work is in the development of a deep learning model for the identification of COVID-19 patients from voice recordings.
We used the Cambridge University dataset consisting of 893 audio samples, crowd-sourced from 4352 participants who used a COVID-19 Sounds app.
Based on the voice data, we developed deep learning classification models to detect positive COVID-19 cases.
arXiv Detail & Related papers (2022-09-08T11:46:37Z) - Sounds of COVID-19: exploring realistic performance of audio-based
digital testing [17.59710651224251]
In this paper, we explore the realistic performance of audio-based digital testing of COVID-19.
We collected a large crowdsourced respiratory audio dataset through a mobile app, alongside recent COVID-19 test result and symptoms intended as a ground truth.
The unbiased model takes features extracted from breathing, coughs, and voice signals as predictors and yields an AUC-ROC of 0.71 (95% CI: 0.65$-$0.77)
arXiv Detail & Related papers (2021-06-29T15:50:36Z) - Quantification of pulmonary involvement in COVID-19 pneumonia by means
of a cascade of two U-nets: training and assessment on multiple datasets using
different annotation criteria [83.83783947027392]
This study aims at exploiting Artificial intelligence (AI) for the identification, segmentation and quantification of COVID-19 pulmonary lesions.
We developed an automated analysis pipeline, the LungQuant system, based on a cascade of two U-nets.
The accuracy in predicting the CT-Severity Score (CT-SS) of the LungQuant system has been also evaluated.
arXiv Detail & Related papers (2021-05-06T10:21:28Z) - Uncertainty-Aware COVID-19 Detection from Imbalanced Sound Data [15.833328435820622]
We propose an ensemble framework where multiple deep learning models for sound-based COVID-19 detection are developed.
It is shown that false predictions often yield higher uncertainty.
This study paves the way for a more robust sound-based COVID-19 automated screening system.
arXiv Detail & Related papers (2021-04-05T16:54:03Z) - Detecting COVID-19 from Breathing and Coughing Sounds using Deep Neural
Networks [68.8204255655161]
We adapt an ensemble of Convolutional Neural Networks to classify if a speaker is infected with COVID-19 or not.
Ultimately, it achieves an Unweighted Average Recall (UAR) of 74.9%, or an Area Under ROC Curve (AUC) of 80.7% by ensembling neural networks.
arXiv Detail & Related papers (2020-12-29T01:14:17Z) - CovidDeep: SARS-CoV-2/COVID-19 Test Based on Wearable Medical Sensors
and Efficient Neural Networks [51.589769497681175]
The novel coronavirus (SARS-CoV-2) has led to a pandemic.
The current testing regime based on Reverse Transcription-Polymerase Chain Reaction for SARS-CoV-2 has been unable to keep up with testing demands.
We propose a framework called CovidDeep that combines efficient DNNs with commercially available WMSs for pervasive testing of the virus.
arXiv Detail & Related papers (2020-07-20T21:47:28Z)
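Two of the metrics quoted in the list above, Unweighted Average Recall (UAR) and its binary-case counterpart balanced accuracy, are simply the mean of per-class recalls. As a quick reference, here is a minimal implementation (a generic definition of the metric, not code from any of the papers):

```python
def unweighted_average_recall(labels, preds):
    """Mean of per-class recalls; unlike plain accuracy, it is not
    dominated by the majority class under class imbalance."""
    classes = sorted(set(labels))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(labels) if y == c]
        recalls.append(sum(preds[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Toy example: 4 positives, 2 negatives.
labels = [1, 1, 1, 1, 0, 0]
preds  = [1, 1, 1, 0, 0, 1]
uar = unweighted_average_recall(labels, preds)
print(uar)  # (3/4 + 1/2) / 2 = 0.625
```

For two classes, this is exactly balanced accuracy, which is why papers report the two interchangeably.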
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.