Sounds of COVID-19: exploring realistic performance of audio-based
digital testing
- URL: http://arxiv.org/abs/2106.15523v1
- Date: Tue, 29 Jun 2021 15:50:36 GMT
- Title: Sounds of COVID-19: exploring realistic performance of audio-based
digital testing
- Authors: Jing Han and Tong Xia and Dimitris Spathis and Erika Bondareva and
Chloë Brown and Jagmohan Chauhan and Ting Dang and Andreas Grammenos and
Apinan Hasthanasombat and Andres Floto and Pietro Cicuta and Cecilia Mascolo
- Abstract summary: In this paper, we explore the realistic performance of audio-based digital testing of COVID-19.
We collected a large crowdsourced respiratory audio dataset through a mobile app, alongside recent COVID-19 test results and symptoms, intended as ground truth.
The unbiased model takes features extracted from breathing, coughs, and voice signals as predictors and yields an AUC-ROC of 0.71 (95% CI: 0.65–0.77)
- Score: 17.59710651224251
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Researchers have been battling with the question of how we can identify
Coronavirus disease (COVID-19) cases efficiently, affordably and at scale.
Recent work has shown how audio-based approaches, which collect respiratory
audio data (cough, breathing and voice), can be used for testing; however, there
is a lack of exploration of how biases and methodological decisions impact
these tools' performance in practice. In this paper, we explore the realistic
performance of audio-based digital testing of COVID-19. To investigate this, we
collected a large crowdsourced respiratory audio dataset through a mobile app,
alongside recent COVID-19 test results and symptoms, intended as ground truth.
Within the collected dataset, we selected 5,240 samples from 2,478 participants
and split them into different participant-independent sets for model
development and validation. Among these, we controlled for potential
confounding factors (such as demographics and language). The unbiased model
takes features extracted from breathing, coughs, and voice signals as
predictors and yields an AUC-ROC of 0.71 (95% CI: 0.65–0.77). We further
explore different unbalanced distributions to show how biases and participant
splits affect performance. Finally, we discuss how the realistic model
presented could be integrated in clinical practice to realize continuous,
ubiquitous, sustainable and affordable testing at population scale.
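The evaluation protocol the abstract describes, participant-independent splitting followed by AUC-ROC evaluation, can be sketched in plain Python. This is an illustrative reconstruction, not the paper's pipeline; the (participant_id, features, label) sample format and the function names are assumptions:

```python
import random

def participant_independent_split(samples, test_frac=0.3, seed=0):
    """Split samples so no participant appears in both train and test.

    `samples` is a list of (participant_id, features, label) tuples;
    this format is illustrative, not taken from the paper.
    """
    pids = sorted({pid for pid, _, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(pids)
    n_test = max(1, int(len(pids) * test_frac))
    test_pids = set(pids[:n_test])
    train = [s for s in samples if s[0] not in test_pids]
    test = [s for s in samples if s[0] in test_pids]
    return train, test

def auc_roc(scores, labels):
    """AUC-ROC via the Mann-Whitney formulation: the probability that a
    randomly drawn positive sample scores above a random negative,
    counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Splitting by participant rather than by sample is what prevents the model from "recognising" a person's voice across train and test, which would inflate the reported AUC.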
Related papers
- Stethoscope-guided Supervised Contrastive Learning for Cross-domain
Adaptation on Respiratory Sound Classification [1.690115983364313]
We introduce cross-domain adaptation techniques, which transfer the knowledge from a source domain to a distinct target domain.
In particular, by considering different stethoscope types as individual domains, we propose a novel stethoscope-guided supervised contrastive learning approach.
The experimental results on the ICBHI dataset demonstrate that the proposed methods are effective in reducing domain dependency, achieving an ICBHI Score of 61.71%, a significant improvement of 2.16% over the baseline.
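The supervised contrastive objective mentioned above can be illustrated with a minimal pure-Python sketch of a standard supervised contrastive loss over L2-normalised embeddings; the stethoscope-guided conditioning the paper proposes is not modelled here, and the function name and batch format are assumptions:

```python
import math

def sup_con_loss(embeddings, labels, temperature=0.07):
    """Supervised contrastive loss over one batch: samples sharing a
    label are pulled together, all others pushed apart. Assumes the
    embeddings are already L2-normalised."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    n = len(embeddings)
    total, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without a same-label partner are skipped
        denom = sum(math.exp(dot(embeddings[i], embeddings[k]) / temperature)
                    for k in range(n) if k != i)
        for j in positives:
            total += -math.log(
                math.exp(dot(embeddings[i], embeddings[j]) / temperature) / denom)
            count += 1
    return total / count
```

Treating each stethoscope type as a domain, the paper's variant would choose positives and negatives with the device label in mind; the vanilla loss above only uses the class label.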
arXiv Detail & Related papers (2023-12-15T08:34:31Z) - Developing a multi-variate prediction model for the detection of
COVID-19 from Crowd-sourced Respiratory Voice Data [0.0]
The novelty of this work is in the development of a deep learning model for the identification of COVID-19 patients from voice recordings.
We used the Cambridge University dataset consisting of 893 audio samples, crowd-sourced from 4,352 participants who used a COVID-19 Sounds app.
Based on the voice data, we developed deep learning classification models to detect positive COVID-19 cases.
arXiv Detail & Related papers (2022-09-08T11:46:37Z) - COVYT: Introducing the Coronavirus YouTube and TikTok speech dataset
featuring the same speakers with and without infection [4.894353840908006]
We introduce the COVYT dataset -- a novel COVID-19 dataset collected from public sources containing more than 8 hours of speech from 65 speakers.
As compared to other existing COVID-19 sound datasets, the unique feature of the COVYT dataset is that it comprises both COVID-19 positive and negative samples from all 65 speakers.
arXiv Detail & Related papers (2022-06-20T16:26:51Z) - Quantification of pulmonary involvement in COVID-19 pneumonia by means
of a cascade of two U-nets: training and assessment on multiple datasets using
different annotation criteria [83.83783947027392]
This study aims at exploiting Artificial intelligence (AI) for the identification, segmentation and quantification of COVID-19 pulmonary lesions.
We developed an automated analysis pipeline, the LungQuant system, based on a cascade of two U-nets.
The accuracy of the LungQuant system in predicting the CT Severity Score (CT-SS) has also been evaluated.
arXiv Detail & Related papers (2021-05-06T10:21:28Z) - Bootstrapping Your Own Positive Sample: Contrastive Learning With
Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z) - Uncertainty-Aware COVID-19 Detection from Imbalanced Sound Data [15.833328435820622]
We propose an ensemble framework where multiple deep learning models for sound-based COVID-19 detection are developed.
It is shown that false predictions often yield higher uncertainty.
This study paves the way for a more robust sound-based COVID-19 automated screening system.
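A minimal sketch of the ensemble idea above, assuming each model outputs a COVID-19 probability and using the standard deviation across members as an uncertainty proxy; the paper's actual deep-learning models and uncertainty estimator are not specified in this summary:

```python
import statistics

def ensemble_predict(model_probs):
    """Combine per-model COVID-19 probabilities for one sample.

    The mean is the ensemble prediction; the population standard
    deviation across members serves as a simple disagreement-based
    uncertainty proxy.
    """
    mean = statistics.fmean(model_probs)
    uncertainty = statistics.pstdev(model_probs)
    return mean, uncertainty

def screen(sample_probs, threshold=0.5, max_uncertainty=0.15):
    """Return 'positive'/'negative', or 'refer' when the ensemble
    disagrees too much. Since false predictions tend to carry higher
    uncertainty, high-uncertainty cases are deferred rather than
    auto-labelled."""
    mean, unc = ensemble_predict(sample_probs)
    if unc > max_uncertainty:
        return "refer"
    return "positive" if mean >= threshold else "negative"
```

The `max_uncertainty` threshold is a hypothetical knob: raising it screens more people automatically, lowering it defers more borderline cases to conventional testing.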
arXiv Detail & Related papers (2021-04-05T16:54:03Z) - Virufy: A Multi-Branch Deep Learning Network for Automated Detection of
COVID-19 [1.9899603776429056]
Researchers have successfully presented models for detecting COVID-19 infection status using audio samples recorded in clinical settings.
We propose a multi-branch deep learning network that is trained and tested on crowdsourced data where most of the data has not been manually processed and cleaned.
arXiv Detail & Related papers (2021-03-02T15:31:09Z) - End-2-End COVID-19 Detection from Breath & Cough Audio [68.41471917650571]
We demonstrate the first attempt to diagnose COVID-19 using end-to-end deep learning from a crowd-sourced dataset of audio samples.
We introduce a novel modelling strategy using a custom deep neural network to diagnose COVID-19 from a joint breath and cough representation.
arXiv Detail & Related papers (2021-01-07T01:13:00Z) - Detecting COVID-19 from Breathing and Coughing Sounds using Deep Neural
Networks [68.8204255655161]
We adapt an ensemble of Convolutional Neural Networks to classify if a speaker is infected with COVID-19 or not.
Ultimately, it achieves an Unweighted Average Recall (UAR) of 74.9%, or an Area Under ROC Curve (AUC) of 80.7% by ensembling neural networks.
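Unweighted Average Recall, the headline metric above, is the mean of per-class recalls, so a heavily imbalanced test set cannot inflate it the way plain accuracy can. A small self-contained sketch:

```python
def unweighted_average_recall(y_true, y_pred):
    """UAR: average the recall of each class, weighting every class
    equally regardless of how many samples it has."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)
```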
arXiv Detail & Related papers (2020-12-29T01:14:17Z) - Chest x-ray automated triage: a semiologic approach designed for
clinical implementation, exploiting different types of labels through a
combination of four Deep Learning architectures [83.48996461770017]
This work presents a Deep Learning method based on the late fusion of different convolutional architectures.
We built four training datasets combining images from public chest x-ray datasets and our institutional archive.
We trained four different Deep Learning architectures and combined their outputs with a late fusion strategy, obtaining a unified tool.
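Late fusion, as used above, lets each architecture produce its own class scores and combines the outputs afterwards. A minimal sketch assuming per-model probability vectors and a weighted mean as the fusion rule; the paper's exact rule is not stated in this summary:

```python
def late_fusion(per_model_scores, weights=None):
    """Fuse class-score vectors from several models into one.

    `per_model_scores` is a list of per-model score lists, one score
    per class; with no weights given, models contribute equally.
    """
    n = len(per_model_scores)
    if weights is None:
        weights = [1.0 / n] * n
    n_classes = len(per_model_scores[0])
    return [sum(w * scores[c] for w, scores in zip(weights, per_model_scores))
            for c in range(n_classes)]
```

Because fusion happens on outputs rather than features, each of the four architectures can be trained on a different label set or dataset and still contribute to the unified tool.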
arXiv Detail & Related papers (2020-12-23T14:38:35Z) - CovidDeep: SARS-CoV-2/COVID-19 Test Based on Wearable Medical Sensors
and Efficient Neural Networks [51.589769497681175]
The novel coronavirus (SARS-CoV-2) has led to a pandemic.
The current testing regime based on Reverse Transcription-Polymerase Chain Reaction for SARS-CoV-2 has been unable to keep up with testing demands.
We propose a framework called CovidDeep that combines efficient DNNs with commercially available WMSs for pervasive testing of the virus.
arXiv Detail & Related papers (2020-07-20T21:47:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.