Analyzing the impact of SARS-CoV-2 variants on respiratory sound signals
- URL: http://arxiv.org/abs/2206.12309v1
- Date: Fri, 24 Jun 2022 14:10:31 GMT
- Title: Analyzing the impact of SARS-CoV-2 variants on respiratory sound signals
- Authors: Debarpan Bhattacharya, Debottam Dutta, Neeraj Kumar Sharma, Srikanth
Raj Chetupalli, Pravin Mote, Sriram Ganapathy, Chandrakiran C, Sahiti Nori,
Suhail K K, Sadhana Gonuguntla, Murali Alagesan
- Abstract summary: We explore whether acoustic signals, collected from COVID-19 subjects, show computationally distinguishable acoustic patterns.
Our findings suggest that multiple sound categories, such as cough, breathing, and speech, indicate significant acoustic feature differences when comparing COVID-19 subjects with omicron and delta variants.
- Score: 23.789227109218118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The COVID-19 outbreak resulted in multiple waves of infections that have been
associated with different SARS-CoV-2 variants. Studies have reported
differential impact of the variants on respiratory health of patients. We
explore whether acoustic signals, collected from COVID-19 subjects, show
computationally distinguishable acoustic patterns suggesting a possibility to
predict the underlying virus variant. We analyze the Coswara dataset which is
collected from three subject pools, namely, i) healthy, ii) COVID-19 subjects
recorded during the delta variant dominant period, and iii) data from COVID-19
subjects recorded during the omicron surge. Our findings suggest that multiple
sound categories, such as cough, breathing, and speech, indicate significant
acoustic feature differences when comparing COVID-19 subjects with omicron and
delta variants. The classification areas-under-the-curve are significantly
above chance for differentiating subjects infected by omicron from those
infected by delta. Using a score fusion from multiple sound categories, we
obtained an area-under-the-curve of 89% and 52.4% sensitivity at 95%
specificity. Additionally, a hierarchical three-class approach was used to
classify the acoustic data into healthy and COVID-19 positive, and to further
separate COVID-19 subjects into delta and omicron variants, providing a high
level of 3-class classification accuracy. These results suggest new ways for
designing sound-based COVID-19 diagnosis approaches.
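The reported operating points can be computed mechanically from per-subject scores. Below is a minimal stdlib-only Python sketch (not the authors' code; all score values are synthetic placeholders) of simple score fusion across sound categories, AUC via the Mann-Whitney statistic, and sensitivity at a fixed specificity:

```python
# Sketch of score fusion, AUC, and sensitivity-at-specificity.
# All names and score values below are illustrative assumptions,
# not taken from the paper.
from itertools import product

def fuse(score_lists):
    """Average scores across sound categories (simple score fusion)."""
    return [sum(s) / len(s) for s in zip(*score_lists)]

def auc(pos, neg):
    """AUC as the Mann-Whitney statistic: P(positive score > negative score)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

def sensitivity_at_specificity(pos, neg, target_spec=0.95):
    """Highest sensitivity over thresholds whose specificity meets the target."""
    best = 0.0
    for t in sorted(set(pos) | set(neg)):
        spec = sum(n < t for n in neg) / len(neg)
        sens = sum(p >= t for p in pos) / len(pos)
        if spec >= target_spec:
            best = max(best, sens)
    return best

# Synthetic per-category scores for 4 positive and 4 negative subjects.
cough_pos, cough_neg = [0.9, 0.8, 0.6, 0.7], [0.4, 0.3, 0.5, 0.2]
speech_pos, speech_neg = [0.7, 0.9, 0.5, 0.8], [0.3, 0.4, 0.6, 0.1]

pos = fuse([cough_pos, speech_pos])
neg = fuse([cough_neg, speech_neg])
print(auc(pos, neg))                        # prints 0.96875
print(sensitivity_at_specificity(pos, neg)) # prints 0.75
```

On real data one would sweep thresholds over held-out fused scores in exactly this way; fusing categories helps because errors made on cough, breathing, and speech are not perfectly correlated.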
Related papers
- COVYT: Introducing the Coronavirus YouTube and TikTok speech dataset featuring the same speakers with and without infection [4.894353840908006]
We introduce the COVYT dataset -- a novel COVID-19 dataset collected from public sources containing more than 8 hours of speech from 65 speakers.
As compared to other existing COVID-19 sound datasets, the unique feature of the COVYT dataset is that it comprises both COVID-19 positive and negative samples from all 65 speakers.
arXiv Detail & Related papers (2022-06-20T16:26:51Z)
- A k-mer Based Approach for SARS-CoV-2 Variant Identification [55.78588835407174]
We show that preserving the order of the amino acids helps the underlying classifiers to achieve better performance.
We also show the importance of the different amino acids which play a key role in identifying variants, and how they coincide with those reported by the USA's Centers for Disease Control and Prevention (CDC).
arXiv Detail & Related papers (2021-08-07T15:08:15Z)
- Evaluating the COVID-19 Identification ResNet (CIdeR) on the INTERSPEECH COVID-19 from Audio Challenges [59.78485839636553]
CIdeR is an end-to-end deep learning neural network originally designed to classify whether an individual is COVID-positive or COVID-negative.
We demonstrate the potential of CIdeR at binary COVID-19 diagnosis from both the COVID-19 Cough and Speech Sub-Challenges of INTERSPEECH 2021, ComParE and DiCOVA.
arXiv Detail & Related papers (2021-07-30T10:59:08Z)
- COVIDx-US -- An open-access benchmark dataset of ultrasound imaging data for AI-driven COVID-19 analytics [116.6248556979572]
COVIDx-US is an open-access benchmark dataset of COVID-19 related ultrasound imaging data.
It consists of 93 lung ultrasound videos and 10,774 processed images of patients infected with SARS-CoV-2 pneumonia, non-SARS-CoV-2 pneumonia, as well as healthy control cases.
arXiv Detail & Related papers (2021-03-18T03:31:33Z)
- An Explainable AI System for Automated COVID-19 Assessment and Lesion Categorization from CT-scans [8.694504007704994]
COVID-19, caused by the SARS-CoV-2 pathogen, has produced a catastrophic pandemic across the world.
We propose an AI-powered pipeline, based on the deep-learning paradigm, for automated COVID-19 detection and lesion categorization from CT scans.
arXiv Detail & Related papers (2021-01-28T11:47:35Z)
- Detecting COVID-19 from Breathing and Coughing Sounds using Deep Neural Networks [68.8204255655161]
We adapt an ensemble of Convolutional Neural Networks to classify whether a speaker is infected with COVID-19.
Ultimately, it achieves an Unweighted Average Recall (UAR) of 74.9%, or an Area Under ROC Curve (AUC) of 80.7% by ensembling neural networks.
arXiv Detail & Related papers (2020-12-29T01:14:17Z)
- The voice of COVID-19: Acoustic correlates of infection [9.7390888107204]
COVID-19 is a global health crisis that has been affecting many aspects of our daily lives throughout the past year.
We compare acoustic features extracted from recordings of the vowels /i:/, /e:/, /o:/, /u:/, and /a:/ produced by 11 symptomatic COVID-19 positive and 11 COVID-19 negative German-speaking participants.
arXiv Detail & Related papers (2020-12-17T10:12:41Z)
- Classification supporting COVID-19 diagnostics based on patient survey data [82.41449972618423]
Logistic regression and XGBoost classifiers that allow for effective screening of patients for COVID-19 were generated.
The obtained classification models provided the basis for the DECODE service (decode.polsl.pl), which can serve as support in screening patients with COVID-19 disease.
The data set consists of more than 3,000 examples and is based on questionnaires collected at a hospital in Poland.
arXiv Detail & Related papers (2020-11-24T17:44:01Z)
- Studying the Similarity of COVID-19 Sounds based on Correlation Analysis of MFCC [1.9659095632676098]
We illustrate the importance of speech signal processing in the extraction of the Mel-Frequency Cepstral Coefficients (MFCCs) of the COVID-19 and non-COVID-19 samples.
Our results show high similarity in MFCCs between different COVID-19 cough and breathing sounds, while the MFCCs of voice differ more robustly between COVID-19 and non-COVID-19 samples.
arXiv Detail & Related papers (2020-10-17T11:38:05Z)
- Pay Attention to the cough: Early Diagnosis of COVID-19 using Interpretable Symptoms Embeddings with Cough Sound Signal Processing [0.0]
COVID-19 (the coronavirus disease caused by SARS-CoV-2) has led to a devastating catastrophe for humanity.
Current diagnosis of COVID-19 is done by Reverse-Transcription Polymerase Chain Reaction (RT-PCR) testing.
An interpretable AI framework for COVID-19 diagnosis is devised and developed based on cough sound features and symptom metadata.
arXiv Detail & Related papers (2020-10-06T01:22:50Z)
- CovidDeep: SARS-CoV-2/COVID-19 Test Based on Wearable Medical Sensors and Efficient Neural Networks [51.589769497681175]
The novel coronavirus (SARS-CoV-2) has led to a pandemic.
The current testing regime based on Reverse Transcription-Polymerase Chain Reaction for SARS-CoV-2 has been unable to keep up with testing demands.
We propose a framework called CovidDeep that combines efficient DNNs with commercially available WMSs for pervasive testing of the virus.
arXiv Detail & Related papers (2020-07-20T21:47:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.