Fruit-CoV: An Efficient Vision-based Framework for Speedy Detection and
Diagnosis of SARS-CoV-2 Infections Through Recorded Cough Sounds
- URL: http://arxiv.org/abs/2109.03219v1
- Date: Mon, 6 Sep 2021 07:56:02 GMT
- Title: Fruit-CoV: An Efficient Vision-based Framework for Speedy Detection and
Diagnosis of SARS-CoV-2 Infections Through Recorded Cough Sounds
- Authors: Long H. Nguyen, Nhat Truong Pham, Van Huong Do, Liu Tai Nguyen, Thanh
Tin Nguyen, Van Dung Do, Hai Nguyen, Ngoc Duy Nguyen
- Abstract summary: SARS-CoV-2 has spread across the world, driving a global pandemic since March 2020.
It is vital to have an at-home self-testing service for SARS-CoV-2.
In this study, we introduce Fruit-CoV, a two-stage vision framework, which is capable of detecting SARS-CoV-2 infections through recorded cough sounds.
- Score: 0.38321248253111767
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: SARS-CoV-2, the virus behind the disease colloquially known as
COVID-19, had its initial outbreak in December 2019. The deadly virus has since
spread across the world, driving a global pandemic declared in March 2020. In
addition, a recent variant of SARS-CoV-2 named Delta is extremely contagious
and responsible for more than four million deaths worldwide. Therefore, it is
vital to have an at-home self-testing service for SARS-CoV-2. In this study, we
introduce Fruit-CoV, a two-stage vision framework capable of detecting
SARS-CoV-2 infections through recorded cough sounds. Specifically, we convert
the sounds into Log-Mel spectrograms and, in the first stage, use the
EfficientNet-V2 network to extract their visual features. In the second stage,
we use 14 convolutional layers extracted from the large-scale Pretrained Audio
Neural Networks for audio pattern recognition (PANNs) and the
Wavegram-Log-Mel-CNN to aggregate feature representations of the Log-Mel
spectrograms. Finally, we use the combined features to train a binary
classifier. We use a dataset provided by the AICovidVN 115M Challenge, which
includes a total of 7371 recorded cough sounds collected across Vietnam, India,
and Switzerland. Experimental results show that our proposed model achieves an
AUC score of 92.8% and ranks first on the leaderboard of the AICovidVN
Challenge. More importantly, our proposed framework can be integrated into a
call center or a VoIP system to speed up the detection of SARS-CoV-2 infections
through online or recorded cough sounds.
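To make the two-stage design above concrete, here is a minimal sketch in PyTorch, based only on the abstract rather than on the authors' released code: it converts a cough recording into a Log-Mel spectrogram, extracts visual features with an EfficientNet-V2 backbone (via timm), runs a small convolutional branch that stands in for the 14 pretrained PANNs layers, and fuses both feature sets into a binary classifier. The sample rate, mel settings, the stand-in branch, and the fusion head are illustrative assumptions.

import torch
import torch.nn as nn
import torchaudio
import timm


def to_log_mel(waveform: torch.Tensor, sample_rate: int = 16000) -> torch.Tensor:
    """Convert a mono cough recording into a Log-Mel spectrogram (assumed settings)."""
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=1024, hop_length=320, n_mels=64
    )(waveform)                                    # shape: (1, n_mels, frames)
    return torchaudio.transforms.AmplitudeToDB()(mel)


class FruitCoVSketch(nn.Module):
    """Two-branch sketch: EfficientNet-V2 visual features on the Log-Mel image,
    plus a compact CNN branch standing in for the 14 PANNs convolutional layers
    (CNN14 / Wavegram-Log-Mel-CNN in the paper), fused for a binary decision."""

    def __init__(self):
        super().__init__()
        # Stage 1: treat the Log-Mel spectrogram as a one-channel image.
        # pretrained=False only keeps the sketch offline; the paper relies on
        # large-scale pretrained weights.
        self.effnet = timm.create_model(
            "tf_efficientnetv2_s", pretrained=False, in_chans=1, num_classes=0
        )
        # Stage 2 stand-in: a small CNN over the same spectrogram. In Fruit-CoV
        # this role is played by convolutional layers taken from pretrained PANNs.
        self.panns_like = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        fused_dim = self.effnet.num_features + 128
        self.classifier = nn.Linear(fused_dim, 1)  # binary: infected vs. healthy

    def forward(self, log_mel: torch.Tensor) -> torch.Tensor:
        visual = self.effnet(log_mel)              # (batch, EfficientNet feature dim)
        audio = self.panns_like(log_mel)           # (batch, 128)
        fused = torch.cat([visual, audio], dim=1)  # combined features
        return self.classifier(fused).squeeze(1)   # logits; sigmoid -> probability


if __name__ == "__main__":
    wav = torch.randn(1, 16000 * 3)                # 3 s of placeholder audio
    spec = to_log_mel(wav).unsqueeze(0)            # (batch=1, 1, n_mels, frames)
    print(torch.sigmoid(FruitCoVSketch()(spec)))   # assumed P(SARS-CoV-2 positive)

In the actual framework, the second branch would load pretrained PANNs weights rather than train a small CNN from scratch, and the fused classifier would be trained on the AICovidVN cough recordings and evaluated by AUC.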
Related papers
- A large-scale and PCR-referenced vocal audio dataset for COVID-19 [29.40538927182366]
The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022.
Audio recordings of volitional coughs, exhalations, and speech were collected in the 'Speak up to help beat coronavirus' digital survey.
This dataset has additional potential uses for bioacoustics research, with 11.30% of participants reporting asthma.
arXiv Detail & Related papers (2022-12-15T11:40:40Z) - COVYT: Introducing the Coronavirus YouTube and TikTok speech dataset
featuring the same speakers with and without infection [4.894353840908006]
We introduce the COVYT dataset -- a novel COVID-19 dataset collected from public sources containing more than 8 hours of speech from 65 speakers.
Compared to other existing COVID-19 sound datasets, the unique feature of the COVYT dataset is that it comprises both COVID-19 positive and negative samples from all 65 speakers.
arXiv Detail & Related papers (2022-06-20T16:26:51Z) - Evaluating the COVID-19 Identification ResNet (CIdeR) on the INTERSPEECH
COVID-19 from Audio Challenges [59.78485839636553]
CIdeR is an end-to-end deep learning neural network originally designed to classify whether an individual is COVID-positive or COVID-negative.
We demonstrate the potential of CIdeR at binary COVID-19 diagnosis from both the COVID-19 Cough and Speech Sub-Challenges of INTERSPEECH 2021, ComParE and DiCOVA.
arXiv Detail & Related papers (2021-07-30T10:59:08Z) - Project Achoo: A Practical Model and Application for COVID-19 Detection
from Recordings of Breath, Voice, and Cough [55.45063681652457]
We propose a machine learning method to quickly triage COVID-19 using recordings made on consumer devices.
The approach combines signal processing methods with fine-tuned deep learning networks and provides methods for signal denoising, cough detection and classification.
We have also developed and deployed a mobile application that uses a symptom checker together with voice, breath, and cough signals to detect COVID-19 infection.
arXiv Detail & Related papers (2021-07-12T08:07:56Z) - COVIDx-US -- An open-access benchmark dataset of ultrasound imaging data
for AI-driven COVID-19 analytics [116.6248556979572]
COVIDx-US is an open-access benchmark dataset of COVID-19 related ultrasound imaging data.
It consists of 93 lung ultrasound videos and 10,774 processed images covering patients with SARS-CoV-2 pneumonia, patients with non-SARS-CoV-2 pneumonia, and healthy control cases.
arXiv Detail & Related papers (2021-03-18T03:31:33Z) - Detecting COVID-19 from Breathing and Coughing Sounds using Deep Neural
Networks [68.8204255655161]
We adapt an ensemble of Convolutional Neural Networks to classify whether a speaker is infected with COVID-19.
Ultimately, the ensemble achieves an Unweighted Average Recall (UAR) of 74.9%, or an Area Under the ROC Curve (AUC) of 80.7%.
arXiv Detail & Related papers (2020-12-29T01:14:17Z) - Recent Advances in Computer Audition for Diagnosing COVID-19: An
Overview [5.36519190935659]
Computer audition (CA) has been demonstrated to be effective in healthcare domains for speech-affecting disorders.
However, CA has been underestimated among the data-driven technologies considered for fighting the COVID-19 pandemic.
arXiv Detail & Related papers (2020-12-08T21:39:01Z) - CovidDeep: SARS-CoV-2/COVID-19 Test Based on Wearable Medical Sensors
and Efficient Neural Networks [51.589769497681175]
The novel coronavirus (SARS-CoV-2) has led to a pandemic.
The current testing regime based on Reverse Transcription-Polymerase Chain Reaction for SARS-CoV-2 has been unable to keep up with testing demands.
We propose a framework called CovidDeep that combines efficient DNNs with commercially available wearable medical sensors (WMSs) for pervasive testing of the virus.
arXiv Detail & Related papers (2020-07-20T21:47:28Z) - COVID-Net S: Towards computer-aided severity assessment via training and
validation of deep neural networks for geographic extent and opacity extent
scoring of chest X-rays for SARS-CoV-2 lung disease severity [58.23203766439791]
Chest x-rays (CXRs) are often used to assess SARS-CoV-2 severity.
In this study, we assess the feasibility of computer-aided scoring of CXRs of SARS-CoV-2 lung disease severity using a deep learning system.
arXiv Detail & Related papers (2020-05-26T16:33:52Z) - COVID-19 and Computer Audition: An Overview on What Speech & Sound
Analysis Could Contribute in the SARS-CoV-2 Corona Crisis [10.436988903556108]
The world population has suffered more than 10,000 registered deaths from the COVID-19 epidemic since the outbreak, more than three months ago, of the coronavirus now officially known as SARS-CoV-2.
We provide an overview on the potential for computer audition (CA), i.e., the usage of speech and sound analysis by artificial intelligence to help in this scenario.
We come to the conclusion that CA appears ready for implementation of (pre-)diagnosis and monitoring tools, and more generally provides rich and significant, yet so far untapped potential in the fight against COVID-19 spread.
arXiv Detail & Related papers (2020-03-24T21:17:44Z)