EDAC: Efficient Deployment of Audio Classification Models For COVID-19
Detection
- URL: http://arxiv.org/abs/2309.05357v1
- Date: Mon, 11 Sep 2023 10:07:51 GMT
- Title: EDAC: Efficient Deployment of Audio Classification Models For COVID-19
Detection
- Authors: Andrej Jovanović, Mario Mihaly, Lennon Donaldson
- Abstract summary: The global spread of COVID-19 had severe consequences for public health and the world economy.
Various researchers made use of machine learning methods in an attempt to detect COVID-19.
The solutions leverage various input features, such as CT scans or cough audio signals, with state-of-the-art results arising from deep neural network architectures.
Larger models, however, require more compute when deployed to the edge; to address this, we first recreated two models that use cough audio recordings to detect COVID-19.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The global spread of COVID-19 had severe consequences for public health and
the world economy. The quick onset of the pandemic highlighted the potential
benefits of cheap and deployable pre-screening methods to monitor the
prevalence of the disease in a population. Various researchers made use of
machine learning methods in an attempt to detect COVID-19. The solutions
leverage various input features, such as CT scans or cough audio signals, with
state-of-the-art results arising from deep neural network architectures.
However, larger models require more compute; a pertinent consideration when
deploying to the edge. To address this, we first recreated two models that use
cough audio recordings to detect COVID-19. By applying network pruning and
quantisation, we were able to compress these two architectures without
reducing the models' predictive performance. Specifically, we achieved a
105.76x and a 19.34x reduction in the compressed model file size, with
corresponding 1.37x and 1.71x reductions in the inference times of the two
models.
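To make the compression pipeline concrete, the following is a minimal, hypothetical sketch of magnitude pruning followed by post-training dynamic quantisation in PyTorch. The paper does not publish this code; the toy classifier, the 90% pruning amount, and the use of gzip to measure compressed checkpoint size are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of prune-then-quantise compression for a cough-audio
# classifier. Model shape, pruning amount and the gzip measurement are
# assumptions for illustration, not the EDAC authors' actual setup.
import gzip
import os
import shutil

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class CoughClassifier(nn.Module):
    """Toy stand-in for a cough-audio COVID-19 classifier."""

    def __init__(self, n_features: int = 1024, n_hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 2),  # COVID-19 positive / negative
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def gzipped_size_mb(state_dict) -> float:
    """Save a checkpoint, gzip it, and report the compressed size in MB."""
    torch.save(state_dict, "model.pt")
    with open("model.pt", "rb") as f_in, gzip.open("model.pt.gz", "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    size = os.path.getsize("model.pt.gz") / 1e6
    os.remove("model.pt")
    os.remove("model.pt.gz")
    return size


model = CoughClassifier()
print(f"baseline:           {gzipped_size_mb(model.state_dict()):.2f} MB")

# 1) Unstructured L1 (magnitude) pruning: zero out the 90% smallest weights
#    of every Linear layer, then bake the pruning mask into the tensors.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")

# 2) Post-training dynamic quantisation: store Linear weights as int8 and
#    quantise activations on the fly at inference time.
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(f"pruned + quantised: {gzipped_size_mb(quantised.state_dict()):.2f} MB")
```

Pruning alone leaves the dense tensors the same size on disk; the file-size benefit appears once the zeroed weights are compressed, which is why this sketch reports gzip-compressed checkpoint sizes.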
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q chromosome arms is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specially designed MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - Developing a multi-variate prediction model for the detection of
COVID-19 from Crowd-sourced Respiratory Voice Data [0.0]
The novelty of this work is in the development of a deep learning model for the identification of COVID-19 patients from voice recordings.
We used the Cambridge University dataset consisting of 893 audio samples, crowd-sourced from 4352 participants who used the COVID-19 Sounds app.
Based on the voice data, we developed deep learning classification models to detect positive COVID-19 cases.
arXiv Detail & Related papers (2022-09-08T11:46:37Z) - D2A U-Net: Automatic Segmentation of COVID-19 Lesions from CT Slices
with Dilated Convolution and Dual Attention Mechanism [9.84838467721235]
We propose a dilated dual attention U-Net (D2A U-Net) for COVID-19 lesion segmentation in CT slices based on dilated convolution and a novel dual attention mechanism.
Our experimental results show that introducing dilated convolution and the dual attention mechanism significantly reduces the number of false positives.
arXiv Detail & Related papers (2021-02-10T01:21:59Z) - Automated Model Design and Benchmarking of 3D Deep Learning Models for
COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for 3D DL models for chest CT scan classification.
We also apply the Class Activation Mapping (CAM) technique to our models to provide interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z) - End-2-End COVID-19 Detection from Breath & Cough Audio [68.41471917650571]
We demonstrate the first attempt to diagnose COVID-19 using end-to-end deep learning from a crowd-sourced dataset of audio samples.
We introduce a novel modelling strategy using a custom deep neural network to diagnose COVID-19 from a joint breath and cough representation.
arXiv Detail & Related papers (2021-01-07T01:13:00Z) - Detecting COVID-19 from Breathing and Coughing Sounds using Deep Neural
Networks [68.8204255655161]
We adapt an ensemble of Convolutional Neural Networks to classify whether a speaker is infected with COVID-19 or not.
By ensembling neural networks, it ultimately achieves an Unweighted Average Recall (UAR) of 74.9%, or an Area Under the ROC Curve (AUC) of 80.7% (a minimal sketch of the UAR metric is given after this list).
arXiv Detail & Related papers (2020-12-29T01:14:17Z) - COVID-19 Classification Using Staked Ensembles: A Comprehensive Analysis [0.0]
The rapid spread of COVID-19 and its high mortality rate led the WHO to declare it a pandemic.
It is crucial to perform efficient and fast diagnosis.
The reverse transcription polymerase chain reaction (RT-PCR) test is conducted to detect the presence of SARS-CoV-2.
Instead, chest CT (or chest X-ray) can be used for a fast and accurate diagnosis.
arXiv Detail & Related papers (2020-10-07T07:43:57Z) - Predicting COVID-19 Pneumonia Severity on Chest X-ray with Deep Learning [57.00601760750389]
We present a severity score prediction model for COVID-19 pneumonia for frontal chest X-ray images.
Such a tool can gauge the severity of COVID-19 lung infections, which can be used for escalation or de-escalation of care.
arXiv Detail & Related papers (2020-05-24T23:13:16Z) - Intra-model Variability in COVID-19 Classification Using Chest X-ray
Images [0.0]
We quantify baseline performance metrics and variability for COVID-19 detection in chest X-rays for 12 common deep learning architectures.
The best-performing models achieve a false negative rate of 3 out of 20 for detecting COVID-19 in a hold-out set.
arXiv Detail & Related papers (2020-04-30T21:20:32Z) - REST: Robust and Efficient Neural Networks for Sleep Monitoring in the
Wild [62.36144064259933]
We propose REST, a new method that simultaneously tackles robustness and efficiency via adversarial training and control of the Lipschitz constant of the neural network.
We demonstrate that REST produces highly-robust and efficient models that substantially outperform the original full-sized models in the presence of noise.
By deploying these models to an Android application on a smartphone, we quantitatively observe that REST allows models to achieve up to 17x energy reduction and 9x faster inference.
arXiv Detail & Related papers (2020-01-29T17:23:16Z)
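For the breathing-and-cough ensemble above, the reported Unweighted Average Recall is simply recall averaged over the classes with equal weight (macro-averaged recall). The sketch below computes it with scikit-learn; the labels are invented for illustration and are not data from that paper.

```python
# Minimal sketch of the Unweighted Average Recall (UAR) metric: recall is
# computed per class and averaged with equal class weights (macro recall).
# The labels below are invented for illustration only.
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = COVID-19 positive, 0 = negative
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical classifier predictions

uar = recall_score(y_true, y_pred, average="macro")
print(f"UAR: {uar:.3f}")  # recall is 0.75 for both classes here, so UAR = 0.750
```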