Exploring Self-Supervised Representation Ensembles for COVID-19 Cough
Classification
- URL: http://arxiv.org/abs/2105.07566v1
- Date: Mon, 17 May 2021 01:27:20 GMT
- Title: Exploring Self-Supervised Representation Ensembles for COVID-19 Cough
Classification
- Authors: Hao Xue and Flora D. Salim
- Abstract summary: We propose a novel self-supervised learning enabled framework for COVID-19 cough classification.
A contrastive pre-training phase is introduced to train a Transformer-based feature encoder with unlabelled data.
We show that the proposed contrastive pre-training, the random masking mechanism, and the ensemble architecture contribute to improving cough classification performance.
- Score: 5.469841541565308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using smartphone-collected respiratory sounds with deep learning
models to detect and classify COVID-19 has become popular recently. It removes
the need for in-person testing procedures, especially in rural regions where
related medical supplies, experienced workers, and equipment are limited.
However, existing sound-based diagnostic approaches are trained in a fully
supervised manner, which requires large-scale, well-labelled data.
data. It is critical to discover new methods to leverage unlabelled respiratory
data, which can be obtained more easily. In this paper, we propose a novel
self-supervised learning enabled framework for COVID-19 cough classification. A
contrastive pre-training phase is introduced to train a Transformer-based
feature encoder with unlabelled data. Specifically, we design a random masking
mechanism to learn robust representations of respiratory sounds. The
pre-trained feature encoder is then fine-tuned in the downstream phase to
perform cough classification. In addition, different ensembles with varied
random masking rates are also explored in the downstream phase. Through
extensive evaluations, we demonstrate that the proposed contrastive
pre-training, the random masking mechanism, and the ensemble architecture
contribute to improving cough classification performance.
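The pipeline the abstract describes (contrastive pre-training of a Transformer-based encoder over randomly masked spectrogram frames, before downstream fine-tuning) can be sketched as follows. This is an illustrative PyTorch sketch, not the authors' code: all module names, dimensions, the mean-pooled embedding, and the NT-Xent loss choice are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedTransformerEncoder(nn.Module):
    """Transformer encoder over spectrogram frames with random frame masking."""
    def __init__(self, n_mels=64, d_model=128, n_heads=4, n_layers=2, mask_rate=0.25):
        super().__init__()
        self.mask_rate = mask_rate
        self.proj = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                       # x: (batch, frames, n_mels)
        h = self.proj(x)
        if self.training:                       # random masking during pre-training
            mask = torch.rand(h.shape[:2], device=h.device) < self.mask_rate
            h = h.masked_fill(mask.unsqueeze(-1), 0.0)
        return self.encoder(h).mean(dim=1)      # utterance-level embedding

def nt_xent_loss(z1, z2, temperature=0.1):
    """A standard NT-Xent contrastive loss between two views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))   # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = MaskedTransformerEncoder()
x = torch.randn(8, 100, 64)                     # a batch of log-mel spectrograms
z1, z2 = encoder(x), encoder(x)                 # two views differ only via random masks
loss = nt_xent_loss(z1, z2)
loss.backward()
```

In the downstream phase, the pre-trained encoder would be topped with a small classification head and fine-tuned on labelled coughs; the ensemble variant would combine predictions from encoders pre-trained with different `mask_rate` values.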
Related papers
- Towards reliable respiratory disease diagnosis based on cough sounds and vision transformers [14.144599890583308]
We propose a novel approach to cough-based disease classification based on both self-supervised and supervised learning on a large-scale cough data set.
Experimental results demonstrate that our proposed approach consistently outperforms prior art on two benchmark datasets for COVID-19 diagnosis and a proprietary dataset for COPD/non-COPD classification, with an AUROC of 92.5%.
arXiv Detail & Related papers (2024-08-28T09:40:40Z)
- Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification [19.180927437627282]
We introduce a novel and effective Patch-Mix Contrastive Learning to distinguish the mixed representations in the latent space.
Our method achieves state-of-the-art performance on the ICBHI dataset, outperforming the prior leading score by an improvement of 4.08%.
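The patch-level mixing idea above can be illustrated with a minimal sketch. This is hypothetical code, not the paper's implementation: the `patch_len` value, time-axis-only mixing, and returning a label weight `lam` are simplifying assumptions.

```python
import torch

def patch_mix(x1, x2, patch_len=10, mix_ratio=0.3):
    """Replace a random subset of time patches of x1 with patches of x2.

    x1, x2: spectrograms of shape (frames, n_mels).
    Returns the mixed spectrogram and lam, the fraction kept from x1,
    which would weight the interpolated training label.
    """
    x = x1.clone()
    n_patches = x1.size(0) // patch_len
    n_mix = int(round(n_patches * mix_ratio))
    for i in torch.randperm(n_patches)[:n_mix].tolist():
        s = i * patch_len
        x[s:s + patch_len] = x2[s:s + patch_len]
    lam = 1.0 - n_mix / n_patches
    return x, lam

x1, x2 = torch.zeros(100, 64), torch.ones(100, 64)
mixed, lam = patch_mix(x1, x2)                  # lam == 0.7 with the defaults
```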
arXiv Detail & Related papers (2023-05-23T13:04:07Z)
- Forward-Forward Contrastive Learning [4.465144120325802]
We propose Forward Forward Contrastive Learning (FFCL) as a novel pretraining approach for medical image classification.
FFCL achieves superior performance (3.69% higher accuracy than an ImageNet-pretrained ResNet-18) over existing pretraining models in the pneumonia classification task.
arXiv Detail & Related papers (2023-05-04T15:29:06Z)
- SPCXR: Self-supervised Pretraining using Chest X-rays Towards a Domain Specific Foundation Model [4.397622801930704]
Chest X-rays (CXRs) are a widely used imaging modality for the diagnosis and prognosis of lung disease.
We propose a new self-supervised paradigm, where a general representation from CXRs is learned using a group-masked self-supervised framework.
The pre-trained model is then fine-tuned for domain-specific tasks such as COVID-19 and pneumonia detection, and general health screening.
arXiv Detail & Related papers (2022-11-23T13:38:16Z)
- Exploring Target Representations for Masked Autoencoders [78.57196600585462]
We show that a careful choice of the target representation is unnecessary for learning good representations.
We propose a multi-stage masked distillation pipeline and use a randomly initialized model as the teacher.
The proposed method, masked knowledge distillation with bootstrapped teachers (dBOT), outperforms previous self-supervised methods by nontrivial margins.
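The bootstrapped-teacher idea can be sketched as masked distillation against a frozen teacher. This is an illustrative sketch only: simple linear layers stand in for the actual Vision Transformers, and the smooth-L1 loss choice is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for the real networks: in a dBOT-style pipeline both would be
# Vision Transformers; linear layers keep the sketch runnable.
teacher = nn.Linear(64, 32)
student = nn.Linear(64, 32)
for p in teacher.parameters():
    p.requires_grad_(False)                     # teacher is frozen within a stage

x = torch.randn(16, 64)
mask = torch.rand(16, 1) < 0.5                  # random masking of the student input
with torch.no_grad():
    target = teacher(x)                         # teacher sees the unmasked input
loss = F.smooth_l1_loss(student(x.masked_fill(mask, 0.0)), target)
loss.backward()
# Multi-stage bootstrapping: once this stage converges, the trained student
# becomes the next stage's (again frozen) teacher, and training repeats.
```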
arXiv Detail & Related papers (2022-09-08T16:55:19Z)
- Self-supervised Transformer for Deepfake Detection [112.81127845409002]
Deepfake techniques in real-world scenarios require face forgery detectors with stronger generalization abilities.
Inspired by transfer learning, neural networks pre-trained on other large-scale face-related tasks may provide useful features for deepfake detection.
In this paper, we propose a self-supervised transformer based audio-visual contrastive learning method.
arXiv Detail & Related papers (2022-03-02T17:44:40Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z) - CNN-MoE based framework for classification of respiratory anomalies and
lung disease detection [33.45087488971683]
This paper presents and explores a robust deep learning framework for auscultation analysis.
It aims to classify anomalies in respiratory cycles and detect disease, from respiratory sound recordings.
arXiv Detail & Related papers (2020-04-04T21:45:06Z) - Rectified Meta-Learning from Noisy Labels for Robust Image-based Plant
Disease Diagnosis [64.82680813427054]
Plant diseases are one of the main threats to food security and crop production.
One popular approach is to cast this problem as a leaf image classification task, which can be addressed by powerful convolutional neural networks (CNNs).
We propose a novel framework that incorporates a rectified meta-learning module into the common CNN paradigm to train a noise-robust deep network without extra supervision information.
arXiv Detail & Related papers (2020-03-17T09:51:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.