Which Augmentation Should I Use? An Empirical Investigation of Augmentations for Self-Supervised Phonocardiogram Representation Learning
- URL: http://arxiv.org/abs/2312.00502v3
- Date: Fri, 5 Apr 2024 11:19:12 GMT
- Title: Which Augmentation Should I Use? An Empirical Investigation of Augmentations for Self-Supervised Phonocardiogram Representation Learning
- Authors: Aristotelis Ballas, Vasileios Papapanagiotou, Christos Diou
- Abstract summary: Contrastive Self-Supervised Learning (SSL) offers a potential solution to labeled data scarcity.
We aim to uncover the optimal augmentations for applying contrastive learning to 1D phonocardiogram (PCG) classification.
We demonstrate that, depending on its training distribution, the effectiveness of a fully-supervised model can degrade by up to 32%, while SSL models lose at most 10% or even improve in some cases.
- Score: 5.438725298163702
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the recent increase in research activity, deep-learning models have not yet been widely accepted in several real-world settings, such as medicine. The shortage of high-quality annotated data often hinders the development of robust and generalizable models, which do not suffer from degraded effectiveness when presented with out-of-distribution (OOD) datasets. Contrastive Self-Supervised Learning (SSL) offers a potential solution to labeled data scarcity, as it takes advantage of unlabeled data to increase model effectiveness and robustness. However, selecting appropriate transformations during the learning process is not a trivial task, and a poor choice can even break down the network's ability to extract meaningful information. In this research, we aim to uncover the optimal augmentations for applying contrastive learning to 1D phonocardiogram (PCG) classification. We perform an extensive comparative evaluation of a wide range of audio-based augmentations, evaluate models on multiple datasets across downstream tasks, and report on the impact of each augmentation. We demonstrate that, depending on its training distribution, the effectiveness of a fully-supervised model can degrade by up to 32%, while SSL models lose at most 10% or even improve in some cases. We argue and experimentally demonstrate that contrastive SSL pretraining can help provide robust classifiers that generalize to unseen, OOD data, without relying on time- and labor-intensive annotation by medical experts. Furthermore, the proposed evaluation protocol sheds light on the most promising and appropriate augmentations for robust PCG signal processing, by calculating their effect size on model training. Finally, we provide researchers and practitioners with a roadmap towards producing robust models for PCG classification, as well as an open-source codebase for developing novel approaches.
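To make the evaluated setup concrete, below is a minimal sketch of SimCLR-style contrastive pretraining on 1D PCG signals. The augmentation chain (additive Gaussian noise, random gain, time masking), the hyperparameters, and the `encoder` interface are illustrative assumptions, not the paper's released implementation; the authors' open-source codebase defines the actual transformations and training loop.

```python
# Minimal sketch of SimCLR-style contrastive pretraining for 1D PCG signals.
# Augmentations, hyperparameters, and the encoder interface are illustrative
# assumptions, not the paper's exact setup.
import torch
import torch.nn.functional as F


def augment(x: torch.Tensor) -> torch.Tensor:
    """Apply a simple chain of 1D audio augmentations to a batch of shape (B, T)."""
    x = x + 0.01 * torch.randn_like(x)  # additive Gaussian noise
    gain = torch.empty(x.size(0), 1, device=x.device).uniform_(0.8, 1.2)
    x = x * gain  # random gain
    t = x.size(1)
    width = max(t // 10, 1)  # time masking: zero one contiguous chunk per sample
    starts = torch.randint(0, t - width + 1, (x.size(0),))
    for i, s in enumerate(starts):
        x[i, s : s + width] = 0.0
    return x


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """NT-Xent (normalized temperature-scaled cross-entropy) over (B, D) projections."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D)
    sim = z @ z.t() / tau  # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))  # exclude self-similarity
    n = z.size(0)
    targets = torch.arange(n, device=z.device).roll(n // 2)  # index of each positive
    return F.cross_entropy(sim, targets)


def pretrain_step(encoder, batch, optimizer):
    """One contrastive step: two augmented views of the same unlabeled PCG batch."""
    v1, v2 = augment(batch.clone()), augment(batch.clone())
    loss = nt_xent(encoder(v1), encoder(v2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The abstract also mentions ranking augmentations by their effect size on model training. A common choice for such a measure is Cohen's d; the sketch below assumes two sets of downstream scores (e.g., runs with and without a given augmentation) and is not taken from the paper's exact protocol.

```python
# Illustrative effect-size computation (Cohen's d) for comparing downstream
# scores with vs. without a given augmentation; assumed, not the paper's protocol.
import statistics


def cohens_d(a: list[float], b: list[float]) -> float:
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5
```

Under this reading, augmentations with larger absolute effect sizes are the ones that most strongly influence model training, for better or worse.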
Related papers
- Reducing self-supervised learning complexity improves weakly-supervised classification performance in computational pathology [0.0]
Self-supervised learning (SSL) methods allow for large-scale analyses on non-annotated data.
We investigated the complexity of SSL in relation to classification performance with the utilization of consumer-grade hardware.
Our experiments demonstrate that we can improve downstream classification performance whilst reducing SSL training duration by 90%.
arXiv Detail & Related papers (2024-03-07T14:56:06Z)
- Enhancing and Adapting in the Clinic: Source-free Unsupervised Domain Adaptation for Medical Image Enhancement [34.11633495477596]
We propose an algorithm for source-free unsupervised domain adaptive medical image enhancement (SAME).
A structure-preserving enhancement network is first constructed to learn a robust source model from synthesized training data.
A pseudo-label picker is developed to boost the knowledge distillation of enhancement tasks.
arXiv Detail & Related papers (2023-12-03T10:01:59Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- Are Sample-Efficient NLP Models More Robust? [90.54786862811183]
We investigate the relationship between sample efficiency (amount of data needed to reach a given ID accuracy) and robustness (how models fare on OOD evaluation).
We find that higher sample efficiency is only correlated with better average OOD robustness on some modeling interventions and tasks, but not others.
These results suggest that general-purpose methods for improving sample efficiency are unlikely to yield universal OOD robustness improvements, since such improvements are highly dataset- and task-dependent.
arXiv Detail & Related papers (2022-10-12T17:54:59Z)
- Core-set Selection Using Metrics-based Explanations (CSUME) for multiclass ECG [2.0520503083305073]
We show how selecting good-quality data improves deep learning model performance.
Our experimental results show a 9.67% and 8.69% precision and recall improvement with a significant training data volume reduction of 50%.
arXiv Detail & Related papers (2022-05-28T19:36:28Z)
- Improving the efficacy of Deep Learning models for Heart Beat detection on heterogeneous datasets [0.0]
We investigate the issues related to applying a Deep Learning model on heterogeneous datasets.
We show that the performance of a model trained on data from healthy subjects decreases when applied to patients with cardiac conditions.
We then evaluate the use of Transfer Learning to adapt the model to the different datasets.
arXiv Detail & Related papers (2021-10-26T14:26:55Z)
- Transductive image segmentation: Self-training and effect of uncertainty estimation [16.609998086075127]
Semi-supervised learning (SSL) uses unlabeled data during training to learn better models.
This study focuses on the quality of predictions made on the unlabeled data of interest when they are included for optimization during training, rather than on improving generalization.
Our experiments on a large MRI database for multi-class segmentation of traumatic brain lesions show promising results when comparing transductive with inductive predictions.
arXiv Detail & Related papers (2021-07-19T15:26:07Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
- Generative Data Augmentation for Commonsense Reasoning [75.26876609249197]
G-DAUGC is a novel generative data augmentation method that aims to achieve more accurate and robust learning in the low-resource setting.
G-DAUGC consistently outperforms existing data augmentation methods based on back-translation.
Our analysis demonstrates that G-DAUGC produces a diverse set of fluent training examples, and that its selection and training approaches are important for performance.
arXiv Detail & Related papers (2020-04-24T06:12:10Z)