Data Augmentation for Electrocardiograms
- URL: http://arxiv.org/abs/2204.04360v1
- Date: Sat, 9 Apr 2022 02:19:55 GMT
- Title: Data Augmentation for Electrocardiograms
- Authors: Aniruddh Raghu, Divya Shanmugam, Eugene Pomerantsev, John Guttag,
Collin M. Stultz
- Abstract summary: We study whether data augmentation methods can be used to improve performance on data-scarce ECG prediction problems.
We introduce a new method, TaskAug, which defines a flexible augmentation policy that is optimized on a per-task basis.
In experiments, we find that TaskAug is competitive with or improves on prior work, and the learned policies shed light on what transformations are most effective for different tasks.
- Score: 2.8498944632323755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network models have demonstrated impressive performance in predicting
pathologies and outcomes from the 12-lead electrocardiogram (ECG). However,
these models often need to be trained with large, labelled datasets, which are
not available for many predictive tasks of interest. In this work, we perform
an empirical study examining whether training time data augmentation methods
can be used to improve performance on such data-scarce ECG prediction problems.
We investigate how data augmentation strategies impact model performance when
detecting cardiac abnormalities from the ECG. Motivated by our finding that the
effectiveness of existing augmentation strategies is highly task-dependent, we
introduce a new method, TaskAug, which defines a flexible augmentation policy
that is optimized on a per-task basis. We outline an efficient learning
algorithm to do so that leverages recent work in nested optimization and
implicit differentiation. In experiments, considering three datasets and eight
predictive tasks, we find that TaskAug is competitive with or improves on prior
work, and the learned policies shed light on what transformations are most
effective for different tasks. We distill key insights from our experimental
evaluation, generating a set of best practices for applying data augmentation
to ECG prediction problems.
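To make the idea concrete, below is a minimal sketch of a per-task, learnable augmentation policy in the spirit of TaskAug. It is not the authors' implementation: the operation set (scaling, noise, baseline wander), the parameter names, and the one-step unrolled update are illustrative assumptions; the paper instead optimizes the policy with nested optimization and implicit differentiation.

```python
# Minimal sketch (not the authors' code) of a learnable, per-task ECG augmentation
# policy. Operation set, names, and the one-step unrolled update are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class LearnableECGAugment(nn.Module):
    """Random amplitude scaling, Gaussian noise, and baseline wander applied to a
    batch of ECGs shaped (batch, leads, samples), each with a learnable strength."""

    def __init__(self):
        super().__init__()
        self.log_strength = nn.Parameter(torch.full((3,), -2.0))  # small initial strengths

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = F.softplus(self.log_strength)
        b, _, t = x.shape
        x = x * (1.0 + s[0] * torch.randn(b, 1, 1, device=x.device))   # amplitude scaling
        x = x + s[1] * torch.randn_like(x)                             # additive noise
        phase = 2 * torch.pi * torch.rand(b, 1, 1, device=x.device)
        wander = torch.sin(torch.linspace(0, 2 * torch.pi, t, device=x.device) + phase)
        return x + s[2] * wander                                       # baseline wander


def bilevel_step(model, aug, aug_opt, loss_fn, train_batch, val_batch, inner_lr=1e-2):
    """One-step unrolled stand-in for the nested objective: take a differentiable
    SGD step on an augmented training batch, then update the augmentation strengths
    so that the updated model does well on clean validation data."""
    (xb, yb), (xv, yv) = train_batch, val_batch
    params = dict(model.named_parameters())

    # Inner step, kept in the autograd graph so gradients can reach the policy.
    train_loss = loss_fn(functional_call(model, params, (aug(xb),)), yb)
    grads = torch.autograd.grad(train_loss, list(params.values()), create_graph=True)
    updated = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

    # Outer step: validation loss of the updated model, differentiated w.r.t. the policy.
    val_loss = loss_fn(functional_call(model, updated, (xv,)), yv)
    aug_opt.zero_grad()
    val_loss.backward()
    aug_opt.step()
    model.zero_grad(set_to_none=True)  # discard incidental gradients on model weights
    return train_loss.item(), val_loss.item()
```

In practice such a step would be interleaved with ordinary model updates, and the learned strengths can be inspected per task, in line with the paper's observation that the most effective transformations differ across tasks.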
Related papers
- Computation-Efficient Semi-Supervised Learning for ECG-based Cardiovascular Diseases Detection [16.34314710823127]
We propose a computation-efficient semi-supervised learning paradigm (FastECG) for robust cardiovascular disease (CVD) detection from the ECG.
It enables a robust adaptation of pre-trained models on downstream datasets with limited supervision and high computational efficiency.
arXiv Detail & Related papers (2024-06-20T14:45:13Z)
- Boosting Few-Shot Learning with Disentangled Self-Supervised Learning and Meta-Learning for Medical Image Classification [8.975676404678374]
We present a strategy for improving the performance and generalization capabilities of models trained in low-data regimes.
The proposed method starts with a pre-training phase, where features learned in a self-supervised learning setting are disentangled to improve the robustness of the representations for downstream tasks.
We then introduce a meta-fine-tuning step that leverages related classes between the meta-training and meta-testing phases, but at varying levels of granularity.
arXiv Detail & Related papers (2024-03-26T09:36:20Z)
- Which Augmentation Should I Use? An Empirical Investigation of Augmentations for Self-Supervised Phonocardiogram Representation Learning [5.438725298163702]
Contrastive Self-Supervised Learning (SSL) offers a potential solution to labeled data scarcity.
We investigate which augmentations are optimal for contrastive learning in 1D phonocardiogram (PCG) classification.
We demonstrate that, depending on its training distribution, the performance of a fully-supervised model can degrade by up to 32%, while SSL models lose at most 10% or even improve in some cases.
arXiv Detail & Related papers (2023-12-01T11:06:00Z)
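For context, the contrastive setup that augmentation studies like the one above plug into can be sketched roughly as follows. This is a generic SimCLR-style example with assumed function names, augmentations, and temperature, not the cited paper's pipeline.

```python
# Generic SimCLR-style contrastive step for 1D signals (PCG or ECG), illustrating
# where the choice of augmentation enters. All components here are placeholders.
import torch
import torch.nn.functional as F


def augment(x: torch.Tensor) -> torch.Tensor:
    """Cheap stochastic view: random amplitude scaling plus Gaussian noise."""
    scale = 1.0 + 0.1 * torch.randn(x.size(0), 1, 1, device=x.device)
    return x * scale + 0.01 * torch.randn_like(x)


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """NT-Xent loss over two batches of embeddings (the two views of each recording)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, d)
    sim = z @ z.t() / temperature                           # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))              # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def contrastive_step(encoder, optimizer, x):
    """One step: two augmented views of the same recordings are pulled together."""
    z1, z2 = encoder(augment(x)), encoder(augment(x))
    loss = nt_xent(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```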
- Unsupervised Pre-Training Using Masked Autoencoders for ECG Analysis [4.3312979375047025]
This paper proposes an unsupervised pre-training technique based on masked autoencoder (MAE) for electrocardiogram (ECG) signals.
In addition, we propose a task-specific fine-tuning to form a complete framework for ECG analysis.
The framework is high-level, universal, and not individually adapted to specific model architectures or tasks.
arXiv Detail & Related papers (2023-10-17T11:19:51Z)
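The masking idea behind MAE-style pre-training on ECG can be sketched roughly as follows. Patch length, mask ratio, and the encoder/decoder interface are assumptions for illustration, not the cited paper's architecture.

```python
# Rough sketch of masked-autoencoder-style pre-training on ECG: split each lead
# into fixed-length patches, hide a random subset, and train to reconstruct them.
import torch
import torch.nn.functional as F


def mask_patches(x: torch.Tensor, patch_len: int = 50, mask_ratio: float = 0.6):
    """x: (batch, leads, samples). Returns the masked signal and the boolean mask
    over patches (True = hidden)."""
    b, l, t = x.shape
    n_patches = t // patch_len
    patches = x[..., : n_patches * patch_len].reshape(b, l, n_patches, patch_len)
    mask = torch.rand(b, l, n_patches, device=x.device) < mask_ratio
    masked = patches.masked_fill(mask.unsqueeze(-1), 0.0)   # zero out hidden patches
    return masked.reshape(b, l, n_patches * patch_len), mask


def mae_step(encoder_decoder, optimizer, x, patch_len: int = 50):
    """One pre-training step: reconstruct the signal, score only the hidden patches."""
    masked, mask = mask_patches(x, patch_len)
    recon = encoder_decoder(masked)                         # assumed same shape as input
    b, l, t = masked.shape
    target = x[..., :t].reshape(b, l, -1, patch_len)
    pred = recon[..., :t].reshape(b, l, -1, patch_len)
    loss = F.mse_loss(pred[mask], target[mask])             # loss on masked patches only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```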
- Leveraging the Power of Data Augmentation for Transformer-based Tracking [64.46371987827312]
We propose two data augmentation methods customized for tracking.
First, we optimize existing random cropping via a dynamic search radius mechanism and simulation for boundary samples.
Second, we propose a token-level feature mixing augmentation strategy, which makes the model more robust to challenges such as background interference.
arXiv Detail & Related papers (2023-09-15T09:18:54Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- Data augmentation for learning predictive models on EEG: a systematic comparison [79.84079335042456]
Deep learning for electroencephalography (EEG) classification tasks has grown rapidly in recent years.
However, progress has been limited by the relatively small size of EEG datasets.
Data augmentation has been a key ingredient to obtain state-of-the-art performances across applications such as computer vision or speech.
arXiv Detail & Related papers (2022-06-29T09:18:15Z)
- An Empirical Study on Distribution Shift Robustness From the Perspective of Pre-Training and Data Augmentation [91.62129090006745]
This paper studies the distribution shift problem from the perspective of pre-training and data augmentation.
We provide the first comprehensive empirical study focusing on pre-training and data augmentation.
arXiv Detail & Related papers (2022-05-25T13:04:53Z)
- Improved Fine-tuning by Leveraging Pre-training Data: Theory and Practice [52.11183787786718]
Fine-tuning a pre-trained model on the target data is widely used in many deep learning applications.
Recent studies have empirically shown that training from scratch can reach final performance no worse than this pre-training strategy.
We propose a novel selection strategy to select a subset from pre-training data to help improve the generalization on the target task.
arXiv Detail & Related papers (2021-11-24T06:18:32Z)
- Opportunities and Challenges of Deep Learning Methods for Electrocardiogram Data: A Systematic Review [62.490310870300746]
The electrocardiogram (ECG) is one of the most commonly used diagnostic tools in medicine and healthcare.
Deep learning methods have achieved promising results on predictive healthcare tasks using ECG signals.
This paper presents a systematic review of deep learning methods for ECG data from both modeling and application perspectives.
arXiv Detail & Related papers (2019-12-28T02:44:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.