Enabling Data Diversity: Efficient Automatic Augmentation via
Regularized Adversarial Training
- URL: http://arxiv.org/abs/2103.16493v1
- Date: Tue, 30 Mar 2021 16:49:20 GMT
- Authors: Yunhe Gao, Zhiqiang Tang, Mu Zhou, Dimitris Metaxas
- Abstract summary: We propose a regularized adversarial training framework via two min-max objectives and three differentiable augmentation models.
Our approach achieves superior performance over state-of-the-art auto-augmentation methods on both tasks of 2D skin cancer classification and 3D organs-at-risk segmentation.
- Score: 9.39080195887973
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data augmentation has proved extremely useful by increasing training data
variance to alleviate overfitting and improve deep neural networks'
generalization performance. In medical image analysis, a well-designed
augmentation policy usually requires much expert knowledge and is difficult to
generalize to multiple tasks due to the vast discrepancies among pixel
intensities, image appearances, and object shapes in different medical tasks.
To automate medical data augmentation, we propose a regularized adversarial
training framework via two min-max objectives and three differentiable
augmentation models covering affine transformation, deformation, and appearance
changes. Our method is more automatic and efficient than previous automatic
augmentation methods, which still rely on pre-defined operations with
human-specified ranges and costly bi-level optimization. Extensive experiments
demonstrate that our approach, with less training overhead, achieves superior
performance over state-of-the-art auto-augmentation methods on both 2D skin
cancer classification and 3D organs-at-risk segmentation.
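The regularized adversarial training idea above can be illustrated with a minimal sketch: an augmenter parameter is trained to *maximize* the task loss (subject to a regularizer that keeps the perturbation plausible), while the model is trained to *minimize* the loss on the augmented inputs. This toy example uses a 1-D linear regression and a single additive "appearance" shift as the augmentation model; the hyperparameters (`lr_w`, `lr_d`, `lam_reg`) and the alternating-gradient scheme are illustrative assumptions, not the paper's actual objectives or augmentation networks.

```python
import numpy as np

# Toy 1-D regression: ground-truth slope 2 with small label noise.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + 0.1 * rng.normal(size=100)

w = 0.0        # model parameter (slope)
delta = 0.0    # adversarial "appearance" shift applied to the inputs
lr_w, lr_d, lam_reg = 0.05, 0.05, 10.0  # illustrative hyperparameters

def loss(w, delta):
    """Task loss on inputs shifted by the augmentation parameter."""
    r = w * (x + delta) - y
    return float(np.mean(r ** 2))

for _ in range(200):
    # Inner (max) step: the augmenter ascends the task loss, with an L2
    # penalty keeping the perturbation within a plausible range.
    r = w * (x + delta) - y
    delta += lr_d * (np.mean(2.0 * r * w) - 2.0 * lam_reg * delta)
    # Outer (min) step: the model descends the loss on augmented inputs.
    r = w * (x + delta) - y
    w -= lr_w * np.mean(2.0 * r * (x + delta))
```

Because the regularizer dominates once the model fits the data, the shift stays small while the model still sees perturbed inputs during training, which is the intuition behind regularizing the adversary rather than letting it perturb freely.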
Related papers
- AdaAugment: A Tuning-Free and Adaptive Approach to Enhance Data Augmentation [12.697608744311122]
AdaAugment is a tuning-free, adaptive data augmentation method.
It dynamically adjusts augmentation magnitudes for individual training samples based on real-time feedback from the target network.
It consistently outperforms other state-of-the-art DA methods in effectiveness while maintaining remarkable efficiency.
arXiv Detail & Related papers (2024-05-19T06:54:03Z) - LA3: Efficient Label-Aware AutoAugment [23.705059658590436]
We propose a novel two-stage data augmentation algorithm, named Label-Aware AutoAugment (LA3), which takes advantage of the label information.
LA3 consists of two learning stages, where in the first stage, individual augmentation methods are evaluated and ranked for each label.
In the second stage, a composite augmentation policy is constructed out of a selection of effective as well as complementary augmentations, which produces significant performance boost.
arXiv Detail & Related papers (2023-04-20T13:42:18Z) - RangeAugment: Efficient Online Augmentation with Range Learning [54.61514286212455]
RangeAugment efficiently learns the range of magnitudes for individual as well as composite augmentation operations.
We show that RangeAugment achieves competitive performance to state-of-the-art automatic augmentation methods with 4-5 times fewer augmentation operations.
arXiv Detail & Related papers (2022-12-20T18:55:54Z) - Adversarial Auto-Augment with Label Preservation: A Representation
Learning Principle Guided Approach [95.74102207187545]
We show that a prior-free autonomous data augmentation's objective can be derived from a representation learning principle.
We then propose a practical surrogate to the objective that can be efficiently optimized and integrated seamlessly into existing methods.
arXiv Detail & Related papers (2022-11-02T02:02:51Z) - Efficient and Effective Augmentation Strategy for Adversarial Training [48.735220353660324]
Adversarial training of Deep Neural Networks is known to be significantly more data-hungry than standard training.
We propose Diverse Augmentation-based Joint Adversarial Training (DAJAT) to use data augmentations effectively in adversarial training.
arXiv Detail & Related papers (2022-10-27T10:59:55Z) - Learning to Augment via Implicit Differentiation for Domain
Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome the problem by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks, PACS, Office-Home and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z) - Robust and Efficient Medical Imaging with Self-Supervision [80.62711706785834]
We present REMEDIS, a unified representation learning strategy to improve robustness and data-efficiency of medical imaging AI.
We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data.
arXiv Detail & Related papers (2022-05-19T17:34:18Z) - Automatic Data Augmentation for 3D Medical Image Segmentation [37.262350163905445]
This is the first time differentiable automatic data augmentation has been employed in medical image segmentation tasks.
Our numerical experiments demonstrate that the proposed approach significantly outperforms the built-in data augmentation of existing state-of-the-art models.
arXiv Detail & Related papers (2020-10-07T12:51:17Z) - Automatic Data Augmentation via Deep Reinforcement Learning for
Effective Kidney Tumor Segmentation [57.78765460295249]
We develop a novel automatic learning-based data augmentation method for medical image segmentation.
In our method, we innovatively combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss.
We extensively evaluated our method on CT kidney tumor segmentation which validated the promising results of our method.
arXiv Detail & Related papers (2020-02-22T14:10:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.