TrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation
- URL: http://arxiv.org/abs/2103.10158v1
- Date: Thu, 18 Mar 2021 10:48:02 GMT
- Title: TrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation
- Authors: Samuel G. Müller, Frank Hutter
- Abstract summary: We present a most simple automatic augmentation baseline, TrivialAugment, that outperforms previous methods almost for free.
To us, TrivialAugment's effectiveness is very unexpected.
We propose best practices for sustained future progress in automatic augmentation methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic augmentation methods have recently become a crucial pillar for
strong model performance in vision tasks. Current methods are mostly a
trade-off between being simple, inexpensive, or well-performing. We present a
most simple automatic augmentation baseline, TrivialAugment, that outperforms
previous methods almost for free. It is parameter-free and only applies a
single augmentation to each image. To us, TrivialAugment's effectiveness is
very unexpected. Thus, we performed very thorough experiments on its
performance. First, we compare TrivialAugment to previous state-of-the-art
methods in a plethora of scenarios. Then, we perform multiple ablation studies
with different augmentation spaces, augmentation methods, and setups to
understand the crucial requirements for its performance. We condense our
findings into recommendations for users of automatic augmentation.
Additionally, we provide a simple interface for using multiple automatic
augmentation methods in any codebase, as well as our full codebase for
reproducibility. Since our work reveals a stagnation in many parts of
automatic augmentation research, we end with a short proposal of best
practices for sustained future progress in automatic augmentation methods.
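The core procedure described in the abstract, draw one augmentation operation and one strength uniformly at random and apply it to the image, can be sketched as follows. This is a minimal illustration, not the authors' reference implementation: the three toy operations and the plain list-of-ints image are simplified assumptions standing in for the paper's real augmentation space; only the uniform one-op sampling and the 31-level strength grid mirror the paper's setup.

```python
import random

def invert(img, _):
    # Invert pixel values; strength-independent, as some ops in the paper are
    return [[255 - p for p in row] for row in img]

def brightness(img, m):
    # Shift brightness by up to +64, scaled by the sampled strength m in [0, 30]
    delta = int(64 * m / 30)
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

def posterize(img, m):
    # Reduce bit depth; a stronger m keeps fewer bits (8 down to 4)
    bits = 8 - int(4 * m / 30)
    mask = ~((1 << (8 - bits)) - 1) & 0xFF
    return [[p & mask for p in row] for row in img]

# Hypothetical stand-ins for the paper's full set of augmentation operations
OPS = [invert, brightness, posterize]

def trivial_augment(img, rng=random):
    """Draw ONE op and ONE strength, both uniformly at random, and apply."""
    op = rng.choice(OPS)
    m = rng.randint(0, 30)  # the paper samples from 31 discrete strengths
    return op(img, m)

img = [[0, 128, 255], [64, 32, 200]]
aug = trivial_augment(img)
```

Note the absence of any search loop or learned policy: the method's entire "tuning" is this pair of uniform draws per image, which is what makes it parameter-free.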
Related papers
- On the test-time zero-shot generalization of vision-language models: Do we really need prompt learning?
We introduce a robust MeanShift for Test-time Augmentation (MTA).
MTA surpasses prompt-based methods without requiring their intensive training procedure.
We extensively benchmark our method on 15 datasets and demonstrate MTA's superiority and computational efficiency.
arXiv Detail & Related papers (2024-05-03T17:34:02Z)
- RangeAugment: Efficient Online Augmentation with Range Learning
RangeAugment efficiently learns the range of magnitudes for individual as well as composite augmentation operations.
We show that RangeAugment achieves competitive performance to state-of-the-art automatic augmentation methods with 4-5 times fewer augmentation operations.
arXiv Detail & Related papers (2022-12-20T18:55:54Z)
- Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach
We show that a prior-free autonomous data augmentation's objective can be derived from a representation learning principle.
We then propose a practical surrogate to the objective that can be efficiently optimized and integrated seamlessly into existing methods.
arXiv Detail & Related papers (2022-11-02T02:02:51Z)
- Smart(Sampling)Augment: Optimal and Efficient Data Augmentation for Semantic Segmentation
We provide the first study on semantic image segmentation and introduce two new approaches: SmartAugment and SmartSamplingAugment.
SmartAugment uses Bayesian Optimization to search over a rich space of augmentation strategies and achieves a new state-of-the-art performance in all semantic segmentation tasks we consider.
SmartSamplingAugment, a simple parameter-free approach with a fixed augmentation strategy, competes in performance with existing resource-intensive approaches and outperforms cheap state-of-the-art data augmentation methods.
arXiv Detail & Related papers (2021-10-31T13:04:45Z)
- Augmentation Pathways Network for Visual Recognition
This paper introduces Augmentation Pathways (AP) to stabilize training on a much wider range of augmentation policies.
AP tames heavy data augmentations and stably boosts performance without a careful selection among augmentation policies.
Experimental results on ImageNet benchmarks demonstrate AP's compatibility and effectiveness across a much wider range of augmentations.
arXiv Detail & Related papers (2021-07-26T06:54:53Z)
- Enabling Data Diversity: Efficient Automatic Augmentation via Regularized Adversarial Training
We propose a regularized adversarial training framework via two min-max objectives and three differentiable augmentation models.
Our approach achieves superior performance over state-of-the-art auto-augmentation methods on both tasks of 2D skin cancer classification and 3D organs-at-risk segmentation.
arXiv Detail & Related papers (2021-03-30T16:49:20Z)
- Automatic Data Augmentation for 3D Medical Image Segmentation
This is the first time differentiable automatic data augmentation has been employed in medical image segmentation tasks.
Our numerical experiments demonstrate that the proposed approach significantly outperforms the built-in data augmentation of existing state-of-the-art models.
arXiv Detail & Related papers (2020-10-07T12:51:17Z)
- Improving Auto-Augment via Augmentation-Wise Weight Sharing
A key component of automatic augmentation search is the evaluation process for a particular augmentation policy.
In this paper, we dive into the dynamics of augmented training of the model.
We design a powerful and efficient proxy task based on Augmentation-Wise Weight Sharing (AWS) to form a fast yet accurate evaluation process.
arXiv Detail & Related papers (2020-09-30T15:23:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.