Deep AutoAugment
- URL: http://arxiv.org/abs/2203.06172v2
- Date: Tue, 15 Mar 2022 15:36:24 GMT
- Title: Deep AutoAugment
- Authors: Yu Zheng, Zhi Zhang, Shen Yan, Mi Zhang
- Abstract summary: We propose a fully automated approach for data augmentation search named Deep AutoAugment (DeepAA)
DeepAA builds a multi-layer data augmentation pipeline from scratch by stacking augmentation layers one at a time until reaching convergence.
Our experiments show that even without default augmentations, we can learn an augmentation policy that achieves performance on par with that of previous works.
- Score: 22.25911903722286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While recent automated data augmentation methods lead to state-of-the-art
results, their design spaces and the derived data augmentation strategies still
incorporate strong human priors. In this work, instead of fixing a set of
hand-picked default augmentations alongside the searched data augmentations, we
propose a fully automated approach for data augmentation search named Deep
AutoAugment (DeepAA). DeepAA progressively builds a multi-layer data
augmentation pipeline from scratch by stacking augmentation layers one at a
time until reaching convergence. For each augmentation layer, the policy is
optimized to maximize the cosine similarity between the gradients of the
original and augmented data along the direction with low variance. Our
experiments show that even without default augmentations, we can learn an
augmentation policy that achieves performance on par with that of previous
works. Extensive ablation studies show that the regularized gradient matching
is an effective search method for data augmentation policies. Our code is
available at: https://github.com/MSU-MLSys-Lab/DeepAA .
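To make the search objective concrete, the following is a minimal PyTorch sketch of the gradient-matching score described in the abstract: a candidate augmentation is scored by the cosine similarity between the gradient computed on augmented data and the gradient computed on the original batch, with an averaged-sample variance penalty as a crude stand-in for the paper's preference for low-variance directions. The `model`, `loss_fn`, and `augment` arguments are placeholders; this is a sketch of the idea, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F


def flat_grad(loss, model):
    """Flatten the gradient of `loss` w.r.t. all trainable parameters into one vector."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])


def augmentation_score(model, loss_fn, x, y, augment, n_samples=8):
    """Score a candidate augmentation by the cosine similarity between the mean
    gradient over augmented batches and the gradient of the original batch.

    Averaging over several augmented batches and penalizing their gradient
    variance is a crude stand-in for the paper's preference for low-variance
    directions; the weight 1e-3 is an arbitrary illustrative choice.
    """
    g_orig = flat_grad(loss_fn(model(x), y), model)

    aug_grads = []
    for _ in range(n_samples):
        x_aug, y_aug = augment(x, y)  # placeholder candidate augmentation
        aug_grads.append(flat_grad(loss_fn(model(x_aug), y_aug), model))
    aug_grads = torch.stack(aug_grads)          # (n_samples, n_params)

    g_aug_mean = aug_grads.mean(dim=0)
    variance_penalty = aug_grads.var(dim=0).sum()

    cos = F.cosine_similarity(g_aug_mean, g_orig, dim=0)
    return cos - 1e-3 * variance_penalty
```

In a layer-by-layer search of the kind the abstract describes, a score like this would be computed for each candidate policy of the current layer before stacking the next one.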
Related papers
- DualAug: Exploiting Additional Heavy Augmentation with OOD Data
Rejection [77.6648187359111]
We propose a novel data augmentation method, named DualAug, to keep the augmentation in distribution as much as possible at a reasonable time and computational cost.
Experiments on supervised image classification benchmarks show that DualAug improves various automated data augmentation methods.
arXiv Detail & Related papers (2023-10-12T08:55:10Z)
- Dynamic Data Augmentation via MCTS for Prostate MRI Segmentation [19.780410411548935]
We present Dynamic Data Augmentation (DDAug), which is efficient and has negligible cost.
DDAug builds a hierarchical tree structure to represent various augmentations and explores it with Monte Carlo tree search.
Our method outperforms the current state-of-the-art data augmentation strategies.
arXiv Detail & Related papers (2023-05-25T06:44:43Z)
- Advanced Data Augmentation Approaches: A Comprehensive Survey and Future
directions [57.30984060215482]
We provide a background of data augmentation, a novel and comprehensive taxonomy of reviewed data augmentation techniques, and the strengths and weaknesses (wherever possible) of each technique.
We also provide comprehensive results of the data augmentation effect on three popular computer vision tasks: image classification, object detection, and semantic segmentation.
arXiv Detail & Related papers (2023-01-07T11:37:32Z)
- Local Magnification for Data and Feature Augmentation [53.04028225837681]
We propose an easy-to-implement and model-free data augmentation method called Local Magnification (LOMA)
LOMA generates additional training data by randomly magnifying a local area of the image.
Experiments show that our proposed LOMA, though straightforward, can be combined with standard data augmentation to significantly improve the performance on image classification and object detection.
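As a rough illustration of "randomly magnifying a local area of the image", the Pillow-based sketch below crops a random square patch, enlarges it, and pastes the center of the enlarged patch back over its original location. This is one simple way to realize the idea, not necessarily the exact transform used in the LOMA paper; `scale` and `patch_frac` are illustrative parameters.

```python
import random

from PIL import Image


def local_magnify(img: Image.Image, scale: float = 1.5, patch_frac: float = 0.3) -> Image.Image:
    """Magnify a randomly chosen square region of `img` and paste it back in place."""
    w, h = img.size
    side = int(min(w, h) * patch_frac)
    x0 = random.randint(0, w - side)
    y0 = random.randint(0, h - side)

    # Crop a random patch and enlarge it.
    patch = img.crop((x0, y0, x0 + side, y0 + side))
    big = patch.resize((int(side * scale), int(side * scale)))

    # Keep only the center of the enlarged patch so it fits the original hole.
    off = (big.size[0] - side) // 2
    zoomed = big.crop((off, off, off + side, off + side))

    out = img.copy()
    out.paste(zoomed, (x0, y0))
    return out
```

In a training pipeline, such a transform would typically be applied with some probability alongside standard augmentations such as flips and crops.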
arXiv Detail & Related papers (2022-11-15T02:51:59Z)
- Data-Efficient Augmentation for Training Neural Networks [15.870155099135538]
We propose a rigorous technique to select subsets of data points that when augmented, closely capture the training dynamics of full data augmentation.
Our method achieves 6.3x speedup on CIFAR10 and 2.2x speedup on SVHN, and outperforms the baselines by up to 10% across various subset sizes.
arXiv Detail & Related papers (2022-10-15T19:32:20Z)
- Smart(Sampling)Augment: Optimal and Efficient Data Augmentation for
Semantic Segmentation [68.8204255655161]
We provide the first study on semantic image segmentation and introduce two new approaches: SmartAugment and SmartSamplingAugment.
SmartAugment uses Bayesian Optimization to search over a rich space of augmentation strategies and achieves a new state-of-the-art performance in all semantic segmentation tasks we consider.
SmartSamplingAugment, a simple parameter-free approach with a fixed augmentation strategy, competes in performance with the existing resource-intensive approaches and outperforms cheap state-of-the-art data augmentation methods.
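As a hedged sketch of what Bayesian optimization over augmentation strategies can look like in general (this is not the SmartAugment implementation), the loop below fits a Gaussian-process surrogate with scikit-learn and selects the next augmentation configuration by expected improvement. The `evaluate` callback, which would train and validate a segmentation model under a given configuration, and the search-space `bounds` are placeholders.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor


def expected_improvement(mu, sigma, best):
    """Expected improvement of candidate points over the best observed score."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)


def bo_augmentation_search(evaluate, bounds, n_init=5, n_iter=20, seed=0):
    """Generic Bayesian optimization over augmentation hyperparameters.

    `evaluate(x)` is a placeholder that trains/validates a model with the
    augmentation configuration `x` and returns a validation score.
    `bounds` has shape (dim, 2) with per-dimension search ranges.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)

    def sample(n):
        return rng.uniform(bounds[:, 0], bounds[:, 1], size=(n, dim))

    X = sample(n_init)
    y = np.array([evaluate(x) for x in X])

    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        candidates = sample(256)
        mu, sigma = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, evaluate(x_next))

    best = np.argmax(y)
    return X[best], y[best]
```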
arXiv Detail & Related papers (2021-10-31T13:04:45Z)
- Direct Differentiable Augmentation Search [25.177623230408656]
We propose an efficient differentiable search algorithm called Direct Differentiable Augmentation Search (DDAS)
It exploits meta-learning with a one-step gradient update and a continuous relaxation of the expected training loss for efficient search.
Our DDAS achieves state-of-the-art performance and efficiency tradeoff while reducing the search cost dramatically.
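The two ingredients named above, a continuous relaxation of the augmentation choice and a one-step gradient update, can be sketched roughly as follows (a DARTS-style approximation, not the authors' DDAS code). The expected training loss is a softmax-weighted sum of per-operation losses; a virtual one-step SGD update of the model is then differentiated to move the operation logits toward lower validation loss. Here `ops`, `loss_fn`, and the batches are placeholders, `op_logits` is a tensor with `requires_grad=True` holding one entry per candidate operation, and `torch.func.functional_call` assumes PyTorch 2.x.

```python
import torch
import torch.nn.functional as F


def expected_training_loss(model, loss_fn, x, y, ops, op_logits):
    """Continuous relaxation: expected loss under a softmax distribution over ops."""
    weights = F.softmax(op_logits, dim=0)
    losses = torch.stack([loss_fn(model(op(x)), y) for op in ops])
    return (weights * losses).sum()


def search_step(model, loss_fn, train_batch, val_batch, ops, op_logits,
                model_lr=0.1, logit_lr=0.01):
    """One search step: virtual one-step model update under the relaxed training
    loss, then a gradient step on the op logits w.r.t. the validation loss of
    the virtually updated model."""
    x_tr, y_tr = train_batch
    x_val, y_val = val_batch

    names, params = zip(*model.named_parameters())
    train_loss = expected_training_loss(model, loss_fn, x_tr, y_tr, ops, op_logits)
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    updated = {n: p - model_lr * g for n, p, g in zip(names, params, grads)}

    # Validation loss of the virtually updated model (parameters passed functionally).
    val_out = torch.func.functional_call(model, updated, (x_val,))
    val_loss = loss_fn(val_out, y_val)

    logit_grad, = torch.autograd.grad(val_loss, op_logits)
    with torch.no_grad():
        op_logits -= logit_lr * logit_grad
    return val_loss.detach()
```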
arXiv Detail & Related papers (2021-04-09T10:02:24Z)
- GABO: Graph Augmentations with Bi-level Optimization [0.0]
In this work we apply one such method, bilevel optimization, to tackle the problem of graph classification on the ogbg-molhiv dataset.
Our best performing augmentation achieved a test ROCAUC score of 77.77% with a GIN+virtual classifier.
This framework combines a GIN layer augmentation generator with a bias transformation and outperforms the same classifier augmented using the state-of-the-art FLAG augmentation.
arXiv Detail & Related papers (2021-04-01T19:00:17Z)
- Adaptive Weighting Scheme for Automatic Time-Series Data Augmentation [79.47771259100674]
We present two sample-adaptive automatic weighting schemes for data augmentation.
We validate our proposed methods on a large, noisy financial dataset and on time-series datasets from the UCR archive.
On the financial dataset, we show that the methods in combination with a trading strategy lead to improvements in annualized returns of over 50%, and on the time-series data we outperform state-of-the-art models on over half of the datasets, and achieve similar accuracy on the others.
arXiv Detail & Related papers (2021-02-16T17:50:51Z)
- Generalization in Reinforcement Learning by Soft Data Augmentation [11.752595047069505]
SOft Data Augmentation (SODA) is a method that decouples augmentation from policy learning.
We find SODA to significantly advance sample efficiency, generalization, and stability in training over state-of-the-art vision-based RL methods.
arXiv Detail & Related papers (2020-11-26T17:00:34Z)
- Improving 3D Object Detection through Progressive Population Based
Augmentation [91.56261177665762]
We present the first attempt to automate the design of data augmentation policies for 3D object detection.
We introduce the Progressive Population Based Augmentation (PPBA) algorithm, which learns to optimize augmentation strategies by narrowing down the search space and adopting the best parameters discovered in previous iterations.
We find that PPBA may be up to 10x more data efficient than baseline 3D detection models without augmentation, highlighting that 3D detection models may achieve competitive accuracy with far fewer labeled examples.
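As a toy illustration of population-based augmentation search in the spirit of this summary (not the PPBA algorithm itself), the loop below keeps a small population of augmentation parameter settings, lets poor performers copy and perturb the parameters of good performers, and gradually shrinks the perturbation range as a crude nod to narrowing the search space. `evaluate` and `init_params` are placeholders for a short 3D-detection training/validation run and a random initial configuration.

```python
import random


def population_search(evaluate, init_params, n_rounds=10, pop_size=8,
                      mutate_scale=0.3, shrink=0.9):
    """Toy population-based search over augmentation parameters (values in [0, 1]).

    `evaluate(params)` briefly trains a model with the given augmentation
    parameters and returns a validation score; `init_params()` returns a random
    parameter dict such as {"rotate_mag": 0.4, "drop_prob": 0.1}.
    """
    population = [init_params() for _ in range(pop_size)]
    scores = [evaluate(p) for p in population]
    k = max(1, pop_size // 4)

    for _ in range(n_rounds):
        ranked = sorted(range(pop_size), key=lambda i: scores[i], reverse=True)
        top, bottom = ranked[:k], ranked[-k:]

        for loser in bottom:
            winner = random.choice(top)
            # Exploit the winner's parameters, then explore by perturbing them.
            new = {key: min(1.0, max(0.0, val + random.uniform(-mutate_scale, mutate_scale)))
                   for key, val in population[winner].items()}
            population[loser] = new
            scores[loser] = evaluate(new)

        # Crude stand-in for PPBA's progressive narrowing of the search space.
        mutate_scale *= shrink

    best = max(range(pop_size), key=lambda i: scores[i])
    return population[best], scores[best]
```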
arXiv Detail & Related papers (2020-04-02T05:57:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.