Direct Differentiable Augmentation Search
- URL: http://arxiv.org/abs/2104.04282v1
- Date: Fri, 9 Apr 2021 10:02:24 GMT
- Title: Direct Differentiable Augmentation Search
- Authors: Aoming Liu, Zehao Huang, Zhiwu Huang, Naiyan Wang
- Abstract summary: We propose an efficient differentiable search algorithm called Direct Differentiable Augmentation Search (DDAS).
It exploits meta-learning with a one-step gradient update and a continuous relaxation of the expected training loss for efficient search.
Our DDAS achieves a state-of-the-art performance-efficiency tradeoff while dramatically reducing the search cost.
- Score: 25.177623230408656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation has been an indispensable tool for improving the
performance of deep neural networks; however, augmentation policies can hardly
transfer across different tasks and datasets. Consequently, a recent trend is
to adopt AutoML techniques to learn a proper augmentation policy without
extensive hand-crafted tuning. In this paper, we propose an efficient
differentiable search algorithm called Direct Differentiable Augmentation
Search (DDAS). It exploits meta-learning with a one-step gradient update and a
continuous relaxation of the expected training loss for efficient search. Our
DDAS achieves efficient augmentation search without relying on approximations
such as Gumbel-Softmax or second-order gradient approximation. To further
reduce the adverse effect of improper augmentations, we organize the search
space into a two-level hierarchy, in which we first decide whether to apply
augmentation and then determine the specific augmentation policy. On standard
image classification benchmarks, DDAS achieves a state-of-the-art
performance-efficiency tradeoff while reducing the search cost dramatically,
e.g., 0.15 GPU hours for CIFAR-10. In addition, we also use DDAS to search
augmentation policies for object detection and achieve performance comparable
to AutoAugment while being 1000x faster.
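The search scheme described in the abstract — a softmax-weighted expected training loss, a one-step gradient update of the model, and an outer update of the augmentation logits against validation loss — can be illustrated on a toy problem. The following NumPy sketch is a hypothetical illustration, not the paper's implementation: the augmentation ops, data, hyperparameters, and the finite-difference outer gradient (a stand-in for the paper's analytic one) are all invented for exposition.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Toy 1-D regression data: clean train/val splits with true slope 2.
rng = np.random.default_rng(0)
x_tr = rng.normal(size=64); y_tr = 2.0 * x_tr
x_va = rng.normal(size=64); y_va = 2.0 * x_va

# Hypothetical candidate augmentations for this 1-D toy.
ops = [
    lambda x: x,                                    # identity ("no augmentation")
    lambda x: x + 0.1 * rng.normal(size=x.shape),   # mild noise
    lambda x: -x,                                   # harmful op: flips the input
]

def loss(w, x, y):
    return np.mean((w * x - y) ** 2)

def grad_w(w, x, y):
    return np.mean(2 * (w * x - y) * x)

w, alpha, lr_w, lr_a, eps = 0.0, np.zeros(len(ops)), 0.05, 0.5, 1e-4
for _ in range(200):
    p = softmax(alpha)
    xs = [op(x_tr) for op in ops]
    # Continuous relaxation: expected training-loss gradient over ops.
    g = sum(pk * grad_w(w, xk, y_tr) for pk, xk in zip(p, xs))
    w_new = w - lr_w * g  # one-step gradient update of the model
    # Move the logits to lower validation loss after the one-step update
    # (finite differences here; the paper derives this gradient analytically).
    grad_a = np.zeros_like(alpha)
    for k in range(len(alpha)):
        a2 = alpha.copy(); a2[k] += eps
        p2 = softmax(a2)
        g2 = sum(pk * grad_w(w, xk, y_tr) for pk, xk in zip(p2, xs))
        grad_a[k] = (loss(w - lr_w * g2, x_va, y_va)
                     - loss(w_new, x_va, y_va)) / eps
    alpha -= lr_a * grad_a
    w = w_new

p = softmax(alpha)
print("op probabilities:", np.round(p, 3), "w:", round(w, 3))
```

On this toy, the identity op plays the role of the first level of the paper's two-level hierarchy (whether to augment at all), and the harmful sign-flip op should end up with the lowest probability while the model weight converges near the true slope.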
Related papers
- DiffNAS: Bootstrapping Diffusion Models by Prompting for Better Architectures [63.12993314908957]
We propose a base model search approach, denoted "DiffNAS".
We leverage GPT-4 as a supernet to expedite the search, supplemented with a search memory to enhance the results.
Rigorous experimentation corroborates that our algorithm can improve search efficiency by 2x under GPT-based scenarios.
arXiv Detail & Related papers (2023-10-07T09:10:28Z)
- RangeAugment: Efficient Online Augmentation with Range Learning [54.61514286212455]
RangeAugment efficiently learns the range of magnitudes for individual as well as composite augmentation operations.
We show that RangeAugment achieves competitive performance to state-of-the-art automatic augmentation methods with 4-5 times fewer augmentation operations.
arXiv Detail & Related papers (2022-12-20T18:55:54Z)
- Deep AutoAugment [22.25911903722286]
We propose a fully automated approach for data augmentation search named Deep AutoAugment (DeepAA).
DeepAA builds a multi-layer data augmentation pipeline from scratch by stacking augmentation layers one at a time until reaching convergence.
Our experiments show that even without default augmentations, we can learn an augmentation policy whose performance is comparable to that of previous works.
arXiv Detail & Related papers (2022-03-11T18:57:27Z)
- DAAS: Differentiable Architecture and Augmentation Policy Search [107.53318939844422]
This work considers the possible coupling between neural architectures and data augmentation and proposes an effective algorithm jointly searching for them.
Our approach achieves 97.91% accuracy on CIFAR-10 and 76.6% Top-1 accuracy on ImageNet dataset, showing the outstanding performance of our search algorithm.
arXiv Detail & Related papers (2021-09-30T17:15:17Z)
- Scale-aware Automatic Augmentation for Object Detection [63.087930708444695]
We propose Scale-aware AutoAug to learn data augmentation policies for object detection.
In experiments, Scale-aware AutoAug yields significant and consistent improvement on various object detectors.
arXiv Detail & Related papers (2021-03-31T17:11:14Z)
- Improving Auto-Augment via Augmentation-Wise Weight Sharing [123.71986174280741]
A key component of automatic augmentation search is the evaluation process for a particular augmentation policy.
In this paper, we dive into the dynamics of augmented training of the model.
We design a powerful and efficient proxy task based on the Augmentation-Wise Weight Sharing (AWS) to form a fast yet accurate evaluation process.
arXiv Detail & Related papers (2020-09-30T15:23:12Z)
- Hypernetwork-Based Augmentation [1.6752182911522517]
We propose an efficient gradient-based search algorithm, called Hypernetwork-Based Augmentation (HBA).
Our HBA uses a hypernetwork to approximate a population-based training algorithm.
Our results show that HBA is competitive to the state-of-the-art methods in terms of both search speed and accuracy.
arXiv Detail & Related papers (2020-06-11T10:36:39Z) - UniformAugment: A Search-free Probabilistic Data Augmentation Approach [0.019573380763700708]
Augmenting training datasets has been shown to improve the learning effectiveness for several computer vision tasks.
Some techniques, such as AutoAugment and Fast AutoAugment, have introduced a search phase to find a set of suitable augmentation policies.
We propose UniformAugment, an automated data augmentation approach that completely avoids a search phase.
arXiv Detail & Related papers (2020-03-31T16:32:18Z) - DADA: Differentiable Automatic Data Augmentation [58.560309490774976]
We propose Differentiable Automatic Data Augmentation (DADA) which dramatically reduces the cost.
We conduct extensive experiments on CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets.
Results show our DADA is at least one order of magnitude faster than the state-of-the-art while achieving very comparable accuracy.
arXiv Detail & Related papers (2020-03-08T13:23:14Z)
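Several methods in this family make the discrete policy-sampling step differentiable via the Gumbel-Softmax relaxation — the kind of approximation the DDAS abstract above says it avoids. For concreteness, here is a generic NumPy sketch of that relaxation over toy logits; it is an illustration of the general trick, not any specific paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def gumbel_softmax(logits, tau=0.5):
    # Add Gumbel(0, 1) noise and take a temperature-controlled softmax:
    # as tau -> 0 the sample approaches a one-hot choice, while the map
    # from logits to the sample remains differentiable.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

logits = np.array([1.0, 0.5, -1.0])   # toy scores for three candidate ops
sample = gumbel_softmax(logits, tau=0.5)
print(sample)  # a near-one-hot probability vector over the three ops
```

Lowering `tau` sharpens samples toward discrete choices at the cost of higher gradient variance, which is one reason methods like DDAS seek relaxations that avoid this estimator altogether.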
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.