MetaAugment: Sample-Aware Data Augmentation Policy Learning
- URL: http://arxiv.org/abs/2012.12076v1
- Date: Tue, 22 Dec 2020 15:19:27 GMT
- Title: MetaAugment: Sample-Aware Data Augmentation Policy Learning
- Authors: Fengwei Zhou, Jiawei Li, Chuanlong Xie, Fei Chen, Lanqing Hong, Rui
Sun, Zhenguo Li
- Abstract summary: We learn a sample-aware data augmentation policy efficiently by formulating it as a sample reweighting problem.
An augmentation policy network takes a transformation and the corresponding augmented image as inputs, and outputs a weight to adjust the augmented image loss computed by a task network.
At the training stage, the task network minimizes the weighted losses of augmented training images, while the policy network minimizes the loss of the task network on a validation set via meta-learning.
- Score: 20.988767360529362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated data augmentation has shown superior performance in image
recognition. Existing works search for dataset-level augmentation policies
without considering individual sample variations, which are likely to be
sub-optimal. On the other hand, learning different policies for different
samples naively could greatly increase the computing cost. In this paper, we
learn a sample-aware data augmentation policy efficiently by formulating it as
a sample reweighting problem. Specifically, an augmentation policy network
takes a transformation and the corresponding augmented image as inputs, and
outputs a weight to adjust the augmented image loss computed by a task network.
At the training stage, the task network minimizes the weighted losses of augmented
training images, while the policy network minimizes the loss of the task
network on a validation set via meta-learning. We theoretically prove the
convergence of the training procedure and further derive the exact convergence
rate. Superior performance is achieved on widely-used benchmarks including
CIFAR-10/100, Omniglot, and ImageNet.
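The bi-level training described in the abstract can be sketched in miniature: a task model minimizes per-sample-weighted augmented losses, while a policy parameter is updated to reduce the validation loss. This is an illustrative sketch only, assuming a toy 1-D regression task, a single scalar policy parameter standing in for the policy network, and a finite-difference meta-gradient in place of the paper's exact meta-learning update; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: 1-D linear regression; "augmentation" adds input noise whose
# magnitude plays the role of the sampled transformation.
x_train = rng.normal(size=(32, 1))
y_train = 3.0 * x_train[:, 0]
x_val = rng.normal(size=(16, 1))
y_val = 3.0 * x_val[:, 0]

w_task = np.zeros(1)   # task-network parameter (slope of the linear model)
theta = 0.0            # policy parameter: maps magnitude -> sample weight

def task_loss(w, x, y):
    return float(((x @ w - y) ** 2).mean())

lr_task, lr_policy, eps = 0.1, 0.5, 1e-3
for _ in range(200):
    mags = rng.uniform(0.0, 1.0, size=len(x_train))   # sampled transforms
    x_aug = x_train + mags[:, None] * rng.normal(size=x_train.shape)

    def inner_update(th):
        # Policy stand-in: sigmoid weight that can downweight strong augments.
        weights = 1.0 / (1.0 + np.exp(-th * (1.0 - mags)))
        resid = x_aug @ w_task - y_train
        grad = 2.0 * (x_aug * (weights * resid)[:, None]).mean(axis=0)
        return w_task - lr_task * grad

    # Meta-gradient of the validation loss w.r.t. theta (finite differences).
    d_val = (task_loss(inner_update(theta + eps), x_val, y_val)
             - task_loss(inner_update(theta - eps), x_val, y_val))
    theta -= lr_policy * d_val / (2.0 * eps)

    w_task = inner_update(theta)  # task update under the current policy

print(f"learned slope: {w_task[0]:.2f} (clean-data slope is 3.0)")
```

The inner update plays the role of the task network's step on weighted augmented losses; the outer update adjusts the policy so that the post-update model does better on held-out data, mirroring the paper's validation-driven objective.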
Related papers
- When to Learn What: Model-Adaptive Data Augmentation Curriculum [32.99634881669643]
We propose Model Adaptive Data Augmentation (MADAug) to jointly train an augmentation policy network to teach the model when to learn what.
Unlike previous work, MADAug selects augmentation operators for each input image by a model-adaptive policy varying between training stages, producing a data augmentation curriculum optimized for better generalization.
arXiv Detail & Related papers (2023-09-09T10:35:27Z)
- Soft Augmentation for Image Classification [68.71067594724663]
We propose generalizing augmentation with invariant transforms to soft augmentation.
We show that soft targets allow for more aggressive data augmentation.
We also show that soft augmentations generalize to self-supervised classification tasks.
arXiv Detail & Related papers (2022-11-09T01:04:06Z)
- Don't Touch What Matters: Task-Aware Lipschitz Data Augmentation for Visual Reinforcement Learning [27.205521177841568]
We propose Task-aware Lipschitz Data Augmentation (TLDA) for visual Reinforcement Learning (RL).
TLDA explicitly identifies the task-correlated pixels with large Lipschitz constants, and only augments the task-irrelevant pixels.
It outperforms previous state-of-the-art methods across three different visual control benchmarks.
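The masking idea behind TLDA can be sketched as follows, assuming the task-relevance mask is already given (the paper derives it from Lipschitz constants, which is omitted here; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_augment(image, task_mask, noise_scale=0.5):
    """Perturb only pixels NOT flagged as task-correlated.

    `task_mask` is 1 where a (hypothetical) relevance test marks a pixel
    as task-relevant; those pixels are left untouched, so the augmentation
    cannot destroy the signal the policy depends on.
    """
    noise = noise_scale * rng.normal(size=image.shape)
    return image + (1.0 - task_mask) * noise

image = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0   # pretend only the center patch matters for the task
out = masked_augment(image, mask)
```

Only the border pixels change; the masked center patch passes through unmodified.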
arXiv Detail & Related papers (2022-02-21T04:22:07Z)
- Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles on the data level by adding images generated by combining fourteen augmentation approaches.
Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
arXiv Detail & Related papers (2022-01-24T14:12:29Z)
- Contrastive Learning with Stronger Augmentations [63.42057690741711]
We propose a general framework called Contrastive Learning with Stronger Augmentations (CLSA) to complement current contrastive learning approaches.
Here, the distribution divergence between the weakly and strongly augmented images over the representation bank is adopted to supervise the retrieval of strongly augmented queries.
Experiments show that information from the strongly augmented images can significantly boost performance.
arXiv Detail & Related papers (2021-04-15T18:40:04Z)
- Learning Representational Invariances for Data-Efficient Action Recognition [52.23716087656834]
We show that our data augmentation strategy leads to promising performance on the Kinetics-100, UCF-101, and HMDB-51 datasets.
We also validate our data augmentation strategy in the fully supervised setting and demonstrate improved performance.
arXiv Detail & Related papers (2021-03-30T17:59:49Z)
- Does Data Augmentation Benefit from Split BatchNorms [29.134017115737507]
State-of-the-art data augmentation strongly distorts training images, leading to a disparity between examples seen during training and inference.
We propose an auxiliary BatchNorm for the potentially out-of-distribution, strongly augmented images.
We find that this method significantly improves the performance of common image classification benchmarks such as CIFAR-10, CIFAR-100, and ImageNet.
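The auxiliary-BatchNorm idea amounts to keeping separate normalization statistics for clean and strongly augmented batches. A minimal sketch, assuming a plain numpy normalizer in place of a real BatchNorm layer (class and parameter names are illustrative):

```python
import numpy as np

class SplitBatchNorm:
    """Auxiliary-statistics normalizer: clean/weak batches and strongly
    augmented batches each update their own running mean and variance."""

    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        self.momentum, self.eps = momentum, eps
        # Row 0: main (clean) statistics; row 1: auxiliary (strong) statistics.
        self.running_mean = np.zeros((2, num_features))
        self.running_var = np.ones((2, num_features))

    def __call__(self, x, strong_aug=False):
        i = int(strong_aug)
        mean, var = x.mean(axis=0), x.var(axis=0)
        m = self.momentum
        self.running_mean[i] = (1 - m) * self.running_mean[i] + m * mean
        self.running_var[i] = (1 - m) * self.running_var[i] + m * var
        return (x - mean) / np.sqrt(var + self.eps)

rng = np.random.default_rng(0)
bn = SplitBatchNorm(num_features=4)
out_clean = bn(rng.normal(0.0, 1.0, size=(64, 4)), strong_aug=False)
out_strong = bn(rng.normal(2.0, 3.0, size=(64, 4)), strong_aug=True)
```

Routing the distribution-shifted (strongly augmented) batch to its own statistics keeps the main statistics faithful to the inference-time distribution, which is the disparity the paper targets.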
arXiv Detail & Related papers (2020-10-15T15:00:43Z)
- Data Augmentation for Meta-Learning [58.47185740820304]
Meta-learning algorithms sample support data, query data, and tasks on each training step.
Data augmentation can be used not only to expand the number of images available per class, but also to generate entirely new classes/tasks.
Our proposed meta-specific data augmentation significantly improves the performance of meta-learners on few-shot classification benchmarks.
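One way augmentation can generate entirely new classes, as the summary above describes, is to treat each rotation of a class as a distinct class. A small sketch under that assumption (the function name and data layout are illustrative, not the paper's API):

```python
import numpy as np

def expand_classes_by_rotation(images_by_class):
    """Treat each 90-degree rotation of a class as a brand-new class,
    multiplying the number of few-shot classes by four."""
    expanded = {}
    for label, imgs in images_by_class.items():
        for k in range(4):
            expanded[(label, k)] = [np.rot90(im, k) for im in imgs]
    return expanded

base = {"glyph": [np.arange(9).reshape(3, 3), np.eye(3)]}
new_tasks = expand_classes_by_rotation(base)
```

From one class we obtain four, each with the same number of images, which enlarges the pool of tasks a meta-learner can sample from.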
arXiv Detail & Related papers (2020-10-14T13:48:22Z)
- Learning Test-time Augmentation for Content-based Image Retrieval [42.188013259368766]
Off-the-shelf convolutional neural network features achieve outstanding results in many image retrieval tasks.
Existing image retrieval approaches require fine-tuning or modification of pre-trained networks to adapt to variations unique to the target data.
Our method enhances the invariance of off-the-shelf features by aggregating features extracted from images augmented at test-time, with augmentations guided by a policy learned through reinforcement learning.
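The feature-aggregation step can be sketched simply: extract features from several augmented views and average them into one descriptor. This sketch assumes a fixed, hand-picked augmentation list (the paper learns the augmentation policy with reinforcement learning, which is omitted) and a random projection standing in for an off-the-shelf CNN extractor:

```python
import numpy as np

rng = np.random.default_rng(0)
PROJ = rng.normal(size=(64, 8))   # frozen stand-in for CNN feature weights

def extract_features(image):
    # Stand-in for off-the-shelf CNN features: a fixed linear projection.
    return image.reshape(-1) @ PROJ

def tta_descriptor(image, augmentations):
    # Average features over test-time augmented views, then L2-normalize,
    # yielding a descriptor that is more invariant to those transforms.
    feats = np.stack([extract_features(aug(image)) for aug in augmentations])
    desc = feats.mean(axis=0)
    return desc / np.linalg.norm(desc)

image = rng.normal(size=(8, 8))
augs = [
    lambda im: im,                      # identity
    lambda im: np.fliplr(im),           # horizontal flip
    lambda im: np.roll(im, 1, axis=1),  # small translation
]
desc = tta_descriptor(image, augs)
```

Because aggregation happens purely at test time on extracted features, the pre-trained network itself needs no fine-tuning, which is the point the summary above makes.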
arXiv Detail & Related papers (2020-02-05T05:08:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.