KeepAugment: A Simple Information-Preserving Data Augmentation Approach
- URL: http://arxiv.org/abs/2011.11778v1
- Date: Mon, 23 Nov 2020 22:43:04 GMT
- Title: KeepAugment: A Simple Information-Preserving Data Augmentation Approach
- Authors: Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu
- Abstract summary: We propose a simple yet highly effective approach, dubbed KeepAugment, to increase the fidelity of augmented images.
The idea is first to use the saliency map to detect important regions on the original images and then preserve these informative regions during augmentation.
Empirically, we demonstrate that our method significantly improves on a number of prior data augmentation schemes.
- Score: 42.164438736772134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation (DA) is an essential technique for training
state-of-the-art deep learning systems. In this paper, we empirically show data
augmentation might introduce noisy augmented examples and consequently hurt the
performance on unaugmented data during inference. To alleviate this issue, we
propose a simple yet highly effective approach, dubbed \emph{KeepAugment}, to
increase the fidelity of augmented images. The idea is first to use the saliency map
to detect important regions on the original images and then preserve these
informative regions during augmentation. This information-preserving strategy
allows us to generate more faithful training examples. Empirically, we
demonstrate our method significantly improves on a number of prior data
augmentation schemes, e.g., AutoAugment, Cutout, and random erasing, achieving
promising results on image classification, semi-supervised image
classification, multi-view multi-camera tracking and object detection.
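The saliency-guided preservation step can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation: the saliency map is assumed to be precomputed (the paper derives it from gradients of the loss with respect to the input), and a Cutout-style square is simply rejected when it would erase too large a fraction of the total saliency. The function name, threshold `tau`, and retry loop are illustrative assumptions.

```python
import numpy as np

def keep_cutout(image, saliency, length=8, tau=0.1, max_tries=10, rng=None):
    """Cutout that avoids high-saliency regions (a sketch of the
    KeepAugment idea). `saliency` is an HxW non-negative importance
    map; a candidate square is rejected if the fraction of total
    saliency it covers exceeds `tau`."""
    rng = rng or np.random.default_rng(0)
    h, w = saliency.shape
    total = saliency.sum() + 1e-12
    out = image.copy()
    for _ in range(max_tries):
        y = rng.integers(0, h - length + 1)
        x = rng.integers(0, w - length + 1)
        covered = saliency[y:y + length, x:x + length].sum() / total
        if covered <= tau:
            out[y:y + length, x:x + length] = 0  # apply the cut
            return out
    return out  # every candidate was too important: leave image unchanged
```

With a uniform saliency map and `tau=0`, every candidate square is rejected and the image passes through unchanged, which is exactly the information-preserving behavior the abstract describes.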
Related papers
- GeNIe: Generative Hard Negative Images Through Diffusion [16.619150568764262]
Recent advances in generative AI have enabled more sophisticated augmentation techniques that produce data resembling natural images.
We introduce GeNIe, a novel augmentation method which leverages a latent diffusion model conditioned on a text prompt to generate challenging augmentations.
Our experiments demonstrate the effectiveness of our novel augmentation method and its superior performance over the prior art.
arXiv Detail & Related papers (2023-12-05T07:34:30Z) - Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
arXiv Detail & Related papers (2023-02-07T20:42:28Z) - Local Magnification for Data and Feature Augmentation [53.04028225837681]
We propose an easy-to-implement and model-free data augmentation method called Local Magnification (LOMA).
LOMA generates additional training data by randomly magnifying a local area of the image.
Experiments show that our proposed LOMA, though straightforward, can be combined with standard data augmentation to significantly improve the performance on image classification and object detection.
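The local-magnification operation described above can be sketched roughly as follows. This is an assumption of the mechanism inferred from the abstract, not the LOMA authors' code: a random patch is chosen, its center is zoomed by a nearest-neighbor upscale, and the magnified crop is written back in place.

```python
import numpy as np

def local_magnify(image, size=8, scale=2, rng=None):
    """Sketch of LOMA-style local magnification: zoom into the
    centre of a randomly chosen patch and write the magnified
    crop back over the patch."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    crop = image[y:y + size, x:x + size]
    inner = size // scale          # side of the central sub-patch
    off = (size - inner) // 2
    centre = crop[off:off + inner, off:off + inner]
    # nearest-neighbour upscale back to the original patch size
    magnified = np.repeat(np.repeat(centre, scale, axis=0), scale, axis=1)
    out = image.copy()
    out[y:y + size, x:x + size] = magnified[:size, :size]
    return out
```

Because the operation only rearranges pixels already present in the image, it is model-free and cheap, consistent with how the summary characterizes LOMA.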
arXiv Detail & Related papers (2022-11-15T02:51:59Z) - Masked Autoencoders are Robust Data Augmentors [90.34825840657774]
Regularization techniques like image augmentation are necessary for deep neural networks to generalize well.
We propose a novel perspective of augmentation to regularize the training process.
We show that utilizing model-based nonlinear transformations as data augmentation can improve high-level recognition tasks.
arXiv Detail & Related papers (2022-06-10T02:41:48Z) - TeachAugment: Data Augmentation Optimization Using Teacher Knowledge [11.696069523681178]
We propose a data augmentation optimization method, called TeachAugment, based on an adversarial strategy.
We show that TeachAugment outperforms existing methods in experiments of image classification, semantic segmentation, and unsupervised representation learning tasks.
arXiv Detail & Related papers (2022-02-25T06:22:51Z) - Survey: Image Mixing and Deleting for Data Augmentation [0.0]
Image mixing and deleting is a sub-area of data augmentation.
Models trained with this approach have been shown to perform and generalize well.
Owing to its low compute cost and recent success, many image mixing and deleting techniques have been proposed.
arXiv Detail & Related papers (2021-06-13T20:32:24Z) - InAugment: Improving Classifiers via Internal Augmentation [14.281619356571724]
We present a novel augmentation operation that exploits internal image statistics.
We show improvement over state-of-the-art augmentation techniques.
We also demonstrate improved top-1 accuracy for ResNet50 and EfficientNet-B3 on the ImageNet dataset.
arXiv Detail & Related papers (2021-04-08T15:37:21Z) - Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to this problem include semi-supervised learning methods that exploit unlabeled data alongside labeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z) - GridMask Data Augmentation [76.79300104795966]
We propose a novel data augmentation method, GridMask, in this paper.
It utilizes information removal to achieve state-of-the-art results in a variety of computer vision tasks.
arXiv Detail & Related papers (2020-01-13T07:27:05Z)
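The information-removal idea behind GridMask can be sketched as below. This is a rough illustration inferred from the summary, not the GridMask authors' implementation: a regular grid of square blocks is zeroed out, with the grid period, masked fraction, and offsets as illustrative parameters.

```python
import numpy as np

def grid_mask(image, unit=8, ratio=0.5, ox=0, oy=0):
    """Sketch of GridMask-style information removal: zero out a
    regular grid of square blocks. `unit` is the grid period,
    `ratio` the masked fraction of each period along each axis,
    and (ox, oy) the grid offset."""
    h, w = image.shape[:2]
    hole = int(unit * ratio)
    mask = np.ones((h, w), dtype=image.dtype)
    for y in range(oy, h, unit):
        for x in range(ox, w, unit):
            mask[y:y + hole, x:x + hole] = 0
    return image * mask
```

Unlike a single Cutout square, the grid spreads the removed information across the image, which is the structural property that lets this family of methods regularize without deleting any one object wholesale.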
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.