InAugment: Improving Classifiers via Internal Augmentation
- URL: http://arxiv.org/abs/2104.03843v1
- Date: Thu, 8 Apr 2021 15:37:21 GMT
- Title: InAugment: Improving Classifiers via Internal Augmentation
- Authors: Moab Arar, Ariel Shamir, Amit Bermano
- Abstract summary: We present a novel augmentation operation that exploits image internal statistics.
We show improvement over state-of-the-art augmentation techniques.
We also demonstrate an increase in top-1 accuracy for ResNet50 and EfficientNet-B3 on the ImageNet dataset.
- Score: 14.281619356571724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image augmentation techniques apply transformation functions such as
rotation, shearing, or color distortion on an input image. These augmentations
were proven useful in improving neural networks' generalization ability. In
this paper, we present a novel augmentation operation, InAugment, that exploits
image internal statistics. The key idea is to copy patches from the image
itself, apply augmentation operations on them, and paste them back at random
positions on the same image. This method is simple and easy to implement and
can be incorporated with existing augmentation techniques. We test InAugment on
two popular datasets -- CIFAR and ImageNet. We show improvement over
state-of-the-art augmentation techniques. Incorporating InAugment with Auto
Augment yields a significant improvement over other augmentation techniques
(e.g., +1% improvement over multiple architectures trained on the CIFAR
dataset). We also demonstrate an increase in top-1 accuracy for ResNet50 and
EfficientNet-B3 on the ImageNet dataset compared to prior augmentation
methods. Finally, our experiments suggest that training a convolutional neural
network with InAugment improves not only the model's accuracy and confidence
but also its performance on out-of-distribution images.
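The copy-augment-paste idea described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the patch size, the single flip transform, and the uniform placement are assumed defaults for demonstration.

```python
import numpy as np

def inaugment(image, patch_frac=0.3, rng=None):
    """Minimal InAugment-style sketch: copy a patch from the image itself,
    transform it, and paste it back at a random position on the same image.
    patch_frac and the horizontal-flip transform are illustrative choices."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    ph, pw = max(1, int(h * patch_frac)), max(1, int(w * patch_frac))
    # Copy a random patch from the image itself.
    y0 = rng.integers(0, h - ph + 1)
    x0 = rng.integers(0, w - pw + 1)
    patch = image[y0:y0 + ph, x0:x0 + pw].copy()
    # Apply a simple augmentation to the patch (horizontal flip here).
    patch = patch[:, ::-1]
    # Paste the augmented patch at a random position on the same image.
    out = image.copy()
    y1 = rng.integers(0, h - ph + 1)
    x1 = rng.integers(0, w - pw + 1)
    out[y1:y1 + ph, x1:x1 + pw] = patch
    return out
```

Because the operation only rearranges content already present in the image, it composes naturally with standard pipelines such as AutoAugment, which is how the paper reports its strongest results.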
Related papers
- Image edge enhancement for effective image classification [7.470763273994321]
We propose an edge enhancement-based method to enhance both accuracy and training speed of neural networks.
Our approach involves extracting high frequency features, such as edges, from images within the available dataset and fusing them with the original images.
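The high-frequency extraction and fusion described above can be sketched with a Laplacian-style high-pass filter; the kernel, the grayscale input, and the blending weight `alpha` are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def edge_fuse(image, alpha=0.5):
    """Sketch of edge-enhancement fusion: extract high-frequency content
    (edges) with a Laplacian-style kernel and blend it back into the
    original grayscale image. Kernel and alpha are illustrative choices."""
    kernel = np.array([[0, -1, 0],
                       [-1, 4, -1],
                       [0, -1, 0]], dtype=np.float32)
    img = image.astype(np.float32)
    pad = np.pad(img, 1, mode="edge")
    # Correlate the image with the high-pass kernel.
    edges = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            edges += kernel[dy, dx] * pad[dy:dy + img.shape[0],
                                          dx:dx + img.shape[1]]
    # Fuse the high-frequency features with the original image.
    fused = np.clip(img + alpha * edges, 0, 255)
    return fused.astype(image.dtype)
```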
arXiv Detail & Related papers (2024-01-13T10:01:34Z) - DualAug: Exploiting Additional Heavy Augmentation with OOD Data Rejection [77.6648187359111]
We propose a novel data augmentation method, named DualAug, to keep the augmentation in distribution as much as possible at a reasonable time and computational cost.
Experiments on supervised image classification benchmarks show that DualAug improves various automated data augmentation methods.
arXiv Detail & Related papers (2023-10-12T08:55:10Z) - Soft Augmentation for Image Classification [68.71067594724663]
We propose generalizing augmentation with invariant transforms to soft augmentation.
We show that soft targets allow for more aggressive data augmentation.
We also show that soft augmentations generalize to self-supervised classification tasks.
arXiv Detail & Related papers (2022-11-09T01:04:06Z) - SAGE: Saliency-Guided Mixup with Optimal Rearrangements [22.112463794733188]
Saliency-Guided Mixup with Optimal Rearrangements (SAGE)
SAGE creates new training examples by rearranging and mixing image pairs using visual saliency as guidance.
We demonstrate on CIFAR-10 and CIFAR-100 that SAGE achieves better or comparable performance to the state of the art while being more efficient.
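A saliency-weighted blend of an image pair, in the spirit of the summary above, might look as follows. Note the gradient-magnitude saliency proxy and the per-pixel blending rule are stand-in assumptions; SAGE itself computes optimal rearrangements, which this sketch does not attempt.

```python
import numpy as np

def saliency_mix(img_a, img_b, eps=1e-8):
    """Hedged sketch of saliency-guided mixing: weight a per-pixel blend
    of two same-sized images by a gradient-magnitude saliency map of the
    first. This is an illustrative proxy, not SAGE's rearrangement step."""
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    # Gradient-magnitude saliency proxy for image a.
    gray = a.mean(axis=-1) if a.ndim == 3 else a
    gy, gx = np.gradient(gray)
    sal = np.sqrt(gx ** 2 + gy ** 2)
    w = sal / (sal.max() + eps)  # normalize weights to [0, 1]
    if a.ndim == 3:
        w = w[..., None]
    # Keep salient pixels of a, fill the rest from b.
    mixed = w * a + (1 - w) * b
    return mixed.astype(img_a.dtype)
```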
arXiv Detail & Related papers (2022-10-31T19:45:21Z) - Masked Autoencoders are Robust Data Augmentors [90.34825840657774]
Regularization techniques like image augmentation are necessary for deep neural networks to generalize well.
We propose a novel perspective of augmentation to regularize the training process.
We show that utilizing such a model-based nonlinear transformation as data augmentation can improve high-level recognition tasks.
arXiv Detail & Related papers (2022-06-10T02:41:48Z) - Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles on the data level by adding images generated by combining fourteen augmentation approaches.
Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
arXiv Detail & Related papers (2022-01-24T14:12:29Z) - Augmentation Pathways Network for Visual Recognition [61.33084317147437]
This paper introduces Augmentation Pathways (AP) to stabilize training on a much wider range of augmentation policies.
AP tames heavy data augmentations and stably boosts performance without a careful selection among augmentation policies.
Experimental results on ImageNet benchmarks demonstrate the compatibility and effectiveness on a much wider range of augmentations.
arXiv Detail & Related papers (2021-07-26T06:54:53Z) - Augmentation Inside the Network [1.5260179407438161]
We present augmentation inside the network, a method that simulates data augmentation techniques for computer vision problems.
We validate our method on the ImageNet-2012 and CIFAR-100 datasets for image classification.
arXiv Detail & Related papers (2020-12-19T20:07:03Z) - FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning [64.32306537419498]
We propose a novel learned feature-based refinement and augmentation method that produces a varied set of complex transformations.
These transformations also use information from both within-class and across-class representations that we extract through clustering.
We demonstrate that our method is comparable to the current state of the art for smaller datasets while being able to scale up to larger datasets.
arXiv Detail & Related papers (2020-07-16T17:55:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.