Augmentation Inside the Network
- URL: http://arxiv.org/abs/2012.10769v2
- Date: Fri, 23 Jun 2023 18:37:27 GMT
- Title: Augmentation Inside the Network
- Authors: Maciej Sypetkowski, Jakub Jasiulewicz, Zbigniew Wojna
- Abstract summary: We present augmentation inside the network, a method that simulates data augmentation techniques for computer vision problems on intermediate features of a convolutional neural network.
We validate our method on the ImageNet-2012 and CIFAR-100 datasets for image classification.
- Score: 1.5260179407438161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present augmentation inside the network, a method that
simulates data augmentation techniques for computer vision problems on
intermediate features of a convolutional neural network. We perform these
transformations by changing the data flow through the network and sharing
common computations where possible. Our method allows smoother adjustment of
the speed-accuracy trade-off and achieves better results than standard
test-time augmentation (TTA) techniques. Additionally, our approach can
improve model performance even further when coupled with test-time
augmentation. We validate our method on the ImageNet-2012 and CIFAR-100
datasets for image classification. We propose a modification that is 30%
faster than flip test-time augmentation and matches its results on CIFAR-100.
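A minimal sketch of the core idea, assuming a horizontal-flip transform applied to intermediate feature maps in PyTorch. The stem/head split point, the module names, and the averaging of the two branches are illustrative assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class InNetworkFlipAug(nn.Module):
    """Run the early layers once, then simulate a horizontal-flip augmentation
    on the intermediate features, sharing the early computation between the
    original and the flipped branch (a sketch of 'augmentation inside the
    network', not the paper's exact method)."""

    def __init__(self, stem: nn.Module, head: nn.Module):
        super().__init__()
        self.stem = stem  # layers computed once (shared computation)
        self.head = head  # layers computed per augmented branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.stem(x)                   # single shared forward pass
        flipped = torch.flip(feats, dims=[3])  # flip along the width axis of (N, C, H, W)
        logits = self.head(feats)
        logits_flipped = self.head(flipped)
        return (logits + logits_flipped) / 2   # average the two branches

# Hypothetical usage with a toy network; shapes and layers are illustrative.
stem = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 100))
model = InNetworkFlipAug(stem, head)
out = model(torch.randn(2, 3, 32, 32))  # -> logits of shape (2, 100)
```

Standard flip TTA would run the entire network twice, once on the input and once on its mirror image; moving the transform to an intermediate layer and sharing the stem is what the abstract describes as sharing common computations and adjusting the speed-accuracy trade-off.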
Related papers
- Image edge enhancement for effective image classification [7.470763273994321]
We propose an edge enhancement-based method to improve both the accuracy and the training speed of neural networks.
Our approach involves extracting high-frequency features, such as edges, from images in the available dataset and fusing them with the original images.
arXiv Detail & Related papers (2024-01-13T10:01:34Z)
- Sorted Convolutional Network for Achieving Continuous Rotational Invariance [56.42518353373004]
We propose a Sorting Convolution (SC) inspired by some hand-crafted features of texture images.
SC achieves continuous rotational invariance without requiring additional learnable parameters or data augmentation.
Our results demonstrate that SC achieves the best performance in the aforementioned tasks.
arXiv Detail & Related papers (2023-05-23T18:37:07Z)
- Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles at the data level by adding images generated by combining fourteen augmentation approaches.
Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
arXiv Detail & Related papers (2022-01-24T14:12:29Z)
- InAugment: Improving Classifiers via Internal Augmentation [14.281619356571724]
We present a novel augmentation operation that exploits internal image statistics.
We show improvement over state-of-the-art augmentation techniques.
We also demonstrate an increase in top-1 accuracy for ResNet50 and EfficientNet-B3 on the ImageNet dataset.
arXiv Detail & Related papers (2021-04-08T15:37:21Z)
- Learning Representational Invariances for Data-Efficient Action Recognition [52.23716087656834]
We show that our data augmentation strategy leads to promising performance on the Kinetics-100, UCF-101, and HMDB-51 datasets.
We also validate our data augmentation strategy in the fully supervised setting and demonstrate improved performance.
arXiv Detail & Related papers (2021-03-30T17:59:49Z)
- Fusion of CNNs and statistical indicators to improve image classification [65.51757376525798]
Convolutional Networks have dominated the field of computer vision for the last ten years.
The main strategy to prolong this trend relies on further scaling up network size.
We hypothesise that adding heterogeneous sources of information may be more cost-effective for a CNN than building a bigger network.
arXiv Detail & Related papers (2020-12-20T23:24:31Z)
- Fast Fourier Transformation for Optimizing Convolutional Neural Networks in Object Recognition [1.0499611180329802]
This paper proposes to use a Fast Fourier Transformation-based U-Net (a refined fully convolutional network) to perform image convolution in neural networks.
We implement the FFT-based convolutional neural network to improve the training time of the network.
Our model demonstrated an improvement in training time during convolution from 600-700 ms/step to 400-500 ms/step.
arXiv Detail & Related papers (2020-10-08T21:07:55Z)
- FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning [64.32306537419498]
We propose a novel learned feature-based refinement and augmentation method that produces a varied set of complex transformations.
These transformations also use information from both within-class and across-class representations that we extract through clustering.
We demonstrate that our method is comparable to the current state of the art on smaller datasets while being able to scale up to larger datasets.
arXiv Detail & Related papers (2020-07-16T17:55:31Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
- On the Generalization Effects of Linear Transformations in Data Augmentation [32.01435459892255]
Data augmentation is a powerful technique to improve performance in applications such as image and text classification tasks.
We study a family of linear transformations and their effects on the ridge estimator in an over-parametrized linear regression setting.
We propose an augmentation scheme that searches over the space of transformations by how uncertain the model is about the transformed data (a rough sketch of this idea follows below).
arXiv Detail & Related papers (2020-05-02T04:10:21Z)
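As referenced in the last entry above, a rough sketch of uncertainty-driven selection over candidate transformations. The candidate set, the use of cross-entropy loss as the uncertainty proxy, and the batch-level scoring are illustrative assumptions, not that paper's exact scheme:

```python
import torch
import torch.nn.functional as F

def pick_most_uncertain_transform(model, images, labels, transforms):
    """Score each candidate transformation by how high the model's loss is on
    the transformed batch, and return the highest-scoring (most uncertain) one."""
    model.eval()
    best_t, best_score = None, float("-inf")
    with torch.no_grad():
        for t in transforms:
            out = model(t(images))
            score = F.cross_entropy(out, labels).item()  # uncertainty proxy
            if score > best_score:
                best_t, best_score = t, score
    return best_t

# Hypothetical candidate transformations (simple geometric/linear examples).
candidates = [
    lambda x: torch.flip(x, dims=[3]),          # horizontal flip
    lambda x: torch.roll(x, shifts=4, dims=3),  # small horizontal shift
    lambda x: x + 0.05 * torch.randn_like(x),   # additive Gaussian noise
]
```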
This list is automatically generated from the titles and abstracts of the papers on this site.