Random Padding Data Augmentation
- URL: http://arxiv.org/abs/2302.08682v1
- Date: Fri, 17 Feb 2023 04:15:33 GMT
- Title: Random Padding Data Augmentation
- Authors: Nan Yang, Laicheng Zhong, Fan Huang, Dong Yuan and Wei Bao
- Abstract summary: A convolutional neural network (CNN) learns the same object in different positions in images.
The usefulness of the features' spatial information in CNNs has not been well investigated.
We introduce Random Padding, a new type of padding method for training CNNs.
- Score: 23.70951896315126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The convolutional neural network (CNN) learns the same object in different
positions in images, which can improve the recognition accuracy of the model.
An implication of this is that a CNN may know where the object is. The usefulness
of the features' spatial information in CNNs has not been well investigated. In
this paper, we found that the model's learning of the features' position
information hindered its learning of the relationships between features. Therefore, we
introduced Random Padding, a new type of padding method for training CNNs that
impairs the architecture's capacity to learn position information by adding
zero-padding randomly to half of the border of feature maps. Random Padding is
parameter-free, simple to construct, and compatible with the majority of
CNN-based recognition models. This technique is also complementary to data
augmentations such as random cropping, rotation, flipping and erasing, and
consistently improves the performance of image classification over strong
baselines.
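Since the abstract describes the padding rule only at a high level, a minimal sketch may help. The function below pads a 2D feature map on two randomly chosen borders (all vertical padding on the top or the bottom, all horizontal padding on the left or the right), so the total padding matches symmetric zero-padding while the object's absolute position shifts from step to step. The function name and the exact placement rule are illustrative assumptions, not the paper's reference implementation.

```python
import random

def random_padding(fmap, pad=1):
    """Zero-pad a 2D feature map (list of lists) on two randomly
    chosen borders.  Compared with symmetric padding of `pad` on all
    four sides, the same total amount of padding is used, but its
    placement is randomised, so absolute-position cues in the padded
    map become unreliable (a sketch of the Random Padding idea)."""
    h, w = len(fmap), len(fmap[0])
    top = random.choice([0, 2 * pad])    # all vertical padding on top or bottom
    left = random.choice([0, 2 * pad])   # all horizontal padding on left or right
    bottom = 2 * pad - top
    right = 2 * pad - left
    out = []
    for _ in range(top):
        out.append([0] * (w + 2 * pad))
    for row in fmap:
        out.append([0] * left + list(row) + [0] * right)
    for _ in range(bottom):
        out.append([0] * (w + 2 * pad))
    return out
```

The output always has the same shape as symmetrically padded input, so such a layer could in principle drop in wherever fixed zero-padding is used during training.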
Related papers
- Fuzzy Convolution Neural Networks for Tabular Data Classification [0.0]
Convolutional neural networks (CNNs) have attracted a great deal of attention due to their remarkable performance in various domains.
In this paper, we propose a novel framework fuzzy convolution neural network (FCNN) tailored specifically for tabular data.
arXiv Detail & Related papers (2024-06-04T20:33:35Z)
- A novel feature-scrambling approach reveals the capacity of convolutional neural networks to learn spatial relations [0.0]
Convolutional neural networks (CNNs) are one of the most successful computer vision systems to solve object recognition.
Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from humans.
arXiv Detail & Related papers (2022-12-12T16:40:29Z)
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experiment results show the high generalization performance of our method on testing data that are composed of unseen contexts.
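The region-wise mixing described in this entry can be sketched as follows; note that the binary mask and the two mixing ratios are illustrative assumptions, since the paper derives the discriminative and noise-prone regions from the images themselves rather than taking a mask as input.

```python
def decoupled_mix(img_a, img_b, mask_a, lam_disc=0.7, lam_noise=0.3):
    """Blend two images (2D lists) with different mixing ratios for
    the pixels `mask_a` marks as discriminative (1) versus
    noise-prone (0) -- a rough sketch of heterogeneous, region-wise
    mixing in the spirit of Decoupled-Mixup."""
    out = []
    for ra, rb, rm in zip(img_a, img_b, mask_a):
        row = []
        for a, b, m in zip(ra, rb, rm):
            lam = lam_disc if m else lam_noise
            row.append(lam * a + (1 - lam) * b)
        out.append(row)
    return out
```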
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
- What Can Be Learnt With Wide Convolutional Neural Networks? [69.55323565255631]
We study infinitely-wide deep CNNs in the kernel regime.
We prove that deep CNNs adapt to the spatial scale of the target function.
We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN.
arXiv Detail & Related papers (2022-08-01T17:19:32Z)
- Learning sparse features can lead to overfitting in neural networks [9.2104922520782]
We show that feature learning can perform worse than lazy training.
Although sparsity is known to be essential for learning anisotropic data, it is detrimental when the target function is constant or smooth.
arXiv Detail & Related papers (2022-06-24T14:26:33Z)
- Keypoint Message Passing for Video-based Person Re-Identification [106.41022426556776]
Video-based person re-identification (re-ID) is an important technique in visual surveillance systems which aims to match video snippets of people captured by different cameras.
Existing methods are mostly based on convolutional neural networks (CNNs), whose building blocks either process local neighbor pixels at a time, or, when 3D convolutions are used to model temporal information, suffer from the misalignment problem caused by person movement.
In this paper, we propose to overcome the limitations of normal convolutions with a human-oriented graph method. Specifically, features located at person joint keypoints are extracted and connected as a spatial-temporal graph.
arXiv Detail & Related papers (2021-11-16T08:01:16Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Exploring the Interchangeability of CNN Embedding Spaces [0.5735035463793008]
We map between 10 image-classification CNNs and between 4 facial-recognition CNNs.
For CNNs trained to the same classes and sharing a common backend-logit architecture, a linear mapping can always be calculated directly from the backend layer weights.
The implications are far-reaching, suggesting an underlying commonality between representations learned by networks designed and trained for a common task.
arXiv Detail & Related papers (2020-10-05T20:32:40Z)
- Informative Dropout for Robust Representation Learning: A Shape-bias Perspective [84.30946377024297]
We propose a light-weight model-agnostic method, namely Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias.
Specifically, we discriminate texture from shape based on local self-information in an image, and adopt a Dropout-like algorithm to decorrelate the model output from the local texture.
arXiv Detail & Related papers (2020-08-10T16:52:24Z)
- Teaching CNNs to mimic Human Visual Cognitive Process & regularise Texture-Shape bias [18.003188982585737]
Recent experiments in computer vision identify texture bias as a primary reason for the strong results of models employing Convolutional Neural Networks (CNNs).
It is believed that the cost function forces the CNN to take a greedy approach and develop a proclivity for local information like texture to increase accuracy, thus failing to explore any global statistics.
We propose CognitiveCNN, a new intuitive architecture, inspired from feature integration theory in psychology to utilise human interpretable feature like shape, texture, edges etc. to reconstruct, and classify the image.
arXiv Detail & Related papers (2020-06-25T22:32:54Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
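The annealed smoothing in the last entry above can be sketched with a simple low-pass filter whose strength decays over training; the 3x3 box filter, the linear schedule, and the function names are illustrative assumptions, since the paper applies its anti-aliasing filters inside the network rather than to a standalone map.

```python
def box_blur3(fmap):
    """3x3 box filter (a simple low-pass filter) over a 2D list,
    treating out-of-bounds neighbours as zero."""
    h, w = len(fmap), len(fmap[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += fmap[ii][jj]
            out[i][j] = acc / 9.0
    return out

def curriculum_smooth(fmap, epoch, total_epochs):
    """Blend a feature map with its low-pass-filtered version.
    At epoch 0 the map is fully smoothed; the smoothing strength
    decays linearly to zero, so later in training the network sees
    progressively more high-frequency detail -- a rough analogue of
    the annealed anti-aliasing curriculum."""
    strength = max(0.0, 1.0 - epoch / total_epochs)  # 1 -> 0 over training
    blurred = box_blur3(fmap)
    return [[(1 - strength) * x + strength * b
             for x, b in zip(row, brow)]
            for row, brow in zip(fmap, blurred)]
```

By the final epoch the blend weight reaches zero, so the network trains on the unmodified features, matching the intuition that the curriculum withholds high-frequency information only early on.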
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.