Does enhanced shape bias improve neural network robustness to common
corruptions?
- URL: http://arxiv.org/abs/2104.09789v1
- Date: Tue, 20 Apr 2021 07:06:53 GMT
- Title: Does enhanced shape bias improve neural network robustness to common
corruptions?
- Authors: Chaithanya Kumar Mummadi, Ranjitha Subramaniam, Robin Hutmacher,
Julien Vitay, Volker Fischer, Jan Hendrik Metzen
- Abstract summary: Recent work indicates that CNNs trained on ImageNet are biased towards features that encode textures.
It has been shown that augmenting the training data with different image styles decreases this texture bias in favor of increased shape bias.
We perform a systematic study of different ways of composing inputs based on natural images, explicit edge information, and stylization.
- Score: 14.607217936005817
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Convolutional neural networks (CNNs) learn to extract representations of
complex features, such as object shapes and textures to solve image recognition
tasks. Recent work indicates that CNNs trained on ImageNet are biased towards
features that encode textures and that these alone are sufficient to generalize
to unseen test data from the same distribution as the training data but often
fail to generalize to out-of-distribution data. It has been shown that
augmenting the training data with different image styles decreases this texture
bias in favor of increased shape bias while at the same time improving
robustness to common corruptions, such as noise and blur. Commonly, this is
interpreted as shape bias increasing corruption robustness. However, this
relationship is only hypothesized. We perform a systematic study of different
ways of composing inputs based on natural images, explicit edge information,
and stylization. While stylization is essential for achieving high corruption
robustness, we do not find a clear correlation between shape bias and
robustness. We conclude that the data augmentation caused by style-variation
accounts for the improved corruption robustness and increased shape bias is
only a byproduct.
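For context, shape bias in this line of work is usually measured on cue-conflict images whose shape and texture come from different classes. The sketch below is not the authors' code (function and variable names are illustrative); it computes the standard metric: among predictions that match either cue, the fraction that side with the shape label.

```python
# Hedged sketch of the standard cue-conflict shape-bias metric (not the paper's code).
import torch

def shape_bias(logits: torch.Tensor,
               shape_labels: torch.Tensor,
               texture_labels: torch.Tensor) -> float:
    """logits: (N, num_classes); labels: (N,), with shape and texture classes differing."""
    preds = logits.argmax(dim=1)
    shape_hits = preds == shape_labels
    texture_hits = preds == texture_labels
    cue_decisions = shape_hits | texture_hits      # predictions explained by either cue
    if cue_decisions.sum() == 0:
        return float("nan")
    return (shape_hits.sum() / cue_decisions.sum()).item()

# Toy usage with random predictions (numbers are illustrative only)
logits = torch.randn(8, 10)
shape_y = torch.randint(0, 10, (8,))
texture_y = (shape_y + 1) % 10                     # force conflicting cue labels
print(shape_bias(logits, shape_y, texture_y))
```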
Related papers
- Emergence of Shape Bias in Convolutional Neural Networks through
Activation Sparsity [8.54598311798543]
Current deep-learning models for object recognition are heavily biased toward texture.
In contrast, human visual systems are known to be biased toward shape and structure.
We show that sparse coding, a ubiquitous principle in the brain, can in itself introduce shape bias into the network.
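The summary does not specify the sparsification mechanism; a common way to impose activation sparsity in a CNN is a channel-wise Top-K layer, sketched below as an illustrative assumption rather than the paper's exact operator.

```python
# Hedged sketch: channel-wise Top-K activation sparsity (the paper's operator may differ).
import torch
import torch.nn as nn

class TopKSparsify(nn.Module):
    """Keep only the k largest channel activations at each spatial location."""
    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (N, C, H, W)
        kth = x.topk(self.k, dim=1).values[:, -1:, :, :]      # k-th largest value per location
        return x * (x >= kth)                                 # zero out everything smaller

layer = TopKSparsify(k=4)
out = layer(torch.randn(2, 64, 8, 8))   # roughly 4 of 64 channels survive at each location
```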
arXiv Detail & Related papers (2023-10-29T04:07:52Z)
- Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions [48.34142457385199]
We present MUFIA, an algorithm designed to identify the specific types of corruptions that can cause models to fail.
We find that even state-of-the-art models trained to be robust against known common corruptions struggle against the low visibility-based corruptions crafted by MUFIA.
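MUFIA's search procedure is not reproduced here; the sketch below only illustrates the kind of frequency-domain corruption it targets, attenuating a radial band of an image's spectrum (band edges and gain are arbitrary illustrative parameters).

```python
# Hedged sketch of a frequency-band corruption, not the MUFIA algorithm itself.
import torch

def band_attenuate(img: torch.Tensor, r_lo: float, r_hi: float, gain: float = 0.1) -> torch.Tensor:
    """img: (C, H, W) in [0, 1]; scale spectrum radii in [r_lo, r_hi] by `gain`."""
    _, H, W = img.shape
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    yy = torch.linspace(-0.5, 0.5, H).view(H, 1).expand(H, W)
    xx = torch.linspace(-0.5, 0.5, W).view(1, W).expand(H, W)
    radius = (xx ** 2 + yy ** 2).sqrt()
    mask = torch.where((radius >= r_lo) & (radius <= r_hi),
                       torch.full_like(radius, gain),
                       torch.ones_like(radius))
    spec = spec * mask                                        # complex * real broadcasts fine
    out = torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real
    return out.clamp(0.0, 1.0)

corrupted = band_attenuate(torch.rand(3, 32, 32), r_lo=0.05, r_hi=0.2)
```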
arXiv Detail & Related papers (2023-06-12T15:19:13Z)
- Robustness and invariance properties of image classifiers [8.970032486260695]
Deep neural networks have achieved impressive results in many image classification tasks.
However, deep networks are not robust to a large variety of semantic-preserving image modifications.
The poor robustness of image classifiers to small data distribution shifts raises serious concerns regarding their trustworthiness.
arXiv Detail & Related papers (2022-08-30T11:00:59Z)
- Data Generation using Texture Co-occurrence and Spatial Self-Similarity for Debiasing [6.976822832216875]
We propose a novel de-biasing approach that explicitly generates additional images using texture representations of oppositely labeled images.
Every new generated image contains similar spatial information from a source image while transferring textures from a target image of opposite label.
Our model integrates a texture co-occurrence loss that determines whether a generated image's texture is similar to that of the target, and a spatial self-similarity loss that determines whether the spatial details between the generated and source images are well preserved.
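As a rough illustration of the two loss terms, the sketch below uses Gram matrices of CNN features as a stand-in for the texture co-occurrence statistics and a normalized feature self-similarity map for the spatial term; the paper's exact formulations differ.

```python
# Hedged sketch of texture and spatial-similarity losses on feature maps.
import torch
import torch.nn.functional as F

def gram(feat: torch.Tensor) -> torch.Tensor:                 # feat: (N, C, H, W)
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)                # (N, C, C) texture statistics

def texture_loss(gen_feat: torch.Tensor, target_feat: torch.Tensor) -> torch.Tensor:
    return F.mse_loss(gram(gen_feat), gram(target_feat))

def self_similarity(feat: torch.Tensor) -> torch.Tensor:
    n, c, h, w = feat.shape
    f = F.normalize(feat.reshape(n, c, h * w), dim=1)
    return f.transpose(1, 2) @ f                              # (N, HW, HW) cosine similarities

def spatial_loss(gen_feat: torch.Tensor, source_feat: torch.Tensor) -> torch.Tensor:
    return F.mse_loss(self_similarity(gen_feat), self_similarity(source_feat))
```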
arXiv Detail & Related papers (2021-10-15T08:04:59Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
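A minimal sketch of such a feature-map TV penalty, assuming it is simply added to the task loss with a weighting factor (the paper's exact scheme may differ):

```python
# Hedged sketch: total variation of an intermediate convolutional feature map.
import torch

def feature_tv(feat: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation of a feature map of shape (N, C, H, W)."""
    dh = (feat[:, :, 1:, :] - feat[:, :, :-1, :]).abs().mean()
    dw = (feat[:, :, :, 1:] - feat[:, :, :, :-1]).abs().mean()
    return dh + dw

# Assumed usage: total_loss = task_loss + tv_weight * feature_tv(intermediate_feature_map)
```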
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with a marginal drop in performance on the original classification task.
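The disentanglement rests on an analytic compositing step: a shape mask blends an object texture with a background, so any of the three can be swapped to form a counterfactual. A minimal sketch follows (the actual CGN generates mask, foreground, and background with separate generator backbones):

```python
# Hedged sketch of the mask-based compositing step behind counterfactual images.
import torch

def composite(mask: torch.Tensor, foreground: torch.Tensor, background: torch.Tensor) -> torch.Tensor:
    """mask: (N, 1, H, W) in [0, 1]; foreground/background: (N, 3, H, W)."""
    return mask * foreground + (1.0 - mask) * background

# Swapping foreground textures or backgrounds across samples yields counterfactual images.
x_cf = composite(torch.rand(4, 1, 64, 64), torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
```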
arXiv Detail & Related papers (2021-01-15T10:23:12Z)
- Shape-Texture Debiased Neural Network Training [50.6178024087048]
Convolutional Neural Networks are often biased towards either texture or shape, depending on the training dataset.
We develop an algorithm for shape-texture debiased learning.
Experiments show that our method successfully improves model performance on several image recognition benchmarks.
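One way to realize such debiased training, sketched below under the assumption that each cue-conflict image carries both a shape label and a texture label, is a weighted sum of two cross-entropy terms; the weighting and image generation in the paper are more elaborate.

```python
# Hedged sketch of dual-label supervision for cue-conflict training images.
import torch
import torch.nn.functional as F

def debiased_loss(logits: torch.Tensor,
                  shape_labels: torch.Tensor,
                  texture_labels: torch.Tensor,
                  shape_weight: float = 0.5) -> torch.Tensor:
    """Supervise a cue-conflict image with both of its labels."""
    loss_shape = F.cross_entropy(logits, shape_labels)
    loss_texture = F.cross_entropy(logits, texture_labels)
    return shape_weight * loss_shape + (1.0 - shape_weight) * loss_texture
```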
arXiv Detail & Related papers (2020-10-12T19:16:12Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
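The core idea is to perturb feature statistics instead of pixels. The sketch below rescales and shifts the channel-wise mean and standard deviation of a feature map with given factors; AdvBN additionally searches for the worst-case factors during training, which is omitted here.

```python
# Hedged sketch: perturbing channel-wise feature statistics rather than image pixels.
import torch

def perturb_feature_stats(feat: torch.Tensor,
                          scale: torch.Tensor,
                          shift: torch.Tensor,
                          eps: float = 1e-5) -> torch.Tensor:
    """feat: (N, C, H, W); scale/shift: (1, C, 1, 1) factors applied to the statistics."""
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.std(dim=(2, 3), keepdim=True) + eps
    normalized = (feat - mean) / std
    return normalized * (std * scale) + mean * shift          # statistics, not pixels, change

feat = torch.randn(2, 64, 8, 8)
out = perturb_feature_stats(feat,
                            scale=torch.full((1, 64, 1, 1), 1.1),
                            shift=torch.full((1, 64, 1, 1), 0.9))
```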
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
- Informative Dropout for Robust Representation Learning: A Shape-bias Perspective [84.30946377024297]
We propose a light-weight model-agnostic method, namely Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias.
Specifically, we discriminate texture from shape based on local self-information in an image, and adopt a Dropout-like algorithm to decorrelate the model output from the local texture.
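A loudly hedged sketch of the dropout step: locations with low local self-information (repetitive, texture-like regions) are dropped with higher probability. Local variance is used below as a crude stand-in for self-information; the paper estimates it from patch similarity to the neighborhood instead.

```python
# Hedged sketch of information-weighted dropout; local variance is only a proxy here.
import torch
import torch.nn.functional as F

def info_drop(feat: torch.Tensor, drop_rate: float = 0.3, kernel: int = 3) -> torch.Tensor:
    """feat: (N, C, H, W); keep probability grows with a crude local-information estimate."""
    energy = feat.pow(2).mean(dim=1, keepdim=True)                      # (N, 1, H, W)
    local_mean = F.avg_pool2d(energy, kernel, stride=1, padding=kernel // 2)
    local_sq = F.avg_pool2d(energy.pow(2), kernel, stride=1, padding=kernel // 2)
    info = (local_sq - local_mean.pow(2)).clamp_min(0.0)                # local variance proxy
    info = info / (info.amax(dim=(2, 3), keepdim=True) + 1e-8)          # normalize to [0, 1]
    keep_prob = (1.0 - drop_rate) + drop_rate * info                    # flat regions dropped more
    mask = torch.bernoulli(keep_prob).expand_as(feat)
    return feat * mask / keep_prob                                      # dropout-style rescaling

out = info_drop(torch.randn(2, 32, 16, 16))
```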
arXiv Detail & Related papers (2020-08-10T16:52:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.