Informative Dropout for Robust Representation Learning: A Shape-bias
Perspective
- URL: http://arxiv.org/abs/2008.04254v1
- Date: Mon, 10 Aug 2020 16:52:24 GMT
- Title: Informative Dropout for Robust Representation Learning: A Shape-bias
Perspective
- Authors: Baifeng Shi, Dinghuai Zhang, Qi Dai, Zhanxing Zhu, Yadong Mu, Jingdong
Wang
- Abstract summary: We propose a light-weight model-agnostic method, namely Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias.
Specifically, we discriminate texture from shape based on local self-information in an image, and adopt a Dropout-like algorithm to decorrelate the model output from the local texture.
- Score: 84.30946377024297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) are known to rely more on local
texture than on global shape when making decisions. Recent work also indicates a
close relationship between CNN's texture-bias and its robustness against
distribution shift, adversarial perturbation, random corruption, etc. In this
work, we attempt to improve various kinds of robustness universally by
alleviating CNN's texture bias. With inspiration from the human visual system,
we propose a light-weight model-agnostic method, namely Informative Dropout
(InfoDrop), to improve interpretability and reduce texture bias. Specifically,
we discriminate texture from shape based on local self-information in an image,
and adopt a Dropout-like algorithm to decorrelate the model output from the
local texture. Through extensive experiments, we observe enhanced robustness
under various scenarios (domain generalization, few-shot classification, image
corruption, and adversarial perturbation). To the best of our knowledge, this
work is one of the earliest attempts to improve different kinds of robustness
in a unified model, shedding new light on the relationship between shape-bias
and robustness, as well as on new approaches to trustworthy machine learning
algorithms. Code is available at https://github.com/bfshi/InfoDrop.
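As a concrete illustration of the mechanism described in the abstract, the following is a minimal, hypothetical PyTorch sketch of an InfoDrop-style layer: local self-information is approximated with a kernel-density estimate over neighbouring patches, and activations whose input regions carry little self-information (i.e. repetitive texture) are dropped with higher probability. The patch-based likelihood estimate, the softmax drop-probability schedule, and all names (local_self_information, info_drop, band_width, temperature) are illustrative assumptions, not the authors' implementation; see the linked repository for the reference code.

```python
import torch
import torch.nn.functional as F


def local_self_information(x, patch_size=3, band_width=1.0):
    """Estimate -log p(patch) at every spatial location of an image batch.

    p(patch) is approximated by a kernel density over the 3x3 neighbourhood of
    patches: a patch that closely resembles its neighbours (repetitive texture)
    gets high likelihood and therefore low self-information.
    """
    b, c, h, w = x.shape
    pad = patch_size // 2
    # Overlapping patches at every location, reshaped to (B, C*k*k, H, W).
    patches = F.unfold(F.pad(x, [pad] * 4, mode="reflect"), patch_size)
    patches = patches.view(b, c * patch_size ** 2, h, w)
    # For every location, gather the patches of its 8 neighbours (plus itself).
    neigh = F.unfold(F.pad(patches, [1] * 4, mode="reflect"), 3)
    neigh = neigh.view(b, c * patch_size ** 2, 9, h, w)
    dist2 = ((neigh - patches.unsqueeze(2)) ** 2).sum(dim=1)            # (B, 9, H, W)
    likelihood = torch.exp(-dist2 / (2 * band_width ** 2)).mean(dim=1)  # kernel density
    return -torch.log(likelihood + 1e-8)                                # (B, H, W)


def info_drop(features, info_map, drop_rate=0.1, temperature=0.1, training=True):
    """Dropout-like masking: activations whose input region carries little
    self-information (texture) are dropped with higher probability."""
    if not training or drop_rate == 0:
        return features
    # Low self-information -> large softmax weight -> high drop probability.
    weight = torch.softmax((-info_map / temperature).flatten(1), dim=1)
    drop_prob = (drop_rate * weight * weight.shape[1]).clamp(max=0.95)
    drop_prob = drop_prob.reshape(info_map.shape).unsqueeze(1)           # (B, 1, H, W)
    if drop_prob.shape[-2:] != features.shape[-2:]:
        drop_prob = F.interpolate(drop_prob, size=features.shape[-2:], mode="nearest")
    mask = torch.bernoulli(1.0 - drop_prob)
    return features * mask / (1.0 - drop_prob)                           # rescale as in Dropout


# Usage on an early convolutional layer (hypothetical example):
x = torch.rand(2, 3, 32, 32)                     # a batch of images
conv = torch.nn.Conv2d(3, 16, 3, padding=1)
info = local_self_information(x)                 # (2, 32, 32)
out = info_drop(conv(x), info, drop_rate=0.2)    # texture regions dropped more often
```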
Related papers
- Emergence of Shape Bias in Convolutional Neural Networks through
Activation Sparsity [8.54598311798543]
Current deep-learning models for object recognition are heavily biased toward texture.
In contrast, human visual systems are known to be biased toward shape and structure.
We show that sparse coding, a ubiquitous principle in the brain, can in itself introduce shape bias into the network.
arXiv Detail & Related papers (2023-10-29T04:07:52Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Random Padding Data Augmentation [23.70951896315126]
A convolutional neural network (CNN) learns to recognize the same object at different positions in images.
The usefulness of the features' spatial information in CNNs has not been well investigated.
We introduce Random Padding, a new type of padding method for training CNNs.
arXiv Detail & Related papers (2023-02-17T04:15:33Z)
- Does enhanced shape bias improve neural network robustness to common
corruptions? [14.607217936005817]
Recent work indicates that CNNs trained on ImageNet are biased towards features that encode textures.
It has been shown that augmenting the training data with different image styles decreases this texture bias in favor of increased shape bias.
We perform a systematic study of different ways of composing inputs based on natural images, explicit edge information, and stylization.
arXiv Detail & Related papers (2021-04-20T07:06:53Z)
- Shape-Texture Debiased Neural Network Training [50.6178024087048]
Convolutional Neural Networks are often biased towards either texture or shape, depending on the training dataset.
We develop an algorithm for shape-texture debiased learning.
Experiments show that our method successfully improves model performance on several image recognition benchmarks.
arXiv Detail & Related papers (2020-10-12T19:16:12Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
- Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model (a generic sketch of this standard training loop appears after this list).
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
- Teaching CNNs to mimic Human Visual Cognitive Process & regularise
Texture-Shape bias [18.003188982585737]
Recent experiments in computer vision demonstrate that texture bias is the primary reason for the strong results of models employing Convolutional Neural Networks (CNNs).
It is believed that the cost function forces the CNN to take a greedy approach and develop a proclivity for local information such as texture to increase accuracy, thus failing to explore any global statistics.
We propose CognitiveCNN, a new intuitive architecture, inspired by feature integration theory in psychology, to utilise human-interpretable features such as shape, texture, and edges to reconstruct and classify the image.
arXiv Detail & Related papers (2020-06-25T22:32:54Z)
- The shape and simplicity biases of adversarially robust ImageNet-trained
CNNs [9.707679445925516]
We study the shape bias and internal mechanisms that enable the generalizability of AlexNet, GoogLeNet, and ResNet-50 models trained via adversarial training.
Remarkably, adversarial training induces three simplicity biases into hidden neurons in the process of "robustifying" CNNs.
arXiv Detail & Related papers (2020-06-16T16:38:16Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
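For context on the adversarial-training entries above (e.g. Stylized Adversarial Defense and Adversarial Batch Normalization), the block below is a generic, minimal sketch of the standard PGD-based adversarial-training loop that such defenses build on. It is not either paper's specific method, and model, optimizer, and data names are placeholders.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=7):
    """Craft worst-case perturbation patterns inside an L-infinity ball of radius eps."""
    delta = torch.empty_like(images).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(images + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)   # ascend the loss, then project
        delta = delta.detach().requires_grad_(True)
    return (images + delta).clamp(0, 1).detach()                 # keep a valid pixel range


def adversarial_training_step(model, optimizer, images, labels):
    """One optimizer step on the crafted adversarial examples,
    i.e. the perturbation patterns are included in the training set."""
    adv = pgd_attack(model, images, labels)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```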
This list is automatically generated from the titles and abstracts of the papers in this site.