Teaching CNNs to mimic Human Visual Cognitive Process & regularise Texture-Shape bias
- URL: http://arxiv.org/abs/2006.14722v2
- Date: Mon, 10 Jan 2022 00:43:26 GMT
- Title: Teaching CNNs to mimic Human Visual Cognitive Process & regularise Texture-Shape bias
- Authors: Satyam Mohla, Anshul Nasery and Biplab Banerjee
- Abstract summary: Recent experiments in computer vision demonstrate texture bias as the primary reason for the strong results of models employing Convolutional Neural Networks (CNNs).
It is believed that the cost function forces the CNN to take a greedy approach and develop a proclivity for local information, like texture, to increase accuracy, thus failing to explore any global statistics.
We propose CognitiveCNN, a new intuitive architecture, inspired by feature integration theory in psychology, which uses human-interpretable features such as shape, texture, and edges to reconstruct and classify the image.
- Score: 18.003188982585737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent experiments in computer vision demonstrate texture bias as the primary reason for the strong results of models employing Convolutional Neural Networks (CNNs), conflicting with early works claiming that these networks identify objects using shape. It is believed that the cost function forces the CNN to take a greedy approach and develop a proclivity for local information, like texture, to increase accuracy, thus failing to explore any global statistics. We propose CognitiveCNN, a new intuitive architecture, inspired by feature integration theory in psychology, which uses human-interpretable features such as shape, texture, and edges to reconstruct and classify the image. We define novel metrics to quantify the "relevance" of the "abstract information" present in these modalities using attention maps. We further introduce a regularisation method which ensures that each modality, such as shape or texture, gets a proportionate influence in a given task, as it does for reconstruction; we perform experiments to show the resulting boost in accuracy and robustness, besides imparting explainability to these CNNs, which helps them achieve superior performance in object recognition.
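As a rough illustration of the architecture the abstract describes, here is a minimal sketch: hand-crafted modality streams (a Sobel-edge "shape" stream and a high-frequency-residual "texture" stream), one encoder per modality, a soft attention vector per task, and a regulariser that pulls the task's modality attention toward the reconstruction one. All module choices are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def edge_modality(x):
    """Crude shape/edge stream: Sobel gradient magnitude (an assumption)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    k = torch.stack([kx, kx.t()]).unsqueeze(1).to(x)       # (2, 1, 3, 3)
    g = F.conv2d(x.mean(1, keepdim=True), k, padding=1)    # greyscale gradients
    return g.pow(2).sum(1, keepdim=True).sqrt()

def texture_modality(x):
    """Crude texture stream: high-frequency residual after a local blur."""
    return (x - F.avg_pool2d(x, 3, stride=1, padding=1)).mean(1, keepdim=True)

class CognitiveCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 16, 64), nn.ReLU())
        self.enc_shape, self.enc_texture = encoder(), encoder()
        self.attn_cls = nn.Linear(128, 2)   # modality attention for the task
        self.attn_rec = nn.Linear(128, 2)   # modality attention for reconstruction
        self.classifier = nn.Linear(64, n_classes)
        self.decoder = nn.Linear(64, 3 * 32 * 32)

    def forward(self, x):
        feats = torch.stack([self.enc_shape(edge_modality(x)),
                             self.enc_texture(texture_modality(x))], dim=1)
        flat = feats.flatten(1)                              # (B, 2*64)
        a_cls = F.softmax(self.attn_cls(flat), dim=1)        # (B, 2) relevance
        a_rec = F.softmax(self.attn_rec(flat), dim=1)
        logits = self.classifier((a_cls.unsqueeze(-1) * feats).sum(1))
        recon = self.decoder((a_rec.unsqueeze(-1) * feats).sum(1))
        return logits, recon.view(-1, 3, 32, 32), a_cls, a_rec

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
logits, recon, a_cls, a_rec = CognitiveCNN()(x)
# Task loss + reconstruction loss + a "proportionate influence" regulariser
# that keeps the task's modality attention close to the reconstruction one.
loss = (F.cross_entropy(logits, y) + F.mse_loss(recon, x)
        + F.mse_loss(a_cls, a_rec.detach()))
loss.backward()
```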
Related papers
- Random Padding Data Augmentation [23.70951896315126]
A convolutional neural network (CNN) must learn to recognise the same object appearing at different positions in an image.
The usefulness of the spatial information carried by CNN features has not been well investigated.
We introduce Random Padding, a new type of padding method for training CNNs.
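A minimal sketch of what such an augmentation might look like, assuming zero-padding by a random amount on each side followed by a resize back to the original resolution; the paper's exact recipe may differ.

```python
import torch
import torch.nn.functional as F

def random_padding(img: torch.Tensor, max_pad: int = 8) -> torch.Tensor:
    """img: (C, H, W) float tensor -> randomly zero-padded, resized to (C, H, W)."""
    c, h, w = img.shape
    left, right, top, bottom = torch.randint(0, max_pad + 1, (4,)).tolist()
    padded = F.pad(img, (left, right, top, bottom), value=0.0)
    # Resize back to the original resolution so batch shapes stay constant.
    return F.interpolate(padded.unsqueeze(0), size=(h, w),
                         mode="bilinear", align_corners=False).squeeze(0)

img = torch.rand(3, 32, 32)
aug = random_padding(img)   # same shape, object shifted by the padding
assert aug.shape == img.shape
```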
arXiv Detail & Related papers (2023-02-17T04:15:33Z)
- A novel feature-scrambling approach reveals the capacity of convolutional neural networks to learn spatial relations [0.0]
Convolutional neural networks (CNNs) are one of the most successful computer vision systems to solve object recognition.
Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from those of humans.
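A common probe in this spirit is scrambling image patches to destroy global spatial relations while preserving local statistics. The sketch below is a generic patch scrambler with an arbitrary grid size, not the paper's exact feature-scrambling procedure.

```python
import torch

def scramble_patches(img: torch.Tensor, grid: int = 4) -> torch.Tensor:
    """img: (C, H, W) with H and W divisible by `grid`."""
    c, h, w = img.shape
    ph, pw = h // grid, w // grid
    # Cut into (grid * grid) patches of shape (C, ph, pw).
    patches = (img.view(c, grid, ph, grid, pw)
                  .permute(1, 3, 0, 2, 4)
                  .reshape(grid * grid, c, ph, pw))
    patches = patches[torch.randperm(grid * grid)]   # destroy global layout
    # Stitch the shuffled patches back into an image.
    return (patches.view(grid, grid, c, ph, pw)
                   .permute(2, 0, 3, 1, 4)
                   .reshape(c, h, w))

img = torch.rand(3, 32, 32)
assert scramble_patches(img).shape == img.shape
```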
arXiv Detail & Related papers (2022-12-12T16:40:29Z)
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models of their functional analogue in the brain, the ventral stream of the visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
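The dimensionality of object representations is often estimated with the participation ratio of the covariance eigenvalues of a layer's activations; the sketch below uses that estimator as an illustrative assumption, not necessarily the paper's measure.

```python
import numpy as np

def participation_ratio(acts: np.ndarray) -> float:
    """acts: (n_samples, n_units) activations; returns effective dimensionality."""
    eig = np.linalg.eigvalsh(np.cov(acts, rowvar=False))
    eig = np.clip(eig, 0.0, None)                 # clamp numerical negatives
    return float(eig.sum() ** 2 / (eig ** 2).sum())

rng = np.random.default_rng(0)
high_d = rng.normal(size=(500, 100))                  # ~isotropic: high dimensionality
low_d = np.outer(high_d[:, 0], rng.normal(size=100))  # rank one: ~1-dimensional
print(participation_ratio(high_d), participation_ratio(low_d))
```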
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer while using fewer parameters, and transfer to a new task in a sample-efficient manner.
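A toy sketch of the routing idea follows: a small set of learned MLP "functions" combined by a learned soft router. The real Neural Interpreters architecture is considerably more involved; this only conveys the flavour of end-to-end learned routing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftModuleRouting(nn.Module):
    """A few learned 'functions' plus a router that mixes them per input."""
    def __init__(self, dim: int = 32, n_modules: int = 4):
        super().__init__()
        self.funcs = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(n_modules))
        self.router = nn.Linear(dim, n_modules)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, dim)
        weights = F.softmax(self.router(x), dim=-1)        # (B, M) routing weights
        outs = torch.stack([f(x) for f in self.funcs], 1)  # (B, M, dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)   # weighted mix

x = torch.randn(5, 32)
print(SoftModuleRouting()(x).shape)   # torch.Size([5, 32])
```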
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
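A minimal sketch of visualizing by optimizing the input image directly, with an activation term and a distance term loosely mirroring the dual objective above; the backbone and loss weighting here are stand-ins.

```python
import torch
import torch.nn as nn

# Stand-in backbone; in practice this would be a trained CNN truncated at the
# layer whose features we want to depict.
cnn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
for p in cnn.parameters():
    p.requires_grad_(False)            # only the image is optimized

reference = torch.rand(1, 3, 32, 32)   # image to stay visually close to
img = reference.clone().requires_grad_(True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    opt.zero_grad()
    activation = cnn(img).mean()                 # "activation" objective
    distance = (img - reference).pow(2).mean()   # "distance" objective
    (-activation + 0.1 * distance).backward()    # maximize act., stay near ref.
    opt.step()
```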
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Shape or Texture: Understanding Discriminative Features in CNNs [28.513300496205044]
Recent studies have shown that CNNs actually exhibit a 'texture bias'.
We show that a network learns the majority of overall shape information during the first few epochs of training.
We also show that the encoding of shape does not imply the encoding of localized per-pixel semantic information.
arXiv Detail & Related papers (2021-01-27T18:54:00Z)
- Informative Dropout for Robust Representation Learning: A Shape-bias Perspective [84.30946377024297]
We propose a lightweight, model-agnostic method, namely Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias.
Specifically, we discriminate texture from shape based on local self-information in an image, and adopt a Dropout-like algorithm to decorrelate the model output from the local texture.
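A hedged approximation of this idea: score each spatial position by how distinct it is from its neighbourhood (a stand-in for the paper's local self-information measure) and drop activations more aggressively where the score is low.

```python
import torch
import torch.nn.functional as F

def info_drop(feat: torch.Tensor, drop_scale: float = 0.5) -> torch.Tensor:
    """feat: (B, C, H, W) feature map -> feature map with texture-biased dropout."""
    # Local distinctiveness: distance of each position from its neighbourhood mean.
    neigh_mean = F.avg_pool2d(feat, 3, stride=1, padding=1)
    score = (feat - neigh_mean).abs().mean(dim=1, keepdim=True)     # (B, 1, H, W)
    score = score / (score.amax(dim=(2, 3), keepdim=True) + 1e-8)   # to [0, 1]
    keep_prob = (1.0 - drop_scale) + drop_scale * score  # low score -> drop more
    mask = torch.bernoulli(keep_prob)
    return feat * mask / keep_prob.clamp(min=1e-8)       # inverted-dropout rescale

feat = torch.randn(2, 16, 32, 32)
out = info_drop(feat)   # repetitive (texture-like) regions are zeroed more often
```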
arXiv Detail & Related papers (2020-08-10T16:52:24Z)
- Eigen-CAM: Class Activation Map using Principal Components [1.2691047660244335]
This paper builds on previous ideas to cope with the increasing demand for interpretable, robust, and transparent models.
The proposed Eigen-CAM computes and visualizes the principal components of the learned features/representations from the convolutional layers.
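Since the summary states the recipe directly, a minimal sketch is straightforward: project a convolutional layer's activations onto their first principal component to obtain a class-agnostic saliency map. The backbone is left out and the final ReLU is an illustrative choice.

```python
import torch

def eigen_cam(feature_map: torch.Tensor) -> torch.Tensor:
    """feature_map: (C, H, W) conv activations -> (H, W) saliency map."""
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, h * w).T           # (H*W, C): positions x channels
    flat = flat - flat.mean(dim=0, keepdim=True)
    # First right-singular vector = first principal component over channels.
    _, _, vh = torch.linalg.svd(flat, full_matrices=False)
    cam = (flat @ vh[0]).reshape(h, w)               # project onto 1st component
    return torch.relu(cam)                           # keep positive evidence

fmap = torch.randn(64, 14, 14)   # e.g. activations from a conv layer
print(eigen_cam(fmap).shape)     # torch.Size([14, 14])
```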
arXiv Detail & Related papers (2020-08-01T17:14:13Z)
- Decoding CNN based Object Classifier Using Visualization [6.666597301197889]
We visualize what types of features are extracted in the different convolutional layers of a CNN.
Visualizing heat maps of activations helps us understand how a CNN classifies and localizes different objects in an image.
arXiv Detail & Related papers (2020-07-15T05:01:27Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
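A minimal sketch of such a curriculum, assuming a depthwise Gaussian blur on feature maps whose strength anneals to zero over training; the kernel size and annealing schedule are assumptions.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma: float, size: int = 5) -> torch.Tensor:
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()          # normalized 2-D low-pass kernel

def smooth_features(feat: torch.Tensor, sigma: float) -> torch.Tensor:
    """feat: (B, C, H, W); the same Gaussian is applied depthwise per channel."""
    if sigma <= 1e-3:           # curriculum finished: pass features through
        return feat
    c = feat.shape[1]
    k = gaussian_kernel(sigma).to(feat).repeat(c, 1, 1, 1)  # (C, 1, 5, 5)
    return F.conv2d(feat, k, padding=2, groups=c)

feat = torch.randn(2, 16, 32, 32)
for epoch in range(5):
    sigma = 2.0 * (1 - epoch / 5)   # anneal the low-pass filter away
    out = smooth_features(feat, sigma)
```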
arXiv Detail & Related papers (2020-03-03T07:27:44Z)