A precortical module for robust CNNs to light variations
- URL: http://arxiv.org/abs/2202.07432v1
- Date: Tue, 15 Feb 2022 14:18:40 GMT
- Title: A precortical module for robust CNNs to light variations
- Authors: R. Fioresi, J. Petkovic
- Abstract summary: We present a simple mathematical model for the mammalian low visual pathway, taking into account its key elements: retina, lateral geniculate nucleus (LGN), and primary visual cortex (V1).
The analogies between the cortical level of the visual system and the structure of popular CNNs used in image classification tasks suggest the introduction of an additional preliminary convolutional module, inspired by precortical neuronal circuits, to improve robustness with respect to global light intensity and contrast variations in the input images.
We validate our hypothesis on the popular databases MNIST, FashionMNIST, and SVHN, obtaining significantly more robust CNNs with respect to these variations once such an extra module is added.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a simple mathematical model for the mammalian low visual pathway, taking into account its key elements: retina, lateral geniculate nucleus (LGN), and primary visual cortex (V1). The analogies between the cortical level of the visual system and the structure of popular CNNs used in image classification tasks suggest the introduction of an additional preliminary convolutional module, inspired by precortical neuronal circuits, to improve robustness with respect to global light intensity and contrast variations in the input images. We validate our hypothesis on the popular databases MNIST, FashionMNIST, and SVHN, obtaining significantly more robust CNNs with respect to these variations once such an extra module is added.
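The abstract describes the extra module only at a high level; as an illustration, below is a minimal PyTorch sketch of the idea: a shallow convolutional front-end, loosely mimicking retina-to-LGN-to-V1 filtering, prepended to an unmodified base classifier. All layer counts, kernel sizes, and the helper name are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PrecorticalModule(nn.Module):
    """Hypothetical retina/LGN-inspired front-end: a shallow stack of
    convolutions meant to factor out global light intensity and contrast
    changes before the main classifier sees the image. Filter counts and
    kernel sizes are illustrative assumptions."""

    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.layers = nn.Sequential(
            # "retina": local filtering (center-surround-like kernels can emerge here)
            nn.Conv2d(in_channels, 8, kernel_size=5, padding=2),
            nn.ReLU(),
            # "LGN": further channel mixing / gain control
            nn.Conv2d(8, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            # project back to the input channel count so any base CNN attaches unchanged
            nn.Conv2d(8, in_channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

def with_precortical_front_end(base_cnn: nn.Module, in_channels: int = 1) -> nn.Module:
    """Prepend the extra module to an existing classifier."""
    return nn.Sequential(PrecorticalModule(in_channels), base_cnn)
```

Under this reading, robustness would be probed by perturbing test images with a global rescaling of the form x -> a*x + b (intensity and contrast shifts) and comparing accuracy with and without the front-end.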
Related papers
- Explicitly Modeling Pre-Cortical Vision with a Neuro-Inspired Front-End Improves CNN Robustness [1.8434042562191815]
CNNs struggle to classify images degraded by common corruptions.
Recent work has shown that incorporating a CNN front-end block that simulates some features of the primate primary visual cortex (V1) can improve overall model robustness.
We introduce two novel biologically-inspired CNN model families that incorporate a new front-end block designed to simulate pre-cortical visual processing.
arXiv Detail & Related papers (2024-09-25T11:43:29Z)
- Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
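The AAT module in the entry above learns spatial transformations that warp images for local network training; below is a hedged sketch of that kind of module in the spirit of spatial transformer networks. The AC-Former's actual design is not reproduced here, and every layer choice is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineWarp(nn.Module):
    """Sketch of a learned affine warp (spatial-transformer style):
    a small CNN regresses 6 affine parameters, which are then used to
    resample the input image. Purely illustrative of the AAT idea."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.localization = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, 6),
        )
        # initialize the regressor to the identity transform
        self.localization[-1].weight.data.zero_()
        self.localization[-1].bias.data.copy_(
            torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = self.localization(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```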
- Matching the Neuronal Representations of V1 is Necessary to Improve Robustness in CNNs with V1-like Front-ends [1.8434042562191815]
Recently, it was shown that simulating computations in early visual areas at the front of convolutional neural networks leads to improvements in robustness to image corruptions.
Here, we show that the neuronal representations that emerge from precisely matching the distribution of RF properties found in primate V1 are key to this improvement in robustness.
arXiv Detail & Related papers (2023-10-16T16:52:15Z)
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream of the visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
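One standard way to quantify the dimensionality expansion/reduction discussed in the entry above is the participation ratio of the activation covariance spectrum; whether this is the paper's exact measure is an assumption, but it illustrates the kind of statistic involved. A minimal NumPy sketch:

```python
import numpy as np

def participation_ratio(activations: np.ndarray) -> float:
    """Effective dimensionality of a (samples x features) activation
    matrix: PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues)
    of the feature covariance. Ranges from 1 up to n_features."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = np.clip(eigvals, 0.0, None)  # guard tiny negative eigenvalues
    return float(eigvals.sum() ** 2 / (eigvals ** 2).sum())
```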
- Improving Neural Predictivity in the Visual Cortex with Gated Recurrent Connections [0.0]
We aim to shift the focus to architectures that take into account lateral recurrent connections, a ubiquitous feature of the ventral visual stream, to devise adaptive receptive fields.
In order to increase the robustness of our approach and the biological fidelity of the activations, we employ specific data augmentation techniques.
arXiv Detail & Related papers (2022-03-22T17:27:22Z)
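A hedged sketch of what a gated recurrent lateral connection could look like: a convolutional GRU-style cell iterated a few steps over a fixed feature map, so that units integrate context beyond their feedforward receptive field. This illustrates the general mechanism, not the paper's cell.

```python
import torch
import torch.nn as nn

class GatedLateralCell(nn.Module):
    """ConvGRU-style lateral recurrence over a single feature map.
    Iterating the cell lets units adapt their effective receptive
    field. Illustrative assumption, not the paper's architecture."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        p = kernel_size // 2
        self.update = nn.Conv2d(2 * channels, channels, kernel_size, padding=p)
        self.reset = nn.Conv2d(2 * channels, channels, kernel_size, padding=p)
        self.cand = nn.Conv2d(2 * channels, channels, kernel_size, padding=p)

    def forward(self, x: torch.Tensor, steps: int = 3) -> torch.Tensor:
        h = torch.zeros_like(x)
        for _ in range(steps):
            z = torch.sigmoid(self.update(torch.cat([x, h], dim=1)))  # update gate
            r = torch.sigmoid(self.reset(torch.cat([x, h], dim=1)))   # reset gate
            h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
            h = (1 - z) * h + z * h_tilde
        return h
```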
- Scopeformer: n-CNN-ViT Hybrid Model for Intracranial Hemorrhage Classification [0.0]
We propose a feature generator composed of an ensemble of convolutional neural networks (CNNs) to improve Vision Transformer (ViT) models.
We show that by gradually stacking several feature maps extracted using multiple Xception CNNs, we can develop a feature-rich input for the ViT model.
arXiv Detail & Related papers (2021-07-07T20:20:24Z)
- Emergence of Lie symmetries in functional architectures learned by CNNs [63.69764116066748]
We study the spontaneous development of symmetries in the early layers of a Convolutional Neural Network (CNN) during learning on natural images.
Our architecture is built to mimic the early stages of biological visual systems.
arXiv Detail & Related papers (2021-04-17T13:23:26Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
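The dual-objective loss above can be pictured as plain gradient ascent on an input image: maximize a target layer's activation while a distance term keeps the image close to a reference. The sketch below is an assumption about how such an optimization could be wired up; `layer_output_fn`, the weighting, and the optimizer choice are all illustrative, not the paper's exact losses.

```python
import torch

def visualize_features(layer_output_fn, reference: torch.Tensor,
                       steps: int = 200, lr: float = 0.05,
                       dist_weight: float = 0.1) -> torch.Tensor:
    """Gradient-ascent visualization with a dual objective: maximize the
    mean activation of a target layer while penalizing distance from a
    reference image (no generator network required). `layer_output_fn`
    maps an image batch to the target layer's activations, e.g. via a
    forward hook. Illustrative only."""
    img = reference.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        activation = layer_output_fn(img).mean()       # activation objective
        distance = (img - reference).pow(2).mean()     # distance objective
        loss = -activation + dist_weight * distance
        loss.backward()
        opt.step()
    return img.detach()
```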
- ACDC: Weight Sharing in Atom-Coefficient Decomposed Convolution [57.635467829558664]
We introduce a structural regularization across convolutional kernels in a CNN.
We show that CNNs maintain performance with a dramatic reduction in parameters and computation.
arXiv Detail & Related papers (2020-09-04T20:41:47Z)
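The atom-coefficient idea named in the title above can be pictured as expressing each convolutional kernel as a linear combination of a small shared bank of kernel "atoms"; below is a hedged sketch of such a layer (the paper's exact parameterization may differ).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AtomCoefficientConv2d(nn.Module):
    """Convolution whose kernels are linear combinations of a small shared
    bank of kernel 'atoms'. Sharing atoms across filters acts as a
    structural regularizer and cuts parameters. Sketch of the ACDC idea."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3,
                 num_atoms: int = 6):
        super().__init__()
        # shared atoms: (num_atoms, k, k)
        self.atoms = nn.Parameter(
            torch.randn(num_atoms, kernel_size, kernel_size) * 0.1)
        # per-filter mixing coefficients: (out_ch, in_ch, num_atoms)
        self.coeffs = nn.Parameter(torch.randn(out_ch, in_ch, num_atoms) * 0.1)
        self.padding = kernel_size // 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # weight[o, i] = sum_a coeffs[o, i, a] * atoms[a]
        weight = torch.einsum('oia,akl->oikl', self.coeffs, self.atoms)
        return F.conv2d(x, weight, padding=self.padding)
```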
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.