Matching the Neuronal Representations of V1 is Necessary to Improve
Robustness in CNNs with V1-like Front-ends
- URL: http://arxiv.org/abs/2310.10575v1
- Date: Mon, 16 Oct 2023 16:52:15 GMT
- Title: Matching the Neuronal Representations of V1 is Necessary to Improve
Robustness in CNNs with V1-like Front-ends
- Authors: Ruxandra Barbulescu, Tiago Marques, Arlindo L. Oliveira
- Abstract summary: Recently, it was shown that simulating computations in early visual areas at the front of convolutional neural networks leads to improvements in robustness to image corruptions.
Here, we show that the neuronal representations that emerge from precisely matching the distribution of receptive field (RF) properties found in primate V1 are key for this improvement in robustness.
- Score: 1.8434042562191815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While some convolutional neural networks (CNNs) have achieved great success
in object recognition, they struggle to identify objects in images corrupted
with different types of common noise patterns. Recently, it was shown that
simulating computations in early visual areas at the front of CNNs leads to
improvements in robustness to image corruptions. Here, we further explore this
result and show that the neuronal representations that emerge from precisely
matching the distribution of receptive field (RF) properties found in primate V1 are key for this
improvement in robustness. We built two variants of a model with a front-end
modeling the primate primary visual cortex (V1): one sampling RF properties
uniformly and the other sampling from empirical biological distributions. The
model with the biological sampling has considerably higher robustness to
image corruptions than the uniform variant (relative difference of 8.72%).
While similar neuronal sub-populations across the two variants have similar
response properties and learn similar downstream weights, the impact on
downstream processing is strikingly different. This result sheds light on the
origin of the improvements in robustness observed in some biologically-inspired
models, pointing to the need to precisely mimic the neuronal representations found in the primate brain.
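To make the comparison concrete, here is a minimal sketch, assuming a VOneNet-style fixed Gabor filter bank at the front of the CNN (an assumption; the abstract does not name the exact front-end), of how the two variants could differ only in how receptive field parameters are sampled: uniformly over plausible ranges versus from empirical, biology-derived marginals. The ranges and distributions below are illustrative placeholders, not the values used in the paper.

```python
# Illustrative sketch (not the authors' code): build a fixed Gabor filter bank
# for a V1-like front-end, drawing RF properties either uniformly or from an
# empirical, biology-derived distribution.
import numpy as np

def gabor(size, sf, theta, sigma, phase):
    """2D Gabor filter: spatial frequency `sf` (cycles/pixel), orientation
    `theta` (rad), isotropic envelope width `sigma` (pixels), `phase` (rad)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rotated = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * sf * rotated + phase)

def sample_uniform(n, rng):
    """Uniform variant: RF properties drawn uniformly over plausible ranges."""
    return {"sf": rng.uniform(0.05, 0.45, n),       # cycles/pixel
            "theta": rng.uniform(0, np.pi, n),      # preferred orientation
            "sigma": rng.uniform(2.0, 8.0, n),      # envelope width
            "phase": rng.uniform(0, 2 * np.pi, n)}

def sample_empirical(n, rng):
    """Biological variant: RF properties drawn from (placeholder) empirical
    marginals, e.g. a heavy-tailed spread of spatial frequencies; the real
    model would use published primate V1 distributions."""
    return {"sf": np.clip(rng.lognormal(np.log(0.15), 0.6, n), 0.02, 0.45),
            "theta": rng.uniform(0, np.pi, n),
            "sigma": np.clip(rng.lognormal(np.log(4.0), 0.4, n), 1.0, 10.0),
            "phase": rng.uniform(0, 2 * np.pi, n)}

def build_filter_bank(params, size=25):
    return np.stack([gabor(size, sf, th, sg, ph)
                     for sf, th, sg, ph in zip(params["sf"], params["theta"],
                                               params["sigma"], params["phase"])])

rng = np.random.default_rng(0)
bank_uniform = build_filter_bank(sample_uniform(128, rng))       # (128, 25, 25)
bank_biological = build_filter_bank(sample_empirical(128, rng))  # (128, 25, 25)
```

Each bank would then serve as fixed convolutional weights at the front of an otherwise identical CNN, and the two variants compared on corrupted versions of the test images.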
Related papers
- Explicitly Modeling Pre-Cortical Vision with a Neuro-Inspired Front-End Improves CNN Robustness [1.8434042562191815]
CNNs struggle to classify images corrupted with common corruptions.
Recent work has shown that incorporating a CNN front-end block that simulates some features of the primate primary visual cortex (V1) can improve overall model robustness.
We introduce two novel biologically-inspired CNN model families that incorporate a new front-end block designed to simulate pre-cortical visual processing.
arXiv Detail & Related papers (2024-09-25T11:43:29Z)
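As a rough illustration of what a pre-cortical front-end block might look like, the sketch below prepends fixed center-surround (difference-of-Gaussians) filtering and local divisive contrast normalization to a CNN backbone. The block structure, kernel sizes, and normalization scheme are assumptions for illustration, not the model families introduced in that paper.

```python
# Hedged sketch: a fixed "pre-cortical" stage (retina/LGN-like) placed before a
# CNN backbone; difference-of-Gaussians filtering plus divisive normalization.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def dog_kernel(size=15, sigma_c=1.0, sigma_s=3.0):
    """ON-centre difference-of-Gaussians kernel, shape (1, 1, size, size)."""
    half = size // 2
    coords = torch.arange(-half, half + 1, dtype=torch.float32)
    y, x = torch.meshgrid(coords, coords, indexing="ij")
    r2 = x**2 + y**2
    center = torch.exp(-r2 / (2 * sigma_c**2)) / (2 * math.pi * sigma_c**2)
    surround = torch.exp(-r2 / (2 * sigma_s**2)) / (2 * math.pi * sigma_s**2)
    return (center - surround)[None, None]

class PreCorticalBlock(nn.Module):
    """Grayscale -> center-surround filtering -> local divisive normalization."""
    def __init__(self, size=15):
        super().__init__()
        self.register_buffer("dog", dog_kernel(size))
        self.pad = size // 2

    def forward(self, x):                        # x: (B, 3, H, W) in [0, 1]
        gray = x.mean(dim=1, keepdim=True)       # crude luminance channel
        resp = F.conv2d(gray, self.dog, padding=self.pad)
        local = F.avg_pool2d(resp.abs(), kernel_size=9, stride=1, padding=4)
        return resp / (local + 1e-3)             # contrast-normalized output

# The single-channel output would feed a CNN whose first layer expects 1 channel.
```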
- Benchmarking Out-of-Distribution Generalization Capabilities of DNN-based Encoding Models for the Ventral Visual Cortex [26.91313901714098]
MacaqueITBench is a large-scale dataset of neural population responses from the macaque inferior temporal (IT) cortex.
We investigated the impact of distribution shifts on models predicting neural activity by dividing the images into Out-Of-Distribution (OOD) train and test splits.
arXiv Detail & Related papers (2024-06-16T20:33:57Z)
- Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z)
- V1T: large-scale mouse V1 response prediction using a Vision Transformer [1.5703073293718952]
We introduce V1T, a novel Vision Transformer based architecture that learns a shared visual and behavioral representation across animals.
We evaluate our model on two large datasets recorded from mouse primary visual cortex and outperform previous convolution-based models by more than 12.7% in prediction performance.
arXiv Detail & Related papers (2023-02-06T18:58:38Z)
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z)
- DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z)
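The statistic named in the DeepDC title, distance correlation, can be computed as in the sketch below; how DeepDC extracts and aggregates pre-trained DNN features across layers is not given in this summary, so the feature preparation here is only an assumption.

```python
# Hedged sketch: biased sample distance correlation between two feature sets,
# treating spatial positions of a feature map as paired samples.
import numpy as np

def distance_correlation(X, Y):
    """X: (n, p), Y: (n, q) paired samples; returns dCor in [0, 1]."""
    def centered(A):
        d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)  # pairwise dists
        return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()
    a, b = centered(X), centered(Y)
    dcov2 = max((a * b).mean(), 0.0)
    denom = np.sqrt((a * a).mean() * (b * b).mean())
    return 0.0 if denom == 0 else float(np.sqrt(dcov2 / denom))

# Example: reference vs. mildly distorted feature maps of shape (C, H, W)
rng = np.random.default_rng(0)
feat_ref = rng.standard_normal((64, 7, 7))
feat_dist = feat_ref + 0.1 * rng.standard_normal((64, 7, 7))
X = feat_ref.reshape(64, -1).T   # (49, 64): spatial positions as samples
Y = feat_dist.reshape(64, -1).T
print(distance_correlation(X, Y))  # close to 1 for a mild distortion
```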
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream in visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
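One common way to quantify the dimensionality expansion/reduction mentioned above is the participation ratio of the activation covariance spectrum; the sketch below uses that estimator purely as an illustration, and the paper may rely on a different dimensionality measure.

```python
# Hedged sketch: participation ratio as an estimate of representational
# dimensionality, computed per layer from stimulus-by-unit activations.
import numpy as np

def participation_ratio(activations):
    """activations: (n_stimuli, n_units). PR = (sum(l))^2 / sum(l^2), where l
    are eigenvalues of the activation covariance across stimuli."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (centered.shape[0] - 1)
    eig = np.clip(np.linalg.eigvalsh(cov), 0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

# Tracking this quantity layer by layer shows where a network expands or
# compresses the dimensionality of its object representations.
```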
- A precortical module for robust CNNs to light variations [0.0]
We present a simple mathematical model of the mammalian early visual pathway, taking into account its key elements: the retina, the lateral geniculate nucleus (LGN), and the primary visual cortex (V1).
The analogies between the cortical level of the visual system and the structure of popular CNNs used in image classification suggest introducing an additional preliminary convolutional module, inspired by precortical neuronal circuits, to improve robustness to global light intensity and contrast variations in the input images.
We validate this hypothesis on the popular databases MNIST, FashionMNIST and SVHN, obtaining CNNs that are significantly more robust to these variations.
arXiv Detail & Related papers (2022-02-15T14:18:40Z)
- Combining Different V1 Brain Model Variants to Improve Robustness to Image Corruptions in CNNs [5.875680381119361]
We show that simulating a primary visual cortex (V1) at the front of convolutional neural networks (CNNs) leads to small improvements in robustness to image perturbations.
We build a new model using an ensembling technique, which combines multiple individual models with different V1 front-end variants.
We show that using distillation, it is possible to partially compress the knowledge in the ensemble model into a single model with a V1 front-end.
arXiv Detail & Related papers (2021-10-20T16:35:09Z)
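A minimal sketch of the two ingredients named above, logit-averaging over V1-variant ensemble members and standard temperature-scaled distillation into a single student, is given below; the temperature and loss weighting are placeholders, not the paper's recipe.

```python
# Hedged sketch: average the logits of models that differ only in their V1
# front-end variant, then distill the ensemble into a single student model.
import torch
import torch.nn.functional as F

def ensemble_logits(models, images):
    """Average class logits over the ensemble members (used as the teacher)."""
    with torch.no_grad():
        return torch.stack([m(images) for m in models]).mean(dim=0)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Temperature-scaled soft-target KL blended with ordinary cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Training step (sketch):
#   teacher = ensemble_logits(v1_variant_models, images)
#   loss = distillation_loss(student(images), teacher, labels)
```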
- Prediction of progressive lens performance from neural network simulations [62.997667081978825]
The purpose of this study is to present a framework to predict visual acuity (VA) based on a convolutional neural network (CNN).
The proposed holistic simulation tool was shown to act as an accurate model for subjective visual performance.
arXiv Detail & Related papers (2021-03-19T14:51:02Z)
- Fooling the primate brain with minimal, targeted image manipulation [67.78919304747498]
We propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior.
Our work shares the same goal as adversarial attacks, namely the manipulation of images with minimal, targeted noise that leads ANN models to misclassify the images.
arXiv Detail & Related papers (2020-11-11T08:30:54Z)
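Since this summary frames the method as sharing its goal with adversarial attacks, the sketch below shows a generic gradient-based, minimally-sized targeted perturbation against a differentiable model; the procedures the paper uses to target neuronal activity and behavior may differ substantially.

```python
# Hedged sketch: optimize a small additive perturbation so a differentiable
# model assigns the (single) input image to a chosen target class, while an
# L2 penalty keeps the perturbation minimal.
import torch

def targeted_perturbation(model, image, target_class, steps=100, lr=0.01, l2_weight=0.1):
    """image: (1, C, H, W); returns the optimized perturbation delta."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(image + delta)
        loss = -logits[0, target_class] + l2_weight * delta.norm()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()
```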