Improving Neural Predictivity in the Visual Cortex with Gated Recurrent
Connections
- URL: http://arxiv.org/abs/2203.11910v1
- Date: Tue, 22 Mar 2022 17:27:22 GMT
- Title: Improving Neural Predictivity in the Visual Cortex with Gated Recurrent
Connections
- Authors: Simone Azeglio, Simone Poetto, Luca Savant Aira, Marco Nurisso
- Abstract summary: We aim to shift the focus to architectures that take into account lateral recurrent connections, a ubiquitous feature of the ventral visual stream, to devise adaptive receptive fields.
In order to increase the robustness of our approach and the biological fidelity of the activations, we employ specific data augmentation techniques.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Computational models of vision have traditionally been developed in a
bottom-up fashion, by hierarchically composing a series of straightforward
operations - i.e. convolution and pooling - with the aim of emulating simple
and complex cells in the visual cortex, resulting in the introduction of deep
convolutional neural networks (CNNs). Nevertheless, data obtained with recent
neuronal recording techniques suggest that the nature of the computations
carried out in the ventral visual stream is not completely captured by current
deep CNN models. To fill the gap between the ventral visual stream and deep
models, several benchmarks have been designed and organized into the
Brain-Score platform, providing a way to perform multi-layer (V1, V2, V4, IT)
and behavioral comparisons between the two counterparts. In our work, we aim to
shift the focus to architectures that take into account lateral recurrent
connections, a ubiquitous feature of the ventral visual stream, to devise
adaptive receptive fields. Through recurrent connections, the input's
long-range spatial dependencies can be captured in a local multi-step fashion
and, as introduced with Gated Recurrent CNNs (GRCNN), the unbounded expansion
of the neuron's receptive fields can be modulated through the use of gates. In
order to increase the robustness of our approach and the biological fidelity of
the activations, we employ specific data augmentation techniques in line with
several of the scoring benchmarks. Enforcing some form of invariance through
these heuristics was found to be beneficial for neural predictivity.
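To make the gating mechanism concrete, below is a minimal PyTorch sketch of a gated recurrent convolutional layer. This is not the authors' implementation: the layer widths, kernel sizes, number of recurrent steps, and the exact gating scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedRecurrentConv(nn.Module):
    """Sketch of a gated lateral-recurrent convolution: each step
    widens the effective receptive field by the recurrent kernel's
    radius, and a learned gate bounds that otherwise unbounded
    expansion (illustrative, not the paper's exact GRCNN layer)."""

    def __init__(self, channels: int, steps: int = 3):
        super().__init__()
        self.steps = steps
        self.feedforward = nn.Conv2d(channels, channels, 3, padding=1)
        self.recurrent = nn.Conv2d(channels, channels, 3, padding=1)
        self.gate = nn.Conv2d(2 * channels, channels, 1)  # 1x1 gating conv

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.feedforward(x))
        for _ in range(self.steps):
            update = torch.relu(self.recurrent(h))
            g = torch.sigmoid(self.gate(torch.cat([x, h], dim=1)))
            h = g * update + (1 - g) * h  # gate modulates the expansion
        return h

layer = GatedRecurrentConv(channels=64, steps=3)
out = layer(torch.randn(1, 64, 32, 32))  # shape preserved: (1, 64, 32, 32)
```

More steps mean a larger possible receptive field, but where the gate saturates near zero the state stops absorbing new spatial context, which is the adaptive-receptive-field behavior the abstract refers to.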
Related papers
- The Dynamic Net Architecture: Learning Robust and Holistic Visual Representations Through Self-Organizing Networks [3.9848584845601014]
We present a novel intelligent-system architecture called "Dynamic Net Architecture" (DNA).
DNA relies on recurrence-stabilized networks, and we discuss it in application to vision.
arXiv Detail & Related papers (2024-07-08T06:22:10Z)
- Deep Continuous Networks [21.849285945717632]
We propose deep continuous networks (DCNs), which combine spatially continuous filters, with the continuous depth framework of neural ODEs.
This allows us to learn the spatial support of the filters during training, as well as model the continuous evolution of feature maps, linking DCNs closely to biological models.
We show that DCNs are versatile and highly applicable to standard image classification and reconstruction problems, where they improve parameter and data efficiency, and allow for meta-parametrization.
arXiv Detail & Related papers (2024-02-02T16:50:18Z)
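The spatially continuous filter idea can be sketched by parameterizing a kernel as a continuous function sampled on a grid, here a Gaussian with a learnable per-channel scale so the spatial support is learned during training. This is an illustrative reduction; the actual DCN parameterization and its neural-ODE depth component are richer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousGaussianConv(nn.Module):
    """Depthwise convolution whose kernel is a Gaussian evaluated on a
    coordinate grid; the learnable scale controls spatial support."""

    def __init__(self, channels: int, ksize: int = 7):
        super().__init__()
        self.channels = channels
        self.log_sigma = nn.Parameter(torch.zeros(channels))  # per-channel scale
        r = ksize // 2
        coords = torch.arange(-r, r + 1, dtype=torch.float32)
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        self.register_buffer("r2", xx ** 2 + yy ** 2)  # squared distances

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sigma = self.log_sigma.exp().view(-1, 1, 1)          # (C, 1, 1)
        kernel = torch.exp(-self.r2 / (2 * sigma ** 2))      # (C, k, k)
        kernel = kernel / kernel.sum(dim=(-2, -1), keepdim=True)
        kernel = kernel.unsqueeze(1)                         # (C, 1, k, k)
        return F.conv2d(x, kernel, padding=kernel.shape[-1] // 2,
                        groups=self.channels)

out = ContinuousGaussianConv(channels=8)(torch.randn(1, 8, 32, 32))
```

For the continuous-depth half, such layers would sit inside an ODE solver (e.g. the torchdiffeq package), which is omitted here.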
- Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z)
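One plausible way to probe for such kernel patterns is sketched below, assuming a pretrained MobileNetV2 as a stand-in DS-CNN and k-means as the clustering method (both illustrative choices, not the paper's exact protocol).

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

# Collect every trained depthwise 3x3 kernel (downloads pretrained weights).
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
kernels = []
for m in model.modules():
    if (isinstance(m, torch.nn.Conv2d) and m.groups == m.in_channels
            and m.kernel_size == (3, 3)):
        kernels.append(m.weight.detach().reshape(-1, 9))
kernels = torch.cat(kernels).numpy()
kernels /= np.linalg.norm(kernels, axis=1, keepdims=True) + 1e-8

# Cluster the normalized kernels; centroids tend to look like oriented
# edge/blob patterns rather than arbitrary noise.
km = KMeans(n_clusters=8, n_init=10).fit(kernels)
print(km.cluster_centers_.reshape(8, 3, 3).round(2))
```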
- Singular Value Representation: A New Graph Perspective On Neural Networks [0.0]
We introduce the Singular Value Representation (SVR), a new method to represent the internal state of neural networks.
We derive a precise statistical framework to discriminate meaningful connections between spectral neurons for fully connected and convolutional layers.
arXiv Detail & Related papers (2023-02-16T10:10:31Z)
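A rough sketch of the SVD-based view: decompose each layer's weight matrix, treat singular triplets as "spectral neurons", and score connections between consecutive layers by the alignment of their singular directions. The alignment score and cutoff below are illustrative; the paper's statistical framework is more precise.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(256, 128))   # layer 1 weights (out_dim, in_dim)
W2 = rng.normal(size=(64, 256))    # layer 2 weights

U1, s1, Vt1 = np.linalg.svd(W1, full_matrices=False)
U2, s2, Vt2 = np.linalg.svd(W2, full_matrices=False)

# Entry (i, j): alignment between spectral neuron j of layer 1 (output
# direction U1[:, j]) and spectral neuron i of layer 2 (input direction
# Vt2[i, :]); large values mark candidate connections.
alignment = np.abs(Vt2 @ U1)                         # (64, 128)
strong = np.argwhere(alignment > 3 / np.sqrt(256))   # illustrative cutoff
print(f"{len(strong)} candidate spectral connections")
```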
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream in visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
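Dimensionality expansion/reduction of this kind is commonly quantified with the participation ratio of the activation covariance spectrum; the sketch below uses that measure as an illustrative choice, not necessarily the paper's exact estimator.

```python
import numpy as np

def participation_ratio(acts: np.ndarray) -> float:
    """Effective dimensionality of a (samples, features) activation
    matrix: PR = (sum_i l_i)**2 / sum_i l_i**2 over covariance
    eigenvalues l_i. High PR = expanded, low PR = compressed."""
    acts = acts - acts.mean(axis=0, keepdims=True)
    lam = np.clip(np.linalg.eigvalsh(np.cov(acts, rowvar=False)), 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

# Illustrative comparison across synthetic "layers" with different
# eigenvalue spreads; with real data, acts would be recorded responses
# or network activations to the same stimuli.
rng = np.random.default_rng(0)
for name, width in [("early", 64), ("mid", 256), ("late", 128)]:
    acts = rng.normal(size=(500, width)) * rng.uniform(0.1, 1.0, size=width)
    print(name, round(participation_ratio(acts), 1))
```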
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Online neural connectivity estimation with ensemble stimulation [5.156484100374058]
We propose a method based on noisy group testing that drastically increases the efficiency of connectivity estimation in sparse networks.
We show that it is possible to recover binarized network connectivity with a number of tests that grows only logarithmically with population size.
We also demonstrate the feasibility of inferring connectivity for networks of up to tens of thousands of neurons online.
arXiv Detail & Related papers (2020-07-27T23:47:03Z)
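The logarithmic scaling is easiest to see in the noiseless case. Below is a sketch of random-pool group testing with COMP decoding for a single postsynaptic neuron; the pool density, test count, and decoder are illustrative, and the paper's noisy setting requires a more careful treatment.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 5                                # candidates, true connections
truth = np.zeros(n, dtype=bool)
truth[rng.choice(n, size=k, replace=False)] = True

tests = int(6 * k * np.log(n))                # grows only like log(n)
pools = rng.random((tests, n)) < 0.1          # random stimulation ensembles
responses = (pools & truth).any(axis=1)       # fires iff a true input is pooled

# COMP decoding: a candidate seen in any negative test cannot be
# connected; whatever is never ruled out is declared connected.
ruled_out = pools[~responses].any(axis=0)
estimate = ~ruled_out
print("hits:", int((estimate & truth).sum()), "of", k,
      "| false positives:", int((estimate & ~truth).sum()))
```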
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
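A minimal sketch of the nested-flow idea: the hidden state follows one discretized flow while its weight matrix follows a second flow kept exactly on O(d) by multiplying with matrix exponentials of a skew-symmetric generator. The fixed generator and Euler steps are illustrative simplifications; the paper learns the parameter flow.

```python
import torch

d, steps, dt = 16, 20, 0.1
A = torch.randn(d, d)
skew = A - A.T                            # skew-symmetric generator
W = torch.eye(d)                          # W(0) on the orthogonal group
x = torch.randn(d)

for _ in range(steps):
    W = W @ torch.matrix_exp(dt * skew)   # exp of skew is orthogonal
    x = x + dt * torch.tanh(W @ x)        # Euler step of the main flow

# W stays orthogonal throughout, which is what keeps gradients from
# vanishing or exploding in the trained version.
print(torch.allclose(W @ W.T, torch.eye(d), atol=1e-5))
```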
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate substantial improvements from our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
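A minimal sketch of the side-branch idea: fork a small classifier from an intermediate layer and train it on the labels plus a mimicking (KL) term toward the main head. The branch design and loss weights are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchedNet(nn.Module):
    def __init__(self, classes: int = 10):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.trunk = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, classes)
        self.branch = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(32, classes))  # forked early

    def forward(self, x):
        mid = self.stem(x)                       # intermediate features
        return self.head(self.trunk(mid)), self.branch(mid)

net = BranchedNet()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
main_logits, branch_logits = net(x)
loss = (F.cross_entropy(main_logits, y)
        + 0.3 * F.cross_entropy(branch_logits, y)          # side supervision
        + 0.3 * F.kl_div(F.log_softmax(branch_logits, 1),  # mimic main head
                         F.softmax(main_logits.detach(), 1),
                         reduction="batchmean"))
loss.backward()
```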