Explaining V1 Properties with a Biologically Constrained Deep Learning
Architecture
- URL: http://arxiv.org/abs/2305.11275v2
- Date: Thu, 25 May 2023 20:40:28 GMT
- Title: Explaining V1 Properties with a Biologically Constrained Deep Learning
Architecture
- Authors: Galen Pogoncheff, Jacob Granley, Michael Beyeler
- Abstract summary: Convolutional neural networks (CNNs) have emerged as promising models of the ventral visual stream, despite their lack of biological specificity.
We show drastic improvements in model-V1 alignment driven by the integration of architectural components that simulate center-surround antagonism, local receptive fields, tuned normalization, and cortical magnification.
Our results highlight an important advancement in the field of NeuroAI, as we systematically establish a set of architectural components that contribute to unprecedented explanation of V1.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks (CNNs) have recently emerged as promising
models of the ventral visual stream, despite their lack of biological
specificity. While current state-of-the-art models of the primary visual cortex
(V1) have surfaced from training with adversarial examples and extensively
augmented data, these models are still unable to explain key neural properties
observed in V1 that arise from biological circuitry. To address this gap, we
systematically incorporated neuroscience-derived architectural components into
CNNs to identify a set of mechanisms and architectures that comprehensively
explain neural activity in V1. We show drastic improvements in model-V1
alignment driven by the integration of architectural components that simulate
center-surround antagonism, local receptive fields, tuned normalization, and
cortical magnification. Upon enhancing task-driven CNNs with a collection of
these specialized components, we uncover models with latent representations
that yield state-of-the-art explanation of V1 neural activity and tuning
properties. Our results highlight an important advancement in the field of
NeuroAI, as we systematically establish a set of architectural components that
contribute to unprecedented explanation of V1. The neuroscience insights that
could be gleaned from increasingly accurate in-silico models of the brain have
the potential to greatly advance the fields of both neuroscience and artificial
intelligence.
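To make the listed components concrete, below is a minimal sketch of how two of them, center-surround antagonism and tuned (divisive) normalization, could be expressed as CNN layers in PyTorch. The class names, kernel sizes, and parameter values are illustrative assumptions for exposition only, not the authors' implementation.
```python
# Illustrative sketch (PyTorch). Layer names, kernel sizes, and initializations
# are assumptions for exposition; they are not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def dog_kernel(size=7, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians kernel: excitatory center minus inhibitory surround."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    r2 = xx**2 + yy**2
    center = torch.exp(-r2 / (2 * sigma_c**2))
    surround = torch.exp(-r2 / (2 * sigma_s**2))
    return center / center.sum() - surround / surround.sum()


class CenterSurround(nn.Module):
    """Depthwise convolution with a fixed DoG kernel (center-surround antagonism)."""

    def __init__(self, channels, size=7):
        super().__init__()
        k = dog_kernel(size).expand(channels, 1, size, size).clone()
        self.register_buffer("weight", k)  # fixed, not trained
        self.channels = channels
        self.pad = size // 2

    def forward(self, x):
        return F.conv2d(x, self.weight, padding=self.pad, groups=self.channels)


class TunedNormalization(nn.Module):
    """Divisive normalization in which each channel is normalized by a learned,
    channel-specific mixture of the squared activity of all channels."""

    def __init__(self, channels, eps=1e-6):
        super().__init__()
        # Non-negative mixing weights; near-identity init keeps early training stable.
        self.mix = nn.Parameter(torch.eye(channels) * 0.1 + 0.01)
        self.eps = eps

    def forward(self, x):
        energy = x.pow(2)                                        # (N, C, H, W)
        pool = torch.einsum("oc,nchw->nohw", self.mix.relu(), energy)
        return x / torch.sqrt(self.eps + pool)


if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32)
    y = TunedNormalization(16)(CenterSurround(16)(x))
    print(y.shape)  # torch.Size([1, 16, 32, 32])
```
Local receptive fields and cortical magnification would require further mechanisms (for example, locally connected layers and a foveated, log-polar-style resampling of the input), which are omitted from this sketch.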
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Neural Dynamics Model of Visual Decision-Making: Learning from Human Experts [28.340344705437758]
We implement a comprehensive visual decision-making model that spans from visual input to behavioral output.
Our model aligns closely with human behavior and reflects neural activities in primates.
A neuroimaging-informed fine-tuning approach was introduced and applied to the model, leading to performance improvements.
arXiv Detail & Related papers (2024-09-04T02:38:52Z)
- The Dynamic Net Architecture: Learning Robust and Holistic Visual Representations Through Self-Organizing Networks [3.9848584845601014]
We present a novel intelligent-system architecture called "Dynamic Net Architecture" (DNA)
DNA relies on recurrence-stabilized networks and is discussed in application to vision.
arXiv Detail & Related papers (2024-07-08T06:22:10Z)
- Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z)
- Neural Echos: Depthwise Convolutional Filters Replicate Biological Receptive Fields [56.69755544814834]
We present evidence suggesting that depthwise convolutional kernels are effectively replicating the biological receptive fields observed in the mammalian retina.
We propose a scheme that draws inspiration from the biological receptive fields.
arXiv Detail & Related papers (2024-01-18T18:06:22Z)
- Astromorphic Self-Repair of Neuromorphic Hardware Systems [0.8958368012475248]
This paper explores the self-repair role of glial cells, in particular astrocytes.
Hardware-software co-design analysis reveals that bio-morphic astrocytic regulation has the potential to self-repair realistic hardware faults.
arXiv Detail & Related papers (2022-09-15T16:23:45Z)
- Top-down inference in an early visual cortex inspired hierarchical Variational Autoencoder [0.0]
We exploit advances in Variational Autoencoders to investigate the early visual cortex with sparse coding hierarchical VAEs trained on natural images.
We show that representations similar to those found in the primary and secondary visual cortices naturally emerge under mild inductive biases.
We show that a neuroscience-inspired choice of the recognition model is critical for two signatures of computations with generative models.
arXiv Detail & Related papers (2022-06-01T12:21:58Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Modelling Neuronal Behaviour with Time Series Regression: Recurrent Neural Networks on C. Elegans Data [0.0]
We show how the nervous system of C. Elegans can be modelled and simulated with data-driven models using different neural network architectures.
We show that GRU models with a hidden layer size of 4 units reproduce the system's response to very different stimuli with high accuracy.
arXiv Detail & Related papers (2021-07-01T10:39:30Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- Emergence of Lie symmetries in functional architectures learned by CNNs [63.69764116066748]
We study the spontaneous development of symmetries in the early layers of a Convolutional Neural Network (CNN) during learning on natural images.
Our architecture is built to mimic the early stages of biological visual systems.
arXiv Detail & Related papers (2021-04-17T13:23:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.