Probing neural representations of scene perception in a hippocampally
dependent task using artificial neural networks
- URL: http://arxiv.org/abs/2303.06367v1
- Date: Sat, 11 Mar 2023 10:26:25 GMT
- Title: Probing neural representations of scene perception in a hippocampally
dependent task using artificial neural networks
- Authors: Markus Frey, Christian F. Doeller, Caswell Barry
- Abstract summary: Deep artificial neural networks (DNNs) trained through backpropagation provide effective models of the mammalian visual system.
We describe a novel scene perception benchmark inspired by a hippocampally dependent task.
Using a network architecture inspired by the connectivity between temporal lobe structures and the hippocampus, we demonstrate that DNNs trained using a triplet loss can learn this task.
- Score: 1.0312968200748116
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Deep artificial neural networks (DNNs) trained through backpropagation
provide effective models of the mammalian visual system, accurately capturing
the hierarchy of neural responses through primary visual cortex to inferior
temporal cortex (IT). However, the ability of these networks to explain
representations in higher cortical areas remains limited and is considerably
less well researched. For example, DNNs have been less successful as a model of
the egocentric to allocentric transformation embodied by circuits in
retrosplenial and posterior parietal cortex. We describe a novel scene
perception benchmark inspired by a hippocampally dependent task, designed to
probe the ability of DNNs to transform scenes viewed from different egocentric
perspectives. Using a network architecture inspired by the connectivity between
temporal lobe structures and the hippocampus, we demonstrate that DNNs trained
using a triplet loss can learn this task. Moreover, by enforcing a factorized
latent space, we can split information propagation into "what" and "where"
pathways, which we use to reconstruct the input. This allows us to surpass the
state of the art for unsupervised object segmentation on the CATER and
MOVi-A,B,C benchmarks.
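The triplet loss mentioned in the abstract can be illustrated with a minimal sketch. This is a generic formulation (anchor pulled toward a positive example, pushed from a negative by a margin), not the paper's exact implementation; the function name and margin value are assumptions for illustration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Generic triplet loss: the anchor embedding should be closer to the
    positive (e.g. another view of the same scene) than to the negative
    (a different scene) by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Embeddings of two views of one scene (anchor, positive) and a different scene (negative).
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([2.0, 0.0])
print(triplet_loss(a, p, n))  # d_pos=0.1, d_neg=2.0 -> max(0, 0.1-2.0+1.0) = 0.0
```

Training on such triplets encourages embeddings in which different egocentric views of the same scene map to nearby points.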
Related papers
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Spiking representation learning for associative memories [0.0]
We introduce a novel artificial spiking neural network (SNN) that performs unsupervised representation learning and associative memory operations.
The architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations and recurrent projections for forming associative memories.
arXiv Detail & Related papers (2024-06-05T08:30:11Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models of their functional analogue in the brain, the ventral stream of visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
- Efficient visual object representation using a biologically plausible spike-latency code and winner-take-all inhibition [0.0]
Spiking neural networks (SNNs) have the potential to improve the efficiency and biological plausibility of object recognition systems.
We present a SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to efficiently represent visual stimuli.
We demonstrate that a network of 150 spiking neurons can efficiently represent objects with as little as 40 spikes.
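The two mechanisms named in this summary can be sketched briefly. The inverse-intensity latency rule and the function names below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def spike_latency_code(intensities, t_max=10.0):
    """Spike-latency coding: stronger stimuli evoke earlier spikes.
    Here latency is simply inversely proportional to intensity;
    zero-intensity inputs never spike (infinite latency)."""
    intensities = np.asarray(intensities, dtype=float)
    return np.where(intensities > 0, t_max / np.maximum(intensities, 1e-9), np.inf)

def winner_take_all(latencies):
    """WTA inhibition: the earliest-spiking neuron wins and suppresses
    all others, so only a single spike is emitted per competition."""
    latencies = np.asarray(latencies, dtype=float)
    winner = int(np.argmin(latencies))
    spikes = np.zeros_like(latencies)
    spikes[winner] = 1.0
    return winner, spikes

# Three neurons receive different stimulus intensities; the most strongly
# driven neuron (index 1) spikes first and suppresses the other two.
winner, spikes = winner_take_all(spike_latency_code([0.5, 2.0, 1.0]))
print(winner, spikes)  # 1 [0. 1. 0.]
```

Because only the earliest spikes carry information and WTA silences the rest, a stimulus can be represented with very few spikes, which is the efficiency argument the summary makes.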
arXiv Detail & Related papers (2022-05-20T17:48:02Z)
- Improving Neural Predictivity in the Visual Cortex with Gated Recurrent Connections [0.0]
We aim to shift the focus toward architectures that incorporate lateral recurrent connections, a ubiquitous feature of the ventral visual stream, to devise adaptive receptive fields.
In order to increase the robustness of our approach and the biological fidelity of the activations, we employ specific data augmentation techniques.
arXiv Detail & Related papers (2022-03-22T17:27:22Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
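The "swap" idea behind a disentangled latent space can be sketched as exchanging one block of latent coordinates between two views of the same trial. The function name, the fixed content/style split, and the dimensions below are hypothetical illustrations, not the Swap-VAE implementation.

```python
import numpy as np

def swap_content(z_a, z_b, content_dims):
    """Exchange the first `content_dims` (content) coordinates of two
    latent vectors while leaving the remaining (style) coordinates fixed.
    If both vectors encode the same underlying behavior, the swap should
    not change what each latent decodes to."""
    z_a, z_b = np.asarray(z_a, dtype=float), np.asarray(z_b, dtype=float)
    new_a = np.concatenate([z_b[:content_dims], z_a[content_dims:]])
    new_b = np.concatenate([z_a[:content_dims], z_b[content_dims:]])
    return new_a, new_b

# Two latent codes for two views of the same trial: content halves are
# swapped, style halves stay with their original view.
za, zb = swap_content([1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0], content_dims=2)
print(za.tolist(), zb.tolist())  # [5.0, 6.0, 3.0, 4.0] [1.0, 2.0, 7.0, 8.0]
```

Training a VAE so that reconstructions survive this swap is one way to force the content block to carry the behaviorally relevant factors.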
arXiv Detail & Related papers (2021-11-03T16:39:43Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer while using fewer parameters, and transfer to new tasks in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
We propose a new framework called Ventral-Dorsal Networks (VDNets).
Inspired by the structure of the human visual system, VDNets integrate a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.