Deep recurrent spiking neural networks capture both static and dynamic
representations of the visual cortex under movie stimuli
- URL: http://arxiv.org/abs/2306.01354v1
- Date: Fri, 2 Jun 2023 08:25:58 GMT
- Title: Deep recurrent spiking neural networks capture both static and dynamic
representations of the visual cortex under movie stimuli
- Authors: Liwei Huang, Zhengyu Ma, Huihui Zhou, Yonghong Tian
- Abstract summary: In the real world, visual stimuli received by the biological visual system are predominantly dynamic rather than static.
In this work, we apply deep recurrent SNNs to model the mouse visual cortex under movie stimuli.
We establish that these networks are competent to capture both static and dynamic representations.
- Score: 19.166875407309263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the real world, visual stimuli received by the biological visual system
are predominantly dynamic rather than static. A better understanding of how the
visual cortex represents movie stimuli could provide deeper insight into the
information processing mechanisms of the visual system. Although some progress
has been made in modeling neural responses to natural movies with deep neural
networks, the visual representations of static and dynamic information under
such time-series visual stimuli remain to be further explored. In this work,
considering abundant recurrent connections in the mouse visual system, we
design a recurrent module based on the hierarchy of the mouse cortex and add it
into Deep Spiking Neural Networks, which have been demonstrated to be a more
compelling computational model for the visual cortex. Using Time-Series
Representational Similarity Analysis, we measure the representational
similarity between networks and mouse cortical regions under natural movie
stimuli. Subsequently, we conduct a comparison of the representational
similarity across recurrent/feedforward networks and image/video training
tasks. Trained on the video action recognition task, the recurrent SNN achieves
the highest representational similarity, significantly outperforming the
feedforward SNN trained on the same task by 15% and the recurrent SNN trained
on the image classification task by 8%. We investigate how static and dynamic
representations of SNNs influence the similarity, as a way to explain the
importance of these two forms of representations in biological neural coding.
Taken together, our work is the first to apply deep recurrent SNNs to model the
mouse visual cortex under movie stimuli and we establish that these networks
are competent to capture both static and dynamic representations and make
contributions to understanding the movie information processing mechanisms of
the visual cortex.
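The abstract's core measurement is representational similarity between network layers and cortical regions. The paper's Time-Series RSA variant is not specified here, but the standard RSA recipe it builds on can be sketched as follows: build a representational dissimilarity matrix (RDM) over stimuli (here, movie time points) for each system, then correlate the RDMs' upper triangles. The function names and the toy data below are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix.

    `responses` is (n_stimuli, n_units); entry (i, j) is
    1 - Pearson correlation between the response patterns
    to stimuli i and j.
    """
    return 1.0 - np.corrcoef(responses)

def representational_similarity(resp_a, resp_b):
    """Spearman correlation between the upper triangles of two RDMs,
    a common RSA score between a model layer and a brain region."""
    iu = np.triu_indices(resp_a.shape[0], k=1)
    rho, _ = spearmanr(rdm(resp_a)[iu], rdm(resp_b)[iu])
    return rho

# Toy example: 20 movie time points, 50 model units vs. 30 neurons.
rng = np.random.default_rng(0)
model_resp = rng.standard_normal((20, 50))
brain_resp = rng.standard_normal((20, 30))
score = representational_similarity(model_resp, brain_resp)
```

A system compared against itself scores 1.0, since its two RDMs are identical; unrelated random responses score near 0. The paper's percentages (15%, 8%) compare such similarity scores across network variants.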
Related papers
- A spatiotemporal style transfer algorithm for dynamic visual stimulus
generation [0.0]
We introduce the Spatiotemporal Style Transfer (STST) algorithm, a dynamic visual stimulus generation framework.
It is based on a two-stream deep neural network model that factorizes spatial and temporal features to generate dynamic visual stimuli.
We show that our algorithm enables the generation of model metamers, dynamic stimuli whose layer activations are matched to those of natural videos.
arXiv Detail & Related papers (2024-03-07T23:07:46Z) - Brain-like representational straightening of natural movies in robust
feedforward neural networks [2.8749107965043286]
Representational straightening refers to a decrease in curvature of visual feature representations of a sequence of frames taken from natural movies.
We show that robustness to noise in the input image can produce representational straightening in feedforward neural networks.
arXiv Detail & Related papers (2023-08-26T13:04:36Z) - Controllable Mind Visual Diffusion Model [58.83896307930354]
Brain signal visualization has emerged as an active research area, serving as a critical interface between the human visual system and computer vision models.
We propose a novel approach, referred to as the Controllable Mind Visual Diffusion Model (CMVDM).
CMVDM extracts semantic and silhouette information from fMRI data using attribute alignment and assistant networks.
We then leverage a control model to fully exploit the extracted information for image synthesis, resulting in generated images that closely resemble the visual stimuli in terms of semantics and silhouette.
arXiv Detail & Related papers (2023-05-17T11:36:40Z) - Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of
Spiking Neural Systems [73.18020682258606]
We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel.
We propose an event-based generalization of forward-forward learning, which we call contrastive-signal-dependent plasticity (CSDP).
Our experimental results on several pattern datasets demonstrate that the CSDP process works well for training a dynamic recurrent spiking network capable of both classification and reconstruction.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Prune and distill: similar reformatting of image information along rat
visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream of the visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Deep Auto-encoder with Neural Response [8.797970797884023]
We propose a hybrid model, called deep auto-encoder with the neural response (DAE-NR)
The DAE-NR incorporates the information from the visual cortex into ANNs to achieve better image reconstruction and higher neural representation similarity between biological and artificial neurons.
Our experiments demonstrate that, with joint learning, DAE-NRs can (i) improve the performance of image reconstruction and (ii) increase the representational similarity between biological neurons and artificial neurons.
arXiv Detail & Related papers (2021-11-30T11:44:17Z) - Bio-inspired visual attention for silicon retinas based on spiking
neural networks applied to pattern classification [0.0]
Spiking Neural Networks (SNNs) represent an asynchronous type of artificial neural network closer to biology than traditional artificial networks.
We introduce a case study of event videos classification with SNNs, using a biology-grounded low-level computational attention mechanism.
arXiv Detail & Related papers (2021-05-31T07:34:13Z) - Continuous Emotion Recognition with Spatiotemporal Convolutional Neural
Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z) - Recurrent Neural Network Learning of Performance and Intrinsic
Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.