Deep recurrent spiking neural networks capture both static and dynamic
representations of the visual cortex under movie stimuli
- URL: http://arxiv.org/abs/2306.01354v1
- Date: Fri, 2 Jun 2023 08:25:58 GMT
- Title: Deep recurrent spiking neural networks capture both static and dynamic
representations of the visual cortex under movie stimuli
- Authors: Liwei Huang, Zhengyu Ma, Huihui Zhou, Yonghong Tian
- Abstract summary: In the real world, visual stimuli received by the biological visual system are predominantly dynamic rather than static.
In this work, we apply deep recurrent SNNs to model the mouse visual cortex under movie stimuli.
We establish that these networks are competent to capture both static and dynamic representations.
- Score: 19.166875407309263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the real world, visual stimuli received by the biological visual system
are predominantly dynamic rather than static. A better understanding of how the
visual cortex represents movie stimuli could provide deeper insight into the
information processing mechanisms of the visual system. Although some progress
has been made in modeling neural responses to natural movies with deep neural
networks, the visual representations of static and dynamic information under
such time-series visual stimuli remain to be further explored. In this work,
considering abundant recurrent connections in the mouse visual system, we
design a recurrent module based on the hierarchy of the mouse cortex and add it
into Deep Spiking Neural Networks, which have been demonstrated to be a more
compelling computational model for the visual cortex. Using Time-Series
Representational Similarity Analysis, we measure the representational
similarity between networks and mouse cortical regions under natural movie
stimuli. Subsequently, we conduct a comparison of the representational
similarity across recurrent/feedforward networks and image/video training
tasks. Trained on the video action recognition task, the recurrent SNN achieves
the highest representational similarity, significantly outperforming the
feedforward SNN trained on the same task by 15% and the recurrent SNN trained
on the image classification task by 8%. We investigate how static and dynamic
representations of SNNs influence the similarity, as a way to explain the
importance of these two forms of representations in biological neural coding.
Taken together, our work is the first to apply deep recurrent SNNs to model the
mouse visual cortex under movie stimuli. We establish that these networks can
capture both static and dynamic representations, contributing to an
understanding of the movie information processing mechanisms of the visual
cortex.
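As a concrete illustration of the Time-Series Representational Similarity Analysis described above, here is a minimal sketch, assuming model features and neural responses have already been binned into matching movie time steps. The array shapes, the correlation-distance RDM, and the Spearman comparison are common RSA choices, not necessarily the paper's exact protocol:

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson r between
    responses at every pair of time points.
    features: (n_timepoints, n_units)"""
    return 1.0 - np.corrcoef(features)

def ts_rsa(model_feats, neural_feats):
    """Compare model and cortex RDMs over the same movie timeline.
    Both inputs: (n_timepoints, n_units); unit counts may differ.
    Returns the Spearman correlation of upper-triangular RDM entries."""
    rdm_model = rdm(model_feats)
    rdm_brain = rdm(neural_feats)
    iu = np.triu_indices_from(rdm_model, k=1)
    rho, _ = spearmanr(rdm_model[iu], rdm_brain[iu])
    return rho

# Hypothetical shapes: 900 movie time bins, 512 model units, 80 neurons.
rng = np.random.default_rng(0)
print(ts_rsa(rng.normal(size=(900, 512)), rng.normal(size=(900, 80))))
```

And a toy recurrent leaky integrate-and-fire layer of the general flavour the work builds on (all constants and shapes are invented; the paper's actual recurrent module follows the mouse cortical hierarchy):

```python
import numpy as np

def run_recurrent_lif(inputs, w_in, w_rec, tau=2.0, v_th=1.0):
    """Leaky integrate-and-fire layer with recurrent spike feedback.
    inputs: (n_steps, n_in); returns spike trains (n_steps, n_units)."""
    n_units = w_in.shape[1]
    v = np.zeros(n_units)
    spikes = np.zeros(n_units)
    out = []
    for x in inputs:
        # Leak, feedforward drive, and recurrent drive from the
        # previous step's spikes.
        v = v * (1 - 1 / tau) + x @ w_in + spikes @ w_rec
        spikes = (v >= v_th).astype(float)
        v = np.where(spikes > 0, 0.0, v)  # hard reset after a spike
        out.append(spikes)
    return np.stack(out)

rng = np.random.default_rng(0)
x = rng.uniform(0, 0.5, size=(100, 16))
s = run_recurrent_lif(x, rng.normal(0, 0.3, (16, 32)),
                      rng.normal(0, 0.1, (32, 32)))
print("mean firing rate:", s.mean())
```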
Related papers
- Mice to Machines: Neural Representations from Visual Cortex for Domain Generalization [0.0]
We investigate the functional alignment between the mouse visual cortex and deep learning models for object classification tasks.
Our work proposes a novel framework for comparing the functional architecture of the mouse visual cortex with deep learning models.
Our findings carry broad implications for the development of advanced AI models that draw inspiration from the mouse visual cortex.
arXiv Detail & Related papers (2025-05-11T07:37:37Z)
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
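A toy, rate-based ring attractor conveys the "bump of activity" idea (that paper's model is spiking and allostatically controlled; every constant here is invented for illustration):

```python
import numpy as np

# Toy ring attractor: a localized "bump" of activity persists after a
# brief cue, and its position on the ring can stand in for a number.
N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
W = np.cos(theta[:, None] - theta[None, :])  # cosine recurrent kernel

r = np.zeros(N)
for step in range(200):
    cue = np.exp(np.cos(theta - np.pi) * 5) if step < 20 else 0.0
    r = np.maximum(W @ r + cue, 0.0)   # recurrent drive, rectified
    r /= r.sum() + 1e-12               # normalization bounds activity

print("bump centred at:", theta[np.argmax(r)])  # ~pi, long after the cue
```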
arXiv Detail & Related papers (2025-03-20T12:28:08Z)
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
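For context on what such oscillatory units compute, here is a minimal classical Kuramoto network; AKOrN generalizes this phase-coupling update (network size, coupling, and frequencies are invented):

```python
import numpy as np

# Minimal Kuramoto network: each unit carries a phase theta_i pulled
# toward its neighbours' phases rather than crossing a threshold:
#   d theta_i / dt = omega_i + (K / N) * sum_j sin(theta_j - theta_i)
rng = np.random.default_rng(0)
N, K, dt = 64, 1.5, 0.01
omega = rng.normal(0.0, 0.5, N)       # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)  # initial phases

for _ in range(5000):
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)

# Order parameter |mean(exp(i*theta))| near 1 means phases synchronized.
print("synchrony:", np.abs(np.exp(1j * theta).mean()))
```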
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Time-Dependent VAE for Building Latent Representations from Visual Neural Activity with Complex Dynamics [25.454851828755054]
TiDeSPL-VAE can effectively analyze complex visual neural activity and model temporal relationships in a natural way.
Results show that our model not only yields the best decoding performance on naturalistic scenes/movies but also extracts explicit neural dynamics.
arXiv Detail & Related papers (2024-08-15T03:27:23Z)
- Aligning Neuronal Coding of Dynamic Visual Scenes with Foundation Vision Models [2.790870674964473]
We propose Vi-ST, a spatiotemporal convolutional neural network fed with a self-supervised Vision Transformer (ViT).
Our proposed Vi-ST demonstrates a novel modeling framework for neuronal coding of dynamic visual scenes in the brain.
arXiv Detail & Related papers (2024-07-15T14:06:13Z)
- The Dynamic Net Architecture: Learning Robust and Holistic Visual Representations Through Self-Organizing Networks [3.9848584845601014]
We present a novel intelligent-system architecture called "Dynamic Net Architecture" (DNA)
DNA relies on recurrence-stabilized networks, which we discuss in application to vision.
arXiv Detail & Related papers (2024-07-08T06:22:10Z)
- Brain-like representational straightening of natural movies in robust feedforward neural networks [2.8749107965043286]
Representational straightening refers to a decrease in curvature of visual feature representations of a sequence of frames taken from natural movies.
We show robustness to noise in the input image can produce representational straightening in feedforward neural networks.
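Representational curvature can be made concrete as the mean turning angle between successive steps of a frame-by-frame feature trajectory; a small sketch under that standard definition (the data here is a random stand-in for real frame features):

```python
import numpy as np

def mean_curvature(traj):
    """Average turning angle (radians) along a representation trajectory.
    traj: (n_frames, n_features) -- one feature vector per movie frame.
    Straighter trajectories give angles closer to 0."""
    diffs = np.diff(traj, axis=0)                          # successive steps
    diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)  # unit vectors
    cos_angles = np.sum(diffs[:-1] * diffs[1:], axis=1)
    return np.arccos(np.clip(cos_angles, -1.0, 1.0)).mean()

# Hypothetical frame sequence: 11 frames, 1000-dimensional features.
rng = np.random.default_rng(0)
frames = rng.normal(size=(11, 1000)).cumsum(axis=0)
print("curvature (radians):", mean_curvature(frames))
```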
arXiv Detail & Related papers (2023-08-26T13:04:36Z)
- Controllable Mind Visual Diffusion Model [58.83896307930354]
Brain signal visualization has emerged as an active research area, serving as a critical interface between the human visual system and computer vision models.
We propose a novel approach, referred to as Controllable Mind Visual Diffusion Model (CMVDM).
CMVDM extracts semantic and silhouette information from fMRI data using attribute alignment and assistant networks.
We then leverage a control model to fully exploit the extracted information for image synthesis, resulting in generated images that closely resemble the visual stimuli in terms of semantics and silhouette.
arXiv Detail & Related papers (2023-05-17T11:36:40Z)
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for its functional analogue in the brain, the ventral stream in visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
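One common, simple way to quantify such dimensionality expansion/reduction is the participation ratio of the representation's covariance eigenvalues; a sketch of that measure (the papers may quantify dimensionality differently):

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of representations X: (n_samples, n_units).
    PR = (sum(lambda))^2 / sum(lambda^2) over covariance eigenvalues:
    ~1 if one direction dominates, ~n_units if variance is spread evenly."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0, None)  # guard tiny negative numerical eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100)) @ rng.normal(size=(100, 100))
print("effective dimensionality:", participation_ratio(X))
```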
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
- Neural Implicit Representations for Physical Parameter Inference from a Single Video [49.766574469284485]
We propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modelling physical phenomena.
Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video.
The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images.
arXiv Detail & Related papers (2022-04-29T11:55:35Z)
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z)
- Bio-inspired visual attention for silicon retinas based on spiking neural networks applied to pattern classification [0.0]
Spiking Neural Networks (SNNs) represent an asynchronous type of artificial neural network closer to biology than traditional artificial networks.
We introduce a case study of event videos classification with SNNs, using a biology-grounded low-level computational attention mechanism.
arXiv Detail & Related papers (2021-05-31T07:34:13Z)
- Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes [70.76742458931935]
We introduce a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion.
Our representation is optimized through a neural network to fit the observed input views.
We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion.
arXiv Detail & Related papers (2020-11-26T01:23:44Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short term-memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
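The inflation step mentioned here has a compact form: tile the pretrained 2D kernel along a new temporal axis and rescale so a temporally constant (static) input yields the same response as the 2D filter; a minimal sketch (shapes illustrative):

```python
import numpy as np

def inflate_2d_kernel(w2d, t):
    """Inflate a pretrained 2D conv kernel into a 3D spatiotemporal one.
    w2d: (c_out, c_in, kh, kw)  ->  (c_out, c_in, t, kh, kw)
    Repeating along time and dividing by t preserves the filter's
    response on a static input."""
    w3d = np.repeat(w2d[:, :, None, :, :], t, axis=2)
    return w3d / t

w2d = np.ones((64, 3, 3, 3))  # stand-in for pretrained 2D weights
w3d = inflate_2d_kernel(w2d, t=5)
# Summing over time recovers the original 2D response.
print(w3d.shape, w3d.sum(axis=2)[0, 0, 0, 0])
```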
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences arising from its use.