Deep Spiking Neural Networks with High Representation Similarity Model Visual Pathways of Macaque and Mouse
- URL: http://arxiv.org/abs/2303.06060v5
- Date: Mon, 22 May 2023 04:03:46 GMT
- Title: Deep Spiking Neural Networks with High Representation Similarity Model Visual Pathways of Macaque and Mouse
- Authors: Liwei Huang, Zhengyu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian
- Abstract summary: Spiking Neural Networks (SNNs) are more biologically plausible models since spiking neurons encode information with time sequences of spikes.
In this study, we model the visual cortex with deep SNNs for the first time, and also with a wide range of state-of-the-art deep CNNs and ViTs for comparison.
Almost all similarity scores of SNNs are higher than those of their CNN counterparts, by an average of 6.6%.
- Score: 17.545204435882816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep artificial neural networks (ANNs) play a major role in modeling the
visual pathways of primates and rodents. However, they greatly simplify the
computational properties of neurons compared to their biological counterparts.
Spiking Neural Networks (SNNs) are more biologically plausible models, since
spiking neurons encode information with time sequences of spikes, just as
biological neurons do. Yet studies that model the visual pathways with deep
SNNs are still lacking. In this study, we model the visual cortex with deep
SNNs for the first time, and also with a wide range of state-of-the-art deep
CNNs and ViTs for comparison. Using three similarity metrics, we conduct
neural representation similarity experiments on three neural datasets
collected from two species under three types of stimuli. Based on extensive
similarity analyses, we further investigate the functional hierarchy and
mechanisms across species. Almost all similarity scores of SNNs are higher
than those of their CNN counterparts, by an average of 6.6%. The depths of the
layers with the highest similarity scores differ little across mouse cortical
regions but vary significantly across macaque regions, suggesting that the
visual processing structure of mice is more regionally homogeneous than that
of macaques. Moreover, the multi-branch structures observed in some of the
most mouse brain-like networks provide computational evidence of parallel
processing streams in mice, and the differing performance in fitting macaque
neural representations under different stimuli reflects the functional
specialization of information processing in macaques. Taken together, our
study demonstrates that SNNs could serve as promising candidates to better
model and explain the functional hierarchy and mechanisms of the visual system.
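For readers unfamiliar with spiking neurons, the sketch below shows a minimal leaky integrate-and-fire (LIF) unit, the kind of neuron commonly used to build deep SNNs. It is an illustrative toy only: the page does not specify the paper's neuron model or hyperparameters, so the update rule, time constant, and threshold here are assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch in NumPy.
# Illustrative toy, not the SNN architecture from the paper; the time
# constant, threshold, and input drive below are arbitrary assumptions.
import numpy as np

def lif_spike_train(input_current, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron over T time steps and return its binary spike train.

    input_current: array of shape (T,), the input drive at each time step.
    tau: membrane time constant controlling how quickly the potential leaks.
    """
    v = v_reset
    spikes = np.zeros_like(input_current)
    for t, x in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward v_reset
        # while accumulating the input current.
        v = v + (x - (v - v_reset)) / tau
        if v >= v_threshold:      # fire once the threshold is crossed ...
            spikes[t] = 1.0
            v = v_reset           # ... then reset the membrane potential
    return spikes

rng = np.random.default_rng(0)
drive = rng.uniform(0.0, 1.5, size=20)   # toy input over 20 time steps
print(lif_spike_train(drive))            # binary spike train, e.g. [0. 1. 0. ...]
```

In a deep SNN, layers of such units take the place of the continuous activations in a CNN, so information is carried by when and how often each unit fires.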
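The abstract's layer-wise analysis rests on scoring how similar a layer's activations are to recorded neural responses and then noting the relative depth of the best-matching layer. The page does not name the paper's three similarity metrics, so the sketch below uses linear centered kernel alignment (CKA) as one common stand-in, with random matrices in place of real model activations and neural recordings.

```python
# Hedged sketch: layer-wise representation similarity with linear CKA.
# CKA is a stand-in metric; the paper's actual three metrics are not listed
# on this page. Activations and neural responses here are random toy data.
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two response matrices of shape (stimuli, features)."""
    X = X - X.mean(axis=0)                       # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2   # ||Y^T X||_F^2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
n_stimuli, n_neurons = 200, 60
neural = rng.normal(size=(n_stimuli, n_neurons))              # stand-in recordings
layers = [rng.normal(size=(n_stimuli, d)) for d in (512, 256, 128, 64)]

scores = [linear_cka(acts, neural) for acts in layers]
best = int(np.argmax(scores))
print(f"best-matching layer {best + 1} of {len(layers)}, "
      f"relative depth {(best + 1) / len(layers):.2f}, score {scores[best]:.3f}")
```

The printed relative depth is the kind of quantity the abstract compares across mouse and macaque cortical regions.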
Related papers
- Category-Selective Neurons in Deep Networks: Comparing Purely Visual and Visual-Language Models [23.309064032922507]
Category-selective regions in the human brain play a crucial role in high-level visual processing.
We investigate whether artificial neural networks (ANNs) exhibit similar category-selective neurons.
Our study provides insights into how ANNs mirror biological vision and how multimodal learning influences category-selective representations.
arXiv Detail & Related papers (2025-02-23T06:15:51Z)
- Digit Recognition using Multimodal Spiking Neural Networks [3.046906600991174]
Spiking neural networks (SNNs) are the third generation of neural networks and are biologically inspired in how they process data.
SNNs are used to process event-based data due to their neuromorphic nature.
arXiv Detail & Related papers (2024-08-31T22:27:40Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z)
- Connecting metrics for shape-texture knowledge in computer vision [1.7785095623975342]
Deep neural networks remain brittle and susceptible to many image changes that do not cause humans to misclassify images.
Part of this different behavior may be explained by the type of features humans and deep neural networks use in vision tasks.
arXiv Detail & Related papers (2023-01-25T14:37:42Z)
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream of the visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Similarity and Matching of Neural Network Representations [0.0]
We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks.
We aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer.
arXiv Detail & Related papers (2021-10-27T17:59:46Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature (a rough illustrative sketch appears after this list).
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
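The Neural Additive Models entry above describes the architecture in one sentence: a linear combination of subnetworks, each attending to a single input feature. The sketch below, referenced from that entry, illustrates such a forward pass in NumPy; the layer sizes, random weights, and helper names are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of a Neural Additive Model forward pass in NumPy.
# Each input feature gets its own tiny subnetwork; outputs are summed with a bias.
# Layer sizes and random weights are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def feature_net(x, w1, b1, w2, b2):
    """One-hidden-layer subnetwork applied to a single scalar feature."""
    h = np.maximum(0.0, x[:, None] * w1 + b1)   # ReLU hidden layer
    return h @ w2 + b2                           # scalar contribution per sample

n_samples, n_features, hidden = 8, 3, 16
X = rng.normal(size=(n_samples, n_features))

# Independent parameters for each feature's subnetwork.
params = [(rng.normal(size=hidden), rng.normal(size=hidden),
           rng.normal(size=hidden), rng.normal()) for _ in range(n_features)]

# NAM output: sum of per-feature contributions plus a global bias term.
bias = 0.0
contributions = [feature_net(X[:, j], *p) for j, p in enumerate(params)]
logits = bias + np.sum(contributions, axis=0)
print(logits.shape)   # (8,) -- one additive prediction per sample
```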