Motif-topology improved Spiking Neural Network for the Cocktail Party
Effect and McGurk Effect
- URL: http://arxiv.org/abs/2211.07641v1
- Date: Sat, 12 Nov 2022 08:23:55 GMT
- Title: Motif-topology improved Spiking Neural Network for the Cocktail Party
Effect and McGurk Effect
- Authors: Shuncheng Jia and Tielin Zhang and Ruichen Zuo and Bo Xu
- Abstract summary: Network architectures and learning principles play key roles in forming complex functions in artificial neural networks (ANNs) and spiking neural networks (SNNs).
Here, we propose a Motif-topology improved SNN (M-SNN) for efficient multi-sensory integration and cognitive phenomenon simulation.
- Score: 5.63840716873115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network architectures and learning principles play key roles in forming
complex functions in artificial neural networks (ANNs) and spiking neural
networks (SNNs). SNNs are considered the new generation of artificial networks,
incorporating more biological features than ANNs, including dynamic spiking
neurons, functionally specified architectures, and efficient learning
paradigms. Network architectures are also considered to embody the function of
the network. Here, we propose a Motif-topology improved SNN (M-SNN) for
efficient multi-sensory integration and cognitive phenomenon simulation. The
simulated cognitive phenomena include the cocktail party effect and the McGurk
effect, which have been widely discussed by researchers. Our M-SNN is
constituted by meta operators called network motifs. The 3-node network motif
topologies are taken from artificial networks pre-learned on spatial or
temporal datasets. In the single-sensory classification task, the results
showed that the accuracy of the M-SNN using network motif topologies was higher
than that of a pure feedforward topology without them. In the multi-sensory
integration task, the performance of the M-SNN using artificial network motifs
was better than that of the state-of-the-art SNN trained with BRP (biologically
plausible reward propagation). Furthermore, the M-SNN could better simulate the
cocktail party effect and the McGurk effect at lower computational cost. We
consider the artificial network motifs a form of prior knowledge that
contributes to the multi-sensory integration of SNNs and provides further
benefits for simulating cognitive phenomena.
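The abstract does not include an implementation; as a rough sketch of the general idea of imposing a 3-node motif topology as a connectivity prior on a spiking layer, the following NumPy snippet uses a generic discrete-time LIF neuron. The specific motif, the names `motif_mask` and `lif_step`, and the dynamics are illustrative assumptions, not the authors' M-SNN.

```python
# Minimal sketch (not the authors' M-SNN): a leaky integrate-and-fire layer whose
# recurrent connectivity is restricted to a tiled 3-node motif adjacency pattern,
# i.e. the motif topology acts as a structural prior on which synapses may exist.
import numpy as np

rng = np.random.default_rng(0)

# One of the thirteen directed 3-node motifs (a feed-forward chain with a shortcut).
motif = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [0, 0, 0]], dtype=float)

n_groups = 4                                       # tile the motif into a 12-neuron layer
motif_mask = np.kron(np.eye(n_groups), motif)      # block-diagonal motif connectivity
weights = rng.normal(0.0, 0.5, motif_mask.shape) * motif_mask  # weights only where allowed

def lif_step(v, spikes_in, w, tau=0.9, v_th=1.0):
    """One discrete-time LIF update: leak, integrate masked input, fire, hard reset."""
    v = tau * v + w.T @ spikes_in
    spikes_out = (v >= v_th).astype(float)
    v = v * (1.0 - spikes_out)
    return v, spikes_out

v = np.zeros(motif_mask.shape[0])
spikes = (rng.random(motif_mask.shape[0]) < 0.3).astype(float)  # random initial spikes
for _ in range(10):
    v, spikes = lif_step(v, spikes, weights)
print("final spike pattern:", spikes.astype(int))
```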
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- SpikingJelly: an open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
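SpikingJelly is an existing open-source library; the snippet below is a minimal usage sketch of building and running a small SNN with it. The module path `spikingjelly.activation_based` and the classes used here follow recent releases and may differ across versions (older releases expose them under `spikingjelly.clock_driven`).

```python
# Minimal SpikingJelly sketch: a two-layer SNN simulated for a few time steps with
# rate-coded outputs. API paths follow the `activation_based` module of recent releases.
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, surrogate, functional

net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),  # spiking hidden layer
    nn.Linear(128, 10),
    neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),  # spiking output layer
)

x = torch.rand(8, 1, 28, 28)     # dummy image batch
T = 4                            # simulation time steps
out = torch.stack([net(x) for _ in range(T)]).mean(dim=0)  # average output spikes over time
functional.reset_net(net)        # clear membrane potentials before the next sample
print(out.shape)                 # torch.Size([8, 10])
```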
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
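To make the notion of heterogeneous neural coding concrete, here is a toy illustration of two common codes, rate coding and latency (time-to-first-spike) coding, applied to the same analogue intensity. It is a generic example, not the hybrid scheme proposed in the paper above.

```python
# Toy illustration of two neural codes for the same analogue intensity in [0, 1]:
# rate coding (spike probability per step) and latency coding (time to first spike).
import numpy as np

rng = np.random.default_rng(1)
T = 20  # time steps in the coding window

def rate_code(intensity, T):
    """Bernoulli spike train whose firing probability per step equals the intensity."""
    return (rng.random(T) < intensity).astype(int)

def latency_code(intensity, T):
    """Single spike; stronger inputs fire earlier (time-to-first-spike code)."""
    t_spike = int(round((1.0 - intensity) * (T - 1)))
    train = np.zeros(T, dtype=int)
    train[t_spike] = 1
    return train

for x in (0.2, 0.8):
    print(f"x={x}  rate:", rate_code(x, T), " latency:", latency_code(x, T))
```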
- Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs).
arXiv Detail & Related papers (2022-05-31T17:02:26Z)
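The intuition behind such mappings can be illustrated numerically: for a simplified integrate-and-fire neuron (no leak, subtractive reset) driven by a constant input, the mean firing rate equals a clipped ReLU of that input. The sketch below shows only that intuition and is not the paper's exact LIF-to-ReLU derivation.

```python
# Rough illustration (not the paper's exact derivation): the mean firing rate of a
# non-leaky integrate-and-fire neuron with soft reset under constant input equals a
# clipped ReLU of that input, which motivates mapping SNN parameters onto ReLU DNNs.
import numpy as np

def if_rate(i_const, T=1000, v_th=1.0):
    """Mean firing rate of an integrate-and-fire neuron with soft (subtractive) reset."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += i_const
        if v >= v_th:
            spikes += 1
            v -= v_th            # soft reset keeps the residual charge
    return spikes / T

for i in np.linspace(-0.5, 1.5, 9):
    relu_clip = min(max(i, 0.0), 1.0)
    print(f"input {i:+.2f}  simulated rate {if_rate(i):.3f}  clipped ReLU {relu_clip:.3f}")
```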
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Deep Learning in Spiking Phasor Neural Networks [0.6767885381740952]
Spiking Neural Networks (SNNs) have attracted the attention of the deep learning community for use in low-latency, low-power neuromorphic hardware.
In this paper, we introduce Spiking Phasor Neural Networks (SPNNs).
SPNNs are based on complex-valued Deep Neural Networks (DNNs), representing phases by spike times.
arXiv Detail & Related papers (2022-04-01T15:06:15Z)
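As a toy illustration of the phase-from-spike-time idea, a spike at time t within a period T can stand in for the phase 2*pi*t/T of a complex-valued activation. This shows only the basic encoding, not the authors' SPNN architecture.

```python
# Toy illustration of the phase-from-spike-time idea behind spiking phasor networks
# (not the authors' SPNN model): a spike at time t within a period T encodes the
# phase 2*pi*t/T, so a spike train can stand in for a complex-valued activation.
import numpy as np

T = 10.0                                      # oscillation period
spike_times = np.array([0.0, 2.5, 5.0, 7.5])

phases = 2.0 * np.pi * (spike_times % T) / T  # spike time -> phase angle
phasors = np.exp(1j * phases)                 # unit-magnitude complex "activations"

for t, phi, z in zip(spike_times, phases, phasors):
    print(f"spike at t={t:4.1f} -> phase {phi:+.3f} rad, "
          f"phasor ({z.real:+.2f}, {z.imag:+.2f}j)")
```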
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
arXiv Detail & Related papers (2022-03-18T07:05:27Z)
- Motif-topology and Reward-learning improved Spiking Neural Network for Efficient Multi-sensory Integration [5.161352821775507]
We propose a Motif-topology and Reward-learning improved spiking neural network (MR-SNN) for efficient multi-sensory integration.
The experimental results showed higher accuracy and stronger robustness of the proposed MR-SNN than other conventional SNNs without using Motifs.
The proposed reward learning paradigm is biologically plausible and can better explain the cognitive McGurk effect caused by incongruent visual and auditory sensory signals.
arXiv Detail & Related papers (2022-02-11T02:07:44Z)
- Pruning of Deep Spiking Neural Networks through Gradient Rewiring [41.64961999525415]
Spiking Neural Networks (SNNs) have attracted great attention due to their biological plausibility and high energy efficiency on neuromorphic chips.
Most existing methods directly apply pruning approaches in artificial neural networks (ANNs) to SNNs, which ignore the difference between ANNs and SNNs.
We propose gradient rewiring (Grad R), a joint learning algorithm of connectivity and weight for SNNs, which enables us to seamlessly optimize network structure without retraining.
arXiv Detail & Related papers (2021-05-11T10:05:53Z)
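As a rough illustration of learning connectivity and weights jointly, the sketch below reparameterizes each synapse as sign * relu(theta), so gradient descent can prune a connection (theta <= 0) or regrow it later. The class name `RewirableLinear` and the exact parameterization are assumptions for illustration, not the authors' Grad R algorithm.

```python
# Generic sketch (not the authors' Grad R algorithm): parameterize each synapse as
# w = sign * relu(theta), so gradient descent on theta can drive a connection to
# zero (pruned) or regrow it later, learning connectivity and weights jointly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewirableLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        signs = torch.randint(0, 2, (out_features, in_features)).float() * 2 - 1
        self.sign = nn.Parameter(signs, requires_grad=False)       # fixed synapse signs
        self.theta = nn.Parameter(torch.randn(out_features, in_features) * 0.1)

    def forward(self, x):
        weight = self.sign * F.relu(self.theta)   # theta <= 0 means the synapse is pruned
        return F.linear(x, weight)

layer = RewirableLinear(20, 5)
x = torch.randn(8, 20)
loss = layer(x).pow(2).mean()
loss.backward()                                   # gradients flow into theta
sparsity = (layer.theta <= 0).float().mean().item()
print(f"fraction of pruned synapses: {sparsity:.2f}")
```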
- Accurate and efficient time-domain classification with adaptive spiking recurrent neural networks [1.8515971640245998]
Spiking neural networks (SNNs) have been investigated as more biologically plausible and potentially more powerful models of neural computation.
We show how a novel surrogate gradient combined with recurrent networks of tunable and adaptive spiking neurons yields state-of-the-art performance for SNNs.
arXiv Detail & Related papers (2021-03-12T10:27:29Z)
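A minimal generic surrogate-gradient spike function in PyTorch is sketched below: the forward pass is a hard threshold and the backward pass substitutes a rectangular pseudo-derivative. This illustrates the general surrogate-gradient technique rather than the specific surrogate or adaptive neurons used in the paper above.

```python
# Generic surrogate-gradient spike function: hard threshold forward, boxcar
# pseudo-derivative backward, so errors can propagate through binary spikes.
import torch

class SpikeFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th >= 0).float()               # spike if membrane crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_th,) = ctx.saved_tensors
        surrogate = (v_minus_th.abs() < 0.5).float()   # boxcar pseudo-derivative
        return grad_output * surrogate

spike = SpikeFunction.apply

v = torch.randn(6, requires_grad=True)                 # membrane potential minus threshold
out = spike(v).sum()
out.backward()
print(v.grad)                                          # nonzero only near the threshold
```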
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
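As a rough sketch of the ANN-to-SNN conversion family these works build on, the snippet below copies the weights of a ReLU layer into integrate-and-fire neurons and scales the firing threshold by the maximum ANN activation (threshold balancing). It is a generic illustration, not the paper's progressive tandem learning procedure.

```python
# Generic ANN-to-SNN conversion sketch (not the paper's progressive tandem learning):
# reuse a ReLU layer's weights in integrate-and-fire neurons and set the firing
# threshold to the layer's maximum ANN activation ("threshold balancing").
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.5, (10, 4))          # pretend these are trained ANN weights (out x in)
x = rng.random((100, 4))                 # calibration inputs

ann_act = np.maximum(0.0, x @ W.T)       # ReLU activations of the source ANN layer
v_th = ann_act.max()                     # threshold balancing with the max activation

def snn_layer_rate(x_sample, W, v_th, T=200):
    """Run IF neurons with soft reset for T steps; return firing rates."""
    v = np.zeros(W.shape[0])
    spikes = np.zeros(W.shape[0])
    for _ in range(T):
        v += W @ x_sample                # constant-current input coding
        fired = v >= v_th
        spikes += fired
        v[fired] -= v_th                 # soft reset
    return spikes / T

rates = snn_layer_rate(x[0], W, v_th)
print("ANN activation / v_th:", np.round(ann_act[0] / v_th, 3))
print("SNN firing rate      :", np.round(rates, 3))
```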
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.