On the Intrinsic Structures of Spiking Neural Networks
- URL: http://arxiv.org/abs/2207.04876v3
- Date: Thu, 16 Nov 2023 13:17:44 GMT
- Title: On the Intrinsic Structures of Spiking Neural Networks
- Authors: Shao-Qun Zhang, Jia-Yi Chen, Jin-Hui Wu, Gao Zhang, Huan Xiong, Bin
Gu, Zhi-Hua Zhou
- Abstract summary: Recent years have witnessed a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
- Score: 66.57589494713515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed a surge of interest in SNNs owing to their
remarkable potential to handle time-dependent and event-driven data. The
performance of SNNs hinges not only on selecting an apposite architecture and
fine-tuning connection weights, similar to conventional ANNs, but also on the
meticulous configuration of intrinsic structures within spiking computations.
However, there has been a dearth of comprehensive studies examining the impact
of intrinsic structures. Consequently, developers often find it challenging to
apply a standardized configuration of SNNs across diverse datasets or tasks.
This work delves deep into the intrinsic structures of SNNs. Initially, we
unveil two pivotal components of intrinsic structures: the integration
operation and firing-reset mechanism, by elucidating their influence on the
expressivity of SNNs. Furthermore, we draw two key conclusions: (i) the membrane
time hyper-parameter is intimately linked to the eigenvalues of the integration
operation, dictating the functional topology of spiking dynamics; (ii) various
hyper-parameters of the firing-reset mechanism govern the overall firing
capacity of an SNN, mitigating the injection ratio or sampling density of input
data. These findings elucidate why the efficacy of SNNs hinges heavily on the
configuration of intrinsic structures and lead to a recommendation that
enhancing the adaptability of these structures contributes to improving the
overall performance and applicability of SNNs. Inspired by this recognition, we
propose two feasible approaches to enhance SNN learning. These involve
leveraging self-connection architectures and employing stochastic spiking
neurons to augment the adaptability of the integration operation and
firing-reset mechanism, respectively. We verify the effectiveness of the
proposed methods from perspectives of theory and practice.
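To make the two intrinsic structures concrete, the following NumPy sketch implements a discrete-time LIF update in which the leak factor exp(-dt/tau) is the eigenvalue of the one-step integration map; the optional self-connections and stochastic firing mirror the two proposed approaches. This is a minimal illustration under assumed conventions (sigmoid firing probability, hard reset), not the authors' implementation.

```python
import numpy as np

def lif_step(v, x, spikes, tau=2.0, dt=1.0, v_th=1.0, v_reset=0.0,
             w_self=None, stochastic=False, rng=None):
    """One discrete LIF update: integrate, then fire and reset."""
    leak = np.exp(-dt / tau)          # eigenvalue of the integration operator;
    v = leak * v + x                  # tau dictates how fast old input decays
    if w_self is not None:            # self-connection architecture: feeding
        v = v + w_self @ spikes       # back spikes shifts the eigenvalues
    if stochastic:                    # stochastic spiking neuron: firing
        rng = rng or np.random.default_rng()
        p = 1.0 / (1.0 + np.exp(-(v - v_th)))  # probability rises near threshold
        spikes = (rng.random(v.shape) < p).astype(v.dtype)
    else:                             # deterministic firing-reset mechanism
        spikes = (v >= v_th).astype(v.dtype)
    v = np.where(spikes > 0, v_reset, v)       # hard reset after a spike
    return v, spikes

# Example: three neurons driven by a constant current for five steps.
v, s = np.zeros(3), np.zeros(3)
for _ in range(5):
    v, s = lif_step(v, x=np.array([0.6, 0.4, 0.9]), spikes=s, stochastic=True)
```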
Related papers
- Q-SNNs: Quantized Spiking Neural Networks [12.719590949933105]
Spiking Neural Networks (SNNs) leverage sparse spikes to represent information and process them in an event-driven manner.
We introduce a lightweight and hardware-friendly Quantized SNN that applies quantization to both synaptic weights and membrane potentials.
We present a new Weight-Spike Dual Regulation (WS-DR) method inspired by information entropy theory.
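The WS-DR method itself is not detailed in this summary; as a generic stand-in, a minimal sketch of symmetric uniform quantization applied to both synaptic weights and membrane potentials (the quantizer and bit-widths are assumptions) might look like this:

```python
import numpy as np

def uniform_quantize(x, bits=2):
    """Round x onto a symmetric low-bit grid scaled to its max magnitude."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax + 1e-12
    return np.clip(np.round(x / scale), -qmax - 1, qmax) * scale

rng = np.random.default_rng(0)
w = uniform_quantize(rng.normal(size=(4, 8)), bits=2)      # low-bit weights
v = np.zeros(4)
for _ in range(3):                                         # low-bit potentials
    v = uniform_quantize(0.9 * v + w @ rng.random(8), bits=4)
```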
arXiv Detail & Related papers (2024-06-19T16:23:26Z)
- Understanding the Functional Roles of Modelling Components in Spiking Neural Networks [9.448298335007465]
Spiking neural networks (SNNs) are promising in achieving high computational efficiency with biological fidelity.
We investigate the functional roles of key modelling components, leakage, reset, and recurrence, in leaky integrate-and-fire (LIF) based SNNs.
Specifically, we find that the leakage plays a crucial role in balancing memory retention and robustness, the reset mechanism is essential for uninterrupted temporal processing and computational efficiency, and the recurrence enriches the capability to model complex dynamics at the cost of robustness degradation.
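To see these roles in code, here is a hypothetical LIF step with leakage, reset, and recurrence exposed as switches, so each component can be ablated independently; the decay constant, hard reset, and threshold are illustrative choices, not the paper's settings.

```python
import numpy as np

def lif_ablation_step(v, x, s_prev, w_rec, beta=0.9, v_th=1.0,
                      leak=True, reset=True, recurrence=True):
    """One LIF update with each modelling component as a toggle."""
    v = (beta * v if leak else v) + x   # leakage: old evidence decays
    if recurrence:                      # recurrence: last spikes feed back,
        v = v + w_rec @ s_prev          # enriching the dynamics
    s = (v >= v_th).astype(v.dtype)
    if reset:                           # reset: clear the potential after a
        v = np.where(s > 0, 0.0, v)     # spike so temporal processing restarts
    return v, s
```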
arXiv Detail & Related papers (2024-03-25T12:13:20Z)
- Inherent Redundancy in Spiking Neural Networks [24.114844269113746]
Spiking Neural Networks (SNNs) are a promising energy-efficient alternative to conventional artificial neural networks.
In this work, we focus on three key questions regarding inherent redundancy in SNNs.
We propose an Advance Spatial Attention (ASA) module to harness SNNs' redundancy.
arXiv Detail & Related papers (2023-08-16T08:58:25Z)
- Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting [52.47493322446537]
We develop an adaptive, interpretable, and scalable forecasting framework, which seeks to individually model each component of the spatial-temporal patterns.
SCNN works with a pre-defined generative process of multivariate time series (MTS), which arithmetically characterizes the latent structure of the spatial-temporal patterns.
Extensive experiments are conducted to demonstrate that SCNN can achieve superior performance over state-of-the-art models on three real-world datasets.
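SCNN's pre-defined generative process is not reproduced in this summary; purely to illustrate component-wise modelling of a series, a toy additive decomposition into trend, seasonal, and residual parts (window and season lengths are arbitrary) could read:

```python
import numpy as np

def decompose(series, season=24, window=72):
    """Toy additive split: series = trend + seasonal + residual."""
    trend = np.convolve(series, np.ones(window) / window, mode="same")
    detrended = series - trend
    phase_means = np.array([detrended[i::season].mean()   # average per phase
                            for i in range(season)])
    seasonal = np.tile(phase_means, len(series) // season + 1)[: len(series)]
    residual = detrended - seasonal                       # short-term component
    return trend, seasonal, residual
```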
arXiv Detail & Related papers (2023-05-22T13:39:44Z)
- Biologically inspired structure learning with reverse knowledge distillation for spiking neural networks [19.33517163587031]
Spiking neural networks (SNNs) have superb characteristics in sensory information recognition tasks due to their biological plausibility.
The performance of some current spiking-based models is limited by their structures: fully connected or overly deep architectures introduce substantial redundancy.
This paper proposes an evolution-based structure construction method for building more reasonable SNNs.
arXiv Detail & Related papers (2023-04-19T08:41:17Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
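The interval bound propagation baseline mentioned above reduces, for one layer, to a few lines: an affine map sends an input box to exact output bounds by splitting the weights by sign, and a monotone activation maps bounds to bounds. The sketch below shows this generic step, not the paper's reachability analysis for implicit layers.

```python
import numpy as np

def affine_interval(lo, hi, w, b):
    """Exact bounds of w @ x + b over the box lo <= x <= hi."""
    w_pos, w_neg = np.maximum(w, 0.0), np.minimum(w, 0.0)
    return w_pos @ lo + w_neg @ hi + b, w_pos @ hi + w_neg @ lo + b

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval bounds to interval bounds."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Example: bounds after one affine layer followed by ReLU.
lo, hi = affine_interval(np.array([-0.1, 0.2]), np.array([0.1, 0.4]),
                         w=np.array([[1.0, -2.0]]), b=np.array([0.5]))
lo, hi = relu_interval(lo, hi)   # lo == [0.0], hi == [0.2]
```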
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- BackEISNN: A Deep Spiking Neural Network with Adaptive Self-Feedback and Balanced Excitatory-Inhibitory Neurons [8.956708722109415]
Spiking neural networks (SNNs) transmit information through discrete spikes, which makes them well suited to processing spatial-temporal information.
We propose a deep spiking neural network with adaptive self-feedback and balanced excitatory and inhibitory neurons (BackEISNN).
For the MNIST, FashionMNIST, and N-MNIST datasets, our model has achieved state-of-the-art performance.
arXiv Detail & Related papers (2021-05-27T08:38:31Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of improving the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
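The Mori-Zwanzig machinery is beyond this summary, but the quantity the framework evolves, node infection probabilities on a network, can be simulated with a naive mean-field closure; the transmission model below is an assumed illustration, not the paper's exact evolution.

```python
import numpy as np

def mean_field_diffusion(adj, p0, beta=0.3, steps=10):
    """Evolve node infection probabilities under a mean-field closure."""
    p, history = p0.copy(), [p0.copy()]
    for _ in range(steps):
        # probability that no neighbour transmits to each node this step
        no_transmit = np.prod(1.0 - beta * adj * p[None, :], axis=1)
        p = p + (1.0 - p) * (1.0 - no_transmit)   # newly infected mass
        history.append(p.copy())
    return np.array(history)

# Example: a 3-node chain seeded at node 0.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(mean_field_diffusion(adj, np.array([1.0, 0.0, 0.0]))[-1])
```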
arXiv Detail & Related papers (2020-06-16T18:45:20Z)