Benchmarking Spiking Neural Network Learning Methods with Varying
Locality
- URL: http://arxiv.org/abs/2402.01782v1
- Date: Thu, 1 Feb 2024 19:57:08 GMT
- Title: Benchmarking Spiking Neural Network Learning Methods with Varying
Locality
- Authors: Jiaqi Lin, Sen Lu, Malyaban Bal, Abhronil Sengupta
- Abstract summary: Spiking Neural Networks (SNNs) provide more realistic neuronal dynamics.
Information is processed as spikes within SNNs in an event-based mechanism.
Training SNNs is challenging due to the non-differentiable nature of the spiking mechanism.
- Score: 2.323924801314763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs), providing more realistic neuronal dynamics,
have been shown to achieve performance comparable to Artificial Neural Networks
(ANNs) in several machine learning tasks. Information is processed as spikes
within SNNs in an event-based mechanism that significantly reduces energy
consumption. However, training SNNs is challenging due to the
non-differentiable nature of the spiking mechanism. Traditional approaches,
such as Backpropagation Through Time (BPTT), have shown effectiveness but come
with additional computational and memory costs and are biologically
implausible. In contrast, recent works propose alternative learning methods
with varying degrees of locality, demonstrating success in classification
tasks. In this work, we show that these methods share similarities during the
training process, while they present a trade-off between biological
plausibility and performance. Further, this research examines the implicitly
recurrent nature of SNNs and investigates the influence of adding explicit
recurrence. We experimentally demonstrate that the addition of explicit
recurrent weights enhances the robustness of SNNs. We also investigate the
performance of local learning methods under gradient-based and non-gradient-based
adversarial attacks.
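The abstract touches on two mechanisms that a short sketch can make concrete: the spike nonlinearity is a step function whose true derivative is zero almost everywhere (hence surrogate gradients in gradient-based training), and an SNN is implicitly recurrent through its decaying membrane state even before explicit recurrent weights are added. The following is a minimal illustrative sketch, not the paper's code; the fast-sigmoid surrogate, the leaky integrate-and-fire dynamics, and all parameter values are assumptions chosen for clarity.

```python
def heaviside(v, threshold=1.0):
    """Forward spike: emit 1.0 when the membrane potential crosses the
    threshold. The true derivative is zero almost everywhere, which is
    what makes direct gradient-based training of SNNs difficult."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass stand-in: a smooth fast-sigmoid surrogate for the
    step's derivative, peaked at the threshold and vanishing far from it.
    (Illustrative choice; many surrogate shapes are used in practice.)"""
    return beta / (1.0 + beta * abs(v - threshold)) ** 2

def lif_step(v, x, s_prev, w_in=0.8, w_rec=0.3, decay=0.9):
    """One leaky integrate-and-fire timestep. The decayed membrane term
    (decay * v) is the *implicit* recurrence; w_rec * s_prev is an
    *explicit* recurrent connection feeding back the previous spike."""
    v = decay * v + w_in * x + w_rec * s_prev
    s = heaviside(v)
    if s:
        v -= 1.0  # reset by subtracting the threshold after a spike
    return v, s

# Drive the neuron with a constant input and record its spike train.
v, s = 0.0, 0.0
spikes = []
for _ in range(10):
    v, s = lif_step(v, 0.5, s)
    spikes.append(s)
print(spikes)
```

Note how the surrogate is largest exactly where a small change in membrane potential could flip a spike on or off, which is where a useful gradient signal lives.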
Related papers
- Training Spiking Neural Networks via Augmented Direct Feedback Alignment [3.798885293742468]
Spiking neural networks (SNNs) are promising solutions for implementing neural networks in neuromorphic devices.
However, the non-differentiable nature of SNN neurons makes them challenging to train.
In this paper, we propose using augmented direct feedback alignment (aDFA), a gradient-free approach based on random projection, to train SNNs.
arXiv Detail & Related papers (2024-09-12T06:22:44Z)
- Exploiting Heterogeneity in Timescales for Sparse Recurrent Spiking Neural Networks for Energy-Efficient Edge Computing [16.60622265961373]
Spiking Neural Networks (SNNs) represent the forefront of neuromorphic computing.
This paper brings together three studies aimed at improving SNN performance.
arXiv Detail & Related papers (2024-07-08T23:33:12Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z)
- Ensemble plasticity and network adaptability in SNNs [0.726437825413781]
Artificial Spiking Neural Networks (ASNNs) promise greater information processing efficiency because of discrete event-based (i.e., spike) computation.
We introduce a novel ensemble learning method based on entropy and network activation, operated exclusively using spiking activity.
It was discovered that pruning lower spike-rate neuron clusters resulted in increased generalization or a predictable decline in performance.
arXiv Detail & Related papers (2022-03-11T01:14:51Z)
- Including STDP to eligibility propagation in multi-layer recurrent spiking neural networks [0.0]
Spiking neural networks (SNNs) in neuromorphic systems are more energy efficient compared to deep learning-based methods.
There is no clear competitive learning algorithm for training such SNNs.
E-prop offers an efficient and biologically plausible way to train competitive recurrent SNNs in low-power neuromorphic hardware.
arXiv Detail & Related papers (2022-01-05T05:51:18Z)
- Accurate and efficient time-domain classification with adaptive spiking recurrent neural networks [1.8515971640245998]
Spiking neural networks (SNNs) have been investigated as more biologically plausible and potentially more powerful models of neural computation.
We show how a novel surrogate gradient combined with recurrent networks of tunable and adaptive spiking neurons yields state-of-the-art performance for SNNs.
arXiv Detail & Related papers (2021-03-12T10:27:29Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.