Including STDP to eligibility propagation in multi-layer recurrent
spiking neural networks
- URL: http://arxiv.org/abs/2201.07602v1
- Date: Wed, 5 Jan 2022 05:51:18 GMT
- Title: Including STDP to eligibility propagation in multi-layer recurrent
spiking neural networks
- Authors: Werner van der Veen
- Abstract summary: Spiking neural networks (SNNs) in neuromorphic systems are more energy efficient compared to deep learning-based methods.
There is no clear competitive learning algorithm for training such SNNs.
E-prop offers an efficient and biologically plausible way to train competitive recurrent SNNs in low-power neuromorphic hardware.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking neural networks (SNNs) in neuromorphic systems are more energy
efficient compared to deep learning-based methods, but there is no clear
competitive learning algorithm for training such SNNs. Eligibility propagation
(e-prop) offers an efficient and biologically plausible way to train
competitive recurrent SNNs in low-power neuromorphic hardware. In this report,
previous performance of e-prop on a speech classification task is reproduced,
and the effects of including STDP-like behavior are analyzed. Including STDP to
the ALIF neuron model improves the classification performance, but this is not
the case for the Izhikevich e-prop neuron. Finally, it was found that e-prop
implemented in a single-layer recurrent SNN consistently outperforms a
multi-layer variant.
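The e-prop mechanism the abstract refers to can be illustrated with a minimal sketch of an adaptive leaky integrate-and-fire (ALIF) neuron layer accumulating eligibility traces. This is an illustrative assumption-laden simplification, not the paper's implementation: all sizes, time constants, and the pseudo-derivative shape are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and constants (illustrative, not from the paper).
n_in, n_rec, T = 3, 4, 50
dt, tau_m, tau_a = 1.0, 20.0, 200.0
alpha = np.exp(-dt / tau_m)       # membrane decay per step
rho = np.exp(-dt / tau_a)         # threshold-adaptation decay per step
v_th, beta = 1.0, 0.07            # base threshold, adaptation strength

w_in = rng.normal(0.0, 0.5, (n_rec, n_in))

v = np.zeros(n_rec)               # membrane potentials
a = np.zeros(n_rec)               # adaptive threshold components
e_trace = np.zeros((n_rec, n_in)) # eligibility traces per synapse

for t in range(T):
    x = (rng.random(n_in) < 0.1).astype(float)  # Poisson-like input spikes
    v = alpha * v + w_in @ x
    z = (v > v_th + beta * a).astype(float)     # spike above adaptive threshold
    v -= z * v_th                               # soft reset on spike
    a = rho * a + z                             # threshold adaptation grows with spikes

    # Pseudo-derivative (surrogate gradient) of the non-differentiable spike.
    psi = 0.3 * np.maximum(0.0, 1.0 - np.abs((v - v_th - beta * a) / v_th))

    # Eligibility trace: filtered presynaptic activity gated by the
    # postsynaptic pseudo-derivative, updated locally in time.
    e_trace = alpha * e_trace + np.outer(psi, x)

# In full e-prop, a broadcast learning signal L would modulate the traces,
# e.g. dw = -lr * L[:, None] * e_trace, keeping the update local and online.
```

The key point the sketch shows is locality: each synapse's trace depends only on its own pre- and postsynaptic activity, which is what makes e-prop (and its STDP-augmented variants) attractive for low-power neuromorphic hardware.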
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
arXiv Detail & Related papers (2024-08-08T16:48:33Z) - Exploiting Heterogeneity in Timescales for Sparse Recurrent Spiking Neural Networks for Energy-Efficient Edge Computing [16.60622265961373]
Spiking Neural Networks (SNNs) represent the forefront of neuromorphic computing.
This paper weaves together three groundbreaking studies that revolutionize SNN performance.
arXiv Detail & Related papers (2024-07-08T23:33:12Z) - Benchmarking Spiking Neural Network Learning Methods with Varying
Locality [2.323924801314763]
Spiking Neural Networks (SNNs) provide more realistic neuronal dynamics.
Information is processed as spikes within SNNs in an event-based mechanism.
We show that training SNNs is challenging due to the non-differentiable nature of the spiking mechanism.
arXiv Detail & Related papers (2024-02-01T19:57:08Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking
Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural
Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z) - Heterogeneous Recurrent Spiking Neural Network for Spatio-Temporal
Classification [13.521272923545409]
Spiking Neural Networks are often touted as brain-inspired learning models for the third wave of Artificial Intelligence.
This paper presents a heterogeneous spiking neural network (HRSNN) with unsupervised learning for video recognition tasks.
We show that HRSNN can achieve performance similar to state-of-the-art backpropagation-trained supervised SNNs, but with less computation.
arXiv Detail & Related papers (2022-09-22T16:34:01Z) - A comparative study of back propagation and its alternatives on
multilayer perceptrons [0.0]
The de facto algorithm for training feedforward neural networks is backpropagation (BP).
The use of almost-everywhere differentiable activation functions made it efficient and effective to propagate the gradient backwards through layers of deep neural networks.
In this paper, we analyze the stability and similarity of predictions and neurons in convolutional neural networks (CNNs) and propose a new variation of one of the algorithms.
arXiv Detail & Related papers (2022-05-31T18:44:13Z) - Accurate and efficient time-domain classification with adaptive spiking
recurrent neural networks [1.8515971640245998]
Spiking neural networks (SNNs) have been investigated as more biologically plausible and potentially more powerful models of neural computation.
We show how a novel surrogate gradient combined with recurrent networks of tunable and adaptive spiking neurons yields state-of-the-art for SNNs.
arXiv Detail & Related papers (2021-03-12T10:27:29Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Recurrent Neural Network Learning of Performance and Intrinsic
Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.