Temporal Surrogate Back-propagation for Spiking Neural Networks
- URL: http://arxiv.org/abs/2011.09964v1
- Date: Wed, 18 Nov 2020 08:22:47 GMT
- Title: Temporal Surrogate Back-propagation for Spiking Neural Networks
- Authors: Yukun Yang
- Abstract summary: Spiking neural networks (SNNs) are usually more energy-efficient than artificial neural networks (ANNs).
Back-propagation (BP) has shown its strong power in training ANNs in recent years.
However, since spike behavior is non-differentiable, BP cannot be applied to SNNs directly.
- Score: 2.291640606078406
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Spiking neural networks (SNNs) are usually more energy-efficient than
artificial neural networks (ANNs), and the way they work closely resembles the
biological brain. Back-propagation (BP) has shown its strong power in training
ANNs in recent years. However, since spike behavior is non-differentiable, BP
cannot be applied to SNNs directly. Although prior works demonstrated several
ways to approximate the BP gradient in both the spatial and temporal
directions, either through surrogate gradients or randomness, they omitted the
temporal dependency introduced by the reset mechanism between steps. In this
article, we aim at theoretical completeness and investigate the effect of the
missing term thoroughly. With the temporal dependency of the reset mechanism
added, the new algorithm is more robust to learning-rate adjustments on a toy
dataset, but it shows little improvement on larger learning tasks such as
CIFAR-10. Empirically, the benefit of the missing term is not worth the
additional computational overhead; in many cases, the missing term can be
ignored.
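The core question above is where, in back-propagation through time, the reset mechanism should appear. Below is a minimal sketch of that gradient path, assuming a soft-reset leaky integrate-and-fire neuron and a rectangular surrogate gradient; the constants (tau=2.0, v_th=1.0, the surrogate window) and all names are illustrative assumptions, not the author's code. With detach_reset=True the reset is cut out of the autograd graph, which is the common practice the paper identifies as omitting a term; with detach_reset=False the extra temporal dependency v[t] -> s[t] -> v[t+1] is kept.

```python
import torch

class RectSurrogate(torch.autograd.Function):
    """Heaviside spike function with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v_minus_th,) = ctx.saved_tensors
        # Surrogate: treat d(spike)/dv as 1 inside a unit window around threshold.
        return grad_out * (v_minus_th.abs() < 0.5).float()

spike_fn = RectSurrogate.apply

def lif_step(v, x, tau=2.0, v_th=1.0, detach_reset=True):
    """One leaky integrate-and-fire step with a soft reset (illustrative)."""
    v = v + (x - v) / tau            # leaky integration of the input current
    s = spike_fn(v - v_th)           # spike generation via the surrogate gradient
    reset = s.detach() if detach_reset else s  # detach = drop the "missing term"
    v = v - v_th * reset             # soft reset after spiking
    return v, s
```

Unrolled over T steps, the two settings differ only in whether the loss gradient reaching v at step t also flows backward through the reset term, which is exactly the temporal dependency the abstract refers to.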
Related papers
- Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning [91.29876772547348]
Spiking neural networks (SNNs) are investigated as biologically inspired models of neural computation.
This paper reveals that SNNs, when amalgamated with synaptic delay and temporal coding, are proficient in executing (knowledge) graph reasoning.
arXiv Detail & Related papers (2024-05-27T05:53:30Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Exact Gradient Computation for Spiking Neural Networks Through Forward Propagation [39.33537954568678]
Spiking neural networks (SNNs) have emerged as alternatives to traditional neural networks.
We propose a novel training algorithm, called forward propagation (FP), that computes exact gradients for SNNs.
arXiv Detail & Related papers (2022-10-18T20:28:21Z)
- Spiking Neural Networks for event-based action recognition: A new task to understand their advantage [1.4348901037145936]
Spiking Neural Networks (SNNs) are characterised by their unique temporal dynamics.
We show how spiking neurons can enable temporal feature extraction in feed-forward neural networks.
We also show how recurrent SNNs can achieve results comparable to LSTMs with fewer parameters.
arXiv Detail & Related papers (2022-09-29T16:22:46Z)
- Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting [29.685909045226847]
Brain-inspired spiking neural networks (SNNs) have attracted widespread research interest because of their event-driven and energy-efficient characteristics.
The current direct training approach with surrogate gradients (SG) yields SNNs with poor generalizability.
We introduce the temporal efficient training (TET) approach to compensate for the loss of momentum in gradient descent with SG; a sketch of the TET-style loss appears after this list.
arXiv Detail & Related papers (2022-02-24T08:02:37Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Backpropagation with Biologically Plausible Spatio-Temporal Adjustment For Training Deep Spiking Neural Networks [5.484391472233163]
The success of deep learning is inseparable from backpropagation.
First, we propose a biologically plausible spatial adjustment, which rethinks the relationship between membrane potential and spikes.
Second, we propose a biologically plausible temporal adjustment that makes the error propagate across spikes in the temporal dimension.
arXiv Detail & Related papers (2021-10-17T15:55:51Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks [14.992756670960008]
Spiking neural networks (SNNs) are well suited for computation and implementations on energy-efficient event-driven neuromorphic processors.
We present a novel Temporal Spike Sequence Learning Backpropagation (TSSL-BP) method for training deep SNNs.
arXiv Detail & Related papers (2020-02-24T05:49:37Z)
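The TET entry above re-weights the error signal across the unrolled time steps. As a hedged sketch of that idea (reconstructed from the TET paper's description, not code from this page; tensor shapes and names are assumptions), the contrast with the usual time-averaged readout loss looks like this, where logits_per_step has shape [T, batch, classes]:

```python
import torch
import torch.nn.functional as F

def mean_output_loss(logits_per_step: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Baseline: cross-entropy applied once to the time-averaged logits."""
    return F.cross_entropy(logits_per_step.mean(dim=0), target)

def tet_loss(logits_per_step: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """TET-style loss: average the per-step cross-entropies, so every time
    step receives a direct error signal instead of one diluted by the mean."""
    losses = [F.cross_entropy(logits_t, target) for logits_t in logits_per_step]
    return torch.stack(losses).mean()
```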