SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks
- URL: http://arxiv.org/abs/2302.00232v1
- Date: Wed, 1 Feb 2023 04:22:59 GMT
- Title: SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks
- Authors: Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Yisen Wang, Zhouchen Lin
- Abstract summary: Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends the recently proposed implicit differentiation on the equilibrium state (IDE) training method to supervised learning with purely spike-based computation.
- Score: 56.35403810762512
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Spiking neural networks (SNNs) with event-based computation are promising
brain-inspired models for energy-efficient applications on neuromorphic
hardware. However, most supervised SNN training methods, such as conversion
from artificial neural networks or direct training with surrogate gradients,
require complex computation rather than spike-based operations of spiking
neurons during training. In this paper, we study spike-based implicit
differentiation on the equilibrium state (SPIDE) that extends the recently
proposed training method, implicit differentiation on the equilibrium state
(IDE), for supervised learning with purely spike-based computation, which
demonstrates the potential for energy-efficient training of SNNs. Specifically,
we introduce ternary spiking neuron couples and prove that implicit
differentiation can be solved by spikes based on this design, so the whole
training procedure, including both forward and backward passes, is made as
event-driven spike computation, and weights are updated locally with two-stage
average firing rates. Then we propose to modify the reset membrane potential to
reduce the approximation error of spikes. With these key components, we can
train SNNs with flexible structures in a small number of time steps and with
firing sparsity during training, and the theoretical estimation of energy costs
demonstrates the potential for high efficiency. Meanwhile, experiments show
that even with these constraints, our trained models can still achieve
competitive results on MNIST, CIFAR-10, CIFAR-100, and CIFAR10-DVS. Our code is
available at https://github.com/pkuxmq/SPIDE-FSNN.
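To make the idea above concrete, the following minimal NumPy sketch runs integrate-and-fire neurons with feedback toward an approximate equilibrium firing rate and then forms weight updates from an implicit-differentiation solve at that equilibrium. It is only an illustration under simplifying assumptions: all sizes, constants, and the soft (subtractive) reset are made up for the example, and the backward linear system is solved with an ordinary fixed-point iteration, whereas SPIDE performs that solve with ternary spiking neuron couples and a modified reset potential (see the paper and the repository for the actual method).
```python
# Conceptual sketch of equilibrium-based training in the spirit of SPIDE/IDE.
# NOT the authors' implementation: SPIDE solves the backward linear system with
# ternary spiking neuron couples; here a plain fixed-point solve stands in.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 64, 10
T, v_th, lr = 64, 1.0, 0.05

F = rng.normal(0.0, 0.3, (n_hid, n_in))    # input weights
W = rng.normal(0.0, 0.05, (n_hid, n_hid))  # feedback (recurrent) weights
R = rng.normal(0.0, 0.3, (n_out, n_hid))   # readout weights

def forward_rates(x):
    """Run integrate-and-fire neurons with feedback for T steps and return the
    average firing rate, which approximates an equilibrium a* of the dynamics."""
    v = np.zeros(n_hid)            # membrane potentials
    s = np.zeros(n_hid)            # spikes from the previous step
    spike_sum = np.zeros(n_hid)
    for _ in range(T):
        v += F @ x + W @ s         # integrate input current + spike feedback
        s = (v >= v_th).astype(float)
        v -= s * v_th              # soft reset (stand-in for the modified reset)
        spike_sum += s
    return spike_sum / T

def train_step(x, y_onehot):
    a = forward_rates(x)                       # average firing rate (forward stage)
    err = R @ a - y_onehot                     # gradient of 1/2 * ||R a - y||^2
    # Implicit differentiation at the equilibrium: solve
    #   c = W^T diag(active) c + R^T err
    # by iteration (SPIDE performs this solve with spikes, keeping the backward
    # pass event-driven; here it is an ordinary numeric iteration).
    active = ((a > 0.0) & (a < 1.0)).astype(float)
    c = R.T @ err
    for _ in range(50):
        c = W.T @ (active * c) + R.T @ err
    # Local weight updates built from firing rates and the solved adjoint:
    return {
        "R": np.outer(err, a),
        "W": np.outer(active * c, a),
        "F": np.outer(active * c, x),
    }

x = rng.random(n_in)
y = np.eye(n_out)[3]
g = train_step(x, y)
R -= lr * g["R"]; W -= lr * g["W"]; F -= lr * g["F"]
```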
Related papers
- Speed Limits for Deep Learning [67.69149326107103]
Recent advances in thermodynamics allow bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
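For orientation, a minimal usage sketch of snnTorch's leaky integrate-and-fire neuron follows (standard public snnTorch API; it does not show the IPU-specific optimizations described in the paper, and the layer sizes are arbitrary).
```python
# Minimal snnTorch example: a leaky integrate-and-fire layer unrolled over time.
import torch
import torch.nn as nn
import snntorch as snn

num_steps, batch, n_in, n_hid = 25, 8, 100, 64
fc = nn.Linear(n_in, n_hid)
lif = snn.Leaky(beta=0.9)          # beta is the membrane decay factor

x = torch.rand(num_steps, batch, n_in)
mem = lif.init_leaky()             # initialize membrane potential
spikes = []
for t in range(num_steps):
    cur = fc(x[t])                 # input current at step t
    spk, mem = lif(cur, mem)       # spike output and updated membrane
    spikes.append(spk)
spikes = torch.stack(spikes)       # [num_steps, batch, n_hid] binary spike trains
```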
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
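As a schematic of what forward-in-time learning can look like, the sketch below combines an instantaneous error signal with a running presynaptic trace at every step, so no backward unrolling through time is needed. This is a generic trace-based update, not OTTT's actual derivation; all constants and the surrogate derivative are illustrative.
```python
# Schematic forward-in-time update with an online presynaptic trace.
import numpy as np

rng = np.random.default_rng(1)
T, n_in, n_out = 20, 30, 10
lam, v_th, lr = 0.5, 1.0, 0.01
W = rng.normal(0.0, 0.3, (n_out, n_in))         # single spiking layer, illustrative
y = np.eye(n_out)[2]                            # one-hot target

v = np.zeros(n_out)                             # membrane potentials
trace = np.zeros(n_in)                          # running presynaptic spike trace
for t in range(T):
    x = (rng.random(n_in) < 0.3).astype(float)  # random input spikes
    trace = lam * trace + x                     # trace updated online, no history kept
    v = lam * v + W @ x                         # leaky integration
    s = (v >= v_th).astype(float)               # output spikes
    surrogate = 1.0 / (1.0 + np.abs(v - v_th))  # crude surrogate derivative
    v -= s * v_th                               # soft reset
    err = s - y                                 # instantaneous output error
    W -= lr * np.outer(err * surrogate, trace)  # local, per-step weight update
```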
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
- Accelerating spiking neural network training [1.6114012813668934]
Spiking neural networks (SNNs) are a type of artificial neural network inspired by the use of action potentials in the brain.
We propose a new technique for directly training single-spike-per-neuron SNNs which eliminates all sequential computation and relies exclusively on vectorised operations.
Our proposed solution solves certain tasks with over a 95.68% reduction in spike counts relative to a conventionally trained SNN, which could significantly reduce energy requirements when deployed on neuromorphic computers.
arXiv Detail & Related papers (2022-05-30T17:48:14Z)
- Learning in Feedback-driven Recurrent Spiking Neural Networks using full-FORCE Training [4.124948554183487]
We propose a supervised training procedure for RSNNs in which a second network is introduced only during training.
The proposed training procedure consists of generating targets for both recurrent and readout layers.
We demonstrate the improved performance and noise robustness of the proposed full-FORCE training procedure by modeling 8 dynamical systems.
arXiv Detail & Related papers (2022-05-26T19:01:19Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Training SNNs efficiently is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
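The identity behind such equilibrium-based training is the standard implicit-function-theorem form shown below (generic notation, not necessarily the paper's exact formulation): the parameter gradient is obtained from the fixed-point condition itself, with no unrolling or reversal of the forward computation.
```latex
% At the equilibrium a* of the feedback dynamics, a* = f(a*, x; \theta).
% Differentiating this fixed-point condition gives the parameter gradient:
\frac{\partial \mathcal{L}}{\partial \theta}
  = \frac{\partial \mathcal{L}}{\partial a^{*}}
    \left( I - \frac{\partial f}{\partial a^{*}} \right)^{-1}
    \frac{\partial f}{\partial \theta}
```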
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation [10.972663738092063]
Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes).
We present a computationally efficient training technique for deep SNNs.
We achieve a top-1 accuracy of 65.19% on the ImageNet dataset with an SNN using 250 time steps, which is 10X faster than converted SNNs of similar accuracy.
arXiv Detail & Related papers (2020-05-04T19:30:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.