BioGrad: Biologically Plausible Gradient-Based Learning for Spiking
Neural Networks
- URL: http://arxiv.org/abs/2110.14092v1
- Date: Wed, 27 Oct 2021 00:07:25 GMT
- Title: BioGrad: Biologically Plausible Gradient-Based Learning for Spiking
Neural Networks
- Authors: Guangzhi Tang, Neelesh Kumar, Ioannis Polykretis, Konstantinos P.
Michmizos
- Abstract summary: Spiking neural networks (SNN) are delivering energy-efficient, massively parallel, and low-latency solutions to AI problems.
To harness these computational benefits, SNN need to be trained by learning algorithms that adhere to brain-inspired neuromorphic principles.
We propose a biologically plausible gradient-based learning algorithm for SNN that is functionally equivalent to backprop.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking neural networks (SNN) are delivering energy-efficient, massively
parallel, and low-latency solutions to AI problems, facilitated by the emerging
neuromorphic chips. To harness these computational benefits, SNN need to be
trained by learning algorithms that adhere to brain-inspired neuromorphic
principles, namely event-based, local, and online computations. Yet, the
state-of-the-art SNN training algorithms are based on backprop that does not
follow the above principles. Due to its limited biological plausibility, the
application of backprop to SNN requires non-local feedback pathways for
transmitting continuous-valued errors, and relies on gradients from future
timesteps. The introduction of biologically plausible modifications to backprop
has helped overcome several of its limitations, but limits the degree to which
backprop is approximated, which hinders its performance. We propose a
biologically plausible gradient-based learning algorithm for SNN that is
functionally equivalent to backprop, while adhering to all three neuromorphic
principles. We introduce multi-compartment spiking neurons with local
eligibility traces to compute the gradients required for learning, and a
periodic "sleep" phase to further improve the approximation to backprop during
which a local Hebbian rule aligns the feedback and feedforward weights. Our
method achieved the same level of performance as backprop with multi-layer
fully connected SNN on MNIST (98.13%) and the event-based N-MNIST (97.59%)
datasets. We deployed our learning algorithm on Intel's Loihi to train a
1-hidden-layer network for MNIST, and obtained 93.32% test accuracy while
consuming 400 times less energy per training sample than BioGrad on GPU. Our
work shows that optimal learning is feasible in neuromorphic computing, and
further pursuing its biological plausibility can better capture the benefits of
this emerging computing paradigm.
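The mechanism described in the abstract, local eligibility traces carrying the learning signal during a wake phase, and a periodic sleep phase in which a local Hebbian rule pulls the feedback weights toward the transpose of the feedforward weights, can be illustrated with a toy rate-based sketch. This is not the paper's implementation: the network sizes, learning rates, surrogate derivative, tanh stand-in for spiking activity, and decay constant are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and learning rates: illustrative choices, not the paper's.
n_in, n_hid, n_out = 4, 8, 2

W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # feedforward, input -> hidden
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))  # feedforward, hidden -> output
B  = rng.normal(0.0, 0.5, (n_hid, n_out))  # feedback weights, random at start

def surrogate(v):
    # Surrogate derivative standing in for the spike nonlinearity's gradient.
    return 1.0 / (1.0 + np.abs(v)) ** 2

def wake_step(x, target, lr=0.1):
    """One online update. The output error reaches the hidden layer only
    through the feedback weights B; the eligibility trace (pre-synaptic
    activity times surrogate derivative) keeps the weight update local."""
    global W1, W2
    v_hid = W1 @ x
    h = np.tanh(v_hid)              # rate-based stand-in for spiking activity
    err = target - W2 @ h
    elig = surrogate(v_hid)[:, None] * x[None, :]   # local eligibility trace
    W2 += lr * np.outer(err, h)
    W1 += lr * (B @ err)[:, None] * elig
    return err

def sleep_step(lr=0.02, decay=0.95):
    """Sleep phase: spontaneous activity plus a local Hebbian rule (pre * post)
    drives B toward the transpose of W2, improving the backprop approximation."""
    global B
    h = rng.normal(size=n_hid)      # spontaneous hidden activity
    y = W2 @ h                      # resulting output activity
    B *= decay                      # decay keeps B bounded
    B += lr * np.outer(h, y)        # on average this adds lr * W2.T

# Interleave wake learning with short sleep phases on a toy target.
x = np.array([1.0, 0.0, -1.0, 0.5])
t = np.array([0.5, -0.5])
for _ in range(200):
    wake_step(x, t)
    for _ in range(5):
        sleep_step()
```

The key design point is that the wake phase uses only locally available quantities plus the feedback-projected error, while the sleep phase needs no labels at all: averaged over random activity, the Hebbian update aligns B with the transpose of W2, which is what exact backprop would use.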
Related papers
- STOP: Spatiotemporal Orthogonal Propagation for Weight-Threshold-Leakage Synergistic Training of Deep Spiking Neural Networks [11.85044871205734]
Deep spiking neural network (SNN) models based on sparse binary activations lack efficient and high-accuracy deep learning algorithms.
Our algorithm enables synergistic learning of synaptic weights as well as firing thresholds and leakage factors in neurons to improve SNN accuracy.
Under a unified temporally-forward trace-based framework, we mitigate the huge memory requirement for storing neural states of all time-steps in the forward pass.
Our method is more plausible for edge intelligent scenarios where resources are limited but high-accuracy in-situ learning is desired.
arXiv Detail & Related papers (2024-11-17T14:15:54Z) - BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation [20.34272550256856]
Spiking neural networks (SNNs) mimic biological neural system to convey information via discrete spikes.
Our work achieves state-of-the-art performance for training SNNs on both static and neuromorphic datasets.
arXiv Detail & Related papers (2024-07-12T08:17:24Z) - High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
It is harder to train biologically-inspired spiking neural networks (SNNs) than artificial neural networks (ANNs).
We show that deep SNN models can be trained to achieve exactly the same performance as ANNs.
Our SNN accomplishes high-performance classification with less than 0.3 spikes per neuron, lending itself for an energy-efficient implementation.
arXiv Detail & Related papers (2023-06-14T21:01:35Z) - SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural
Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z) - PC-SNN: Supervised Learning with Local Hebbian Synaptic Plasticity based
on Predictive Coding in Spiking Neural Networks [1.6172800007896282]
We propose a novel learning algorithm inspired by predictive coding theory.
We show that it can perform supervised learning fully autonomously and as successfully as backprop.
This method achieves a favorable performance compared to the state-of-the-art multi-layer SNNs.
arXiv Detail & Related papers (2022-11-24T09:56:02Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Exact Gradient Computation for Spiking Neural Networks Through Forward
Propagation [39.33537954568678]
Spiking neural networks (SNN) have emerged as alternatives to traditional neural networks.
We propose a novel training algorithm, called forward propagation (FP), that computes exact gradients for SNN.
arXiv Detail & Related papers (2022-10-18T20:28:21Z) - Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.