Event-Based Backpropagation can compute Exact Gradients for Spiking
Neural Networks
- URL: http://arxiv.org/abs/2009.08378v3
- Date: Mon, 31 May 2021 18:00:07 GMT
- Title: Event-Based Backpropagation can compute Exact Gradients for Spiking
Neural Networks
- Authors: Timo C. Wunderlich, Christian Pehle
- Abstract summary: Spiking neural networks combine analog computation with event-based communication using discrete spikes.
For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function.
We use gradients computed via EventProp to train networks on the Yin-Yang and MNIST datasets using either a spike-time- or voltage-based loss function and report competitive performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks combine analog computation with event-based
communication using discrete spikes. While the impressive advances of deep
learning are enabled by training non-spiking artificial neural networks using
the backpropagation algorithm, applying this algorithm to spiking networks was
previously hindered by the existence of discrete spike events and
discontinuities. For the first time, this work derives the backpropagation
algorithm for a continuous-time spiking neural network and a general loss
function by applying the adjoint method together with the proper partial
derivative jumps, allowing for backpropagation through discrete spike events
without approximations. This algorithm, EventProp, backpropagates errors at
spike times in order to compute the exact gradient in an event-based,
temporally and spatially sparse fashion. We use gradients computed via
EventProp to train networks on the Yin-Yang and MNIST datasets using either a
spike-time- or voltage-based loss function and report competitive performance.
Our work supports the rigorous study of gradient-based learning algorithms in
spiking neural networks and provides insights toward their implementation in
novel brain-inspired hardware.
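
To make the mechanism concrete, the sketch below illustrates the fact the adjoint derivation rests on: at a threshold crossing, the spike time is an implicit function of the parameters, so its exact derivative is dt*/dw = -(∂V/∂w)/(∂V/∂t) evaluated at the crossing, with no surrogate gradient needed. This is an illustrative toy example, not the authors' EventProp implementation: the single leaky integrate-and-fire neuron with constant drive, the constants, and the function `spike_time` are assumptions made for the sketch.

```python
# Toy model (assumed, not the paper's code): one LIF neuron, dV/dt = -V/tau + w*x,
# with constant drive x. The spike time t* satisfies V(t*, w) = theta, so
#   dt*/dw = -(dV/dw) / (dV/dt)   at the crossing,
# i.e. the gradient through the discrete spike event is exact.

TAU   = 10e-3   # membrane time constant (s)
THETA = 1.0     # spike threshold
X     = 1.0     # constant presynaptic drive (hypothetical value)
DT    = 1e-6    # Euler step (s)

def spike_time(w):
    """Simulate the membrane potential and return the first threshold crossing,
    located by linear interpolation between Euler steps."""
    v, t = 0.0, 0.0
    while t < 1.0:
        v_new = v + DT * (-v / TAU + w * X)
        if v_new >= THETA:
            return t + DT * (THETA - v) / (v_new - v)
        v, t = v_new, t + DT
    raise RuntimeError("no spike within 1 s")

w0 = 150.0
t_star = spike_time(w0)

# Exact gradient from the implicit function theorem. For this neuron,
# V(t, w) = w*x*tau*(1 - exp(-t/tau)), hence at the crossing:
#   dV/dw = V/w = theta/w   and   dV/dt = -theta/tau + w*x.
grad_exact = -(THETA / w0) / (w0 * X - THETA / TAU)

# Finite-difference check on the simulated spike time.
eps = 0.1
grad_fd = (spike_time(w0 + eps) - spike_time(w0 - eps)) / (2 * eps)

print(f"spike time t*         : {t_star * 1e3:.3f} ms")
print(f"dt*/dw (implicit fn.) : {grad_exact:.6e} s")
print(f"dt*/dw (finite diff.) : {grad_fd:.6e} s")
```

EventProp generalizes this single-event picture to whole networks by integrating adjoint variables backwards between spikes and applying the corresponding jumps at each spike time, which is what makes the backward pass event-based and sparse.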
Related papers
- Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight-decay-regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Loss shaping enhances exact gradient learning with EventProp in Spiking Neural Networks [0.1350479308585481]
EventProp is an algorithm for gradient descent on exact gradients in spiking neural networks.
We implement EventProp in the GPU-enhanced Neuronal Networks (GeNN) framework.
Networks achieve state-of-the-art performance on Spiking Heidelberg Digits and good accuracy on Spiking Speech Commands.
arXiv Detail & Related papers (2022-12-02T15:20:58Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Proximal Mean Field Learning in Shallow Neural Networks [0.4972323953932129]
We propose a custom learning algorithm for shallow neural networks with a single, infinitely wide hidden layer.
We realize mean field learning as a computational algorithm, rather than as an analytical tool.
Our algorithm performs gradient descent on the free energy associated with the risk functional.
arXiv Detail & Related papers (2022-10-25T10:06:26Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Analytically Tractable Inference in Deep Neural Networks [0.0]
The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks.
We demonstrate that TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures.
arXiv Detail & Related papers (2021-03-09T14:51:34Z)
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
arXiv Detail & Related papers (2020-09-11T11:56:34Z)
- Supervised Learning in Temporally-Coded Spiking Neural Networks with Approximate Backpropagation [0.021506382989223777]
We propose a new supervised learning method for temporally-encoded multilayer spiking networks to perform classification.
The method employs a reinforcement signal that mimics backpropagation but is far less computationally intensive.
In simulated MNIST handwritten digit classification, two-layer networks trained with this rule matched the performance of a comparable backpropagation-based non-spiking network.
arXiv Detail & Related papers (2020-07-27T03:39:49Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Explicitly Trained Spiking Sparsity in Spiking Neural Networks with Backpropagation [7.952659059689134]
Spiking Neural Networks (SNNs) are being explored for their potential energy efficiency resulting from sparse, event-driven computations.
We propose an explicit inclusion of spike counts in the loss function, along with a traditional error loss, to optimize weight parameters for both accuracy and spiking sparsity.
arXiv Detail & Related papers (2020-03-02T23:39:18Z)
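
The composite objective described in this last entry lends itself to a short sketch. The snippet below is a hedged illustration rather than the authors' code: the tensor shapes, the helper name `sparsity_regularized_loss`, and the penalty weight are hypothetical, and it assumes the spike train was produced with a differentiable (surrogate-gradient) spike function so the count term can influence the weight updates.

```python
# Sketch of a spike-count-regularized loss: standard task loss plus a penalty on
# the total number of spikes, trading off accuracy against spiking sparsity.
import torch
import torch.nn.functional as F

def sparsity_regularized_loss(logits, targets, spike_train, spike_penalty=1e-4):
    """logits: (batch, classes); targets: (batch,);
    spike_train: (time, batch, neurons) of spikes recorded during the forward pass."""
    task_loss = F.cross_entropy(logits, targets)        # traditional error loss
    spike_count = spike_train.sum(dim=(0, 2)).mean()    # mean spikes per example
    return task_loss + spike_penalty * spike_count

# Toy usage with random tensors standing in for a spiking network's outputs.
logits = torch.randn(8, 10, requires_grad=True)
spikes = torch.rand(100, 8, 256, requires_grad=True)    # stand-in for recorded spikes
targets = torch.randint(0, 10, (8,))
loss = sparsity_regularized_loss(logits, targets, spikes)
loss.backward()
```

The penalty weight controls the accuracy/sparsity trade-off; larger values drive the network toward fewer spikes at some cost in classification error.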