Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks
- URL: http://arxiv.org/abs/2003.11837v2
- Date: Wed, 4 Nov 2020 04:07:23 GMT
- Title: Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks
- Authors: Malu Zhang, Jiadong Wang, Burin Amornpaisannon, Zhixuan Zhang, VPK
Miriyala, Ammar Belatreche, Hong Qu, Jibin Wu, Yansong Chua, Trevor E.
Carlson and Haizhou Li
- Abstract summary: Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future DeepSNNs and neuromorphic hardware systems.
- Score: 55.0627904986664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) use spatio-temporal spike patterns to
represent and transmit information, which is not only biologically realistic
but also suitable for ultra-low-power event-driven neuromorphic implementation.
Motivated by the success of deep learning, the study of Deep Spiking Neural
Networks (DeepSNNs) provides promising directions for artificial intelligence
applications. However, training of DeepSNNs is not straightforward because the
well-studied error back-propagation (BP) algorithm is not directly applicable.
In this paper, we first establish an understanding as to why error
back-propagation does not work well in DeepSNNs. To address this problem, we
propose a simple yet efficient Rectified Linear Postsynaptic Potential function
(ReL-PSP) for spiking neurons and propose a Spike-Timing-Dependent
Back-Propagation (STDBP) learning algorithm for DeepSNNs. In the STDBP algorithm,
the timing of individual spikes is used to convey information (temporal
coding), and learning (back-propagation) is performed based on spike timing in
an event-driven manner. Our experimental results show that the proposed
learning algorithm achieves state-of-the-art classification accuracy among
single-spike-time-based learning algorithms for DeepSNNs. Furthermore, by utilizing the
trained model parameters obtained from the proposed STDBP learning algorithm,
we demonstrate the ultra-low-power inference operations on a recently proposed
neuromorphic inference accelerator. Experimental results show that the
neuromorphic hardware consumes a total power of 0.751 mW and achieves a low
latency of 47.71 ms to classify an image from the MNIST dataset.
Overall, this work investigates the contribution of spike timing dynamics to
information encoding, synaptic plasticity and decision making, providing a new
perspective on the design of future DeepSNNs and neuromorphic hardware systems.
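As a rough illustration of the mechanism the abstract describes (a sketch, not the authors' implementation), a neuron with a Rectified Linear Postsynaptic Potential kernel K(t - t_j) = max(0, t - t_j) has a membrane potential that is piecewise linear in time, so its spike time can be found in closed form over the causal set of inputs. The threshold name `theta` and the exact kernel scaling are assumptions here.

```python
import numpy as np

def relpsp_spike_time(in_times, weights, theta=1.0):
    """Earliest output spike time of a ReL-PSP neuron (illustrative sketch).

    With the rectified linear PSP kernel K(t - t_j) = max(0, t - t_j), the
    membrane potential V(t) = sum_j w_j * max(0, t - t_j) is piecewise
    linear. The neuron spikes when V(t) first reaches theta:
        t_out = (theta + sum_{j in C} w_j * t_j) / sum_{j in C} w_j,
    where C is the causal set of inputs arriving before t_out.
    """
    order = np.argsort(in_times)  # try causal sets in order of arrival
    t_sorted = np.asarray(in_times, dtype=float)[order]
    w_sorted = np.asarray(weights, dtype=float)[order]
    for k in range(1, len(t_sorted) + 1):
        w_sum = w_sorted[:k].sum()
        if w_sum <= 0:  # potential not yet rising with this causal set
            continue
        t_out = (theta + (w_sorted[:k] * t_sorted[:k]).sum()) / w_sum
        # the candidate is valid only if it falls after the k-th input
        # and before the (k+1)-th input arrives
        if t_out >= t_sorted[k - 1] and (k == len(t_sorted) or t_out < t_sorted[k]):
            return t_out
    return np.inf  # the neuron never reaches threshold
```

Because t_out depends on the input spike times in closed form, error gradients with respect to spike timing are well defined, which is what makes the event-driven backpropagation described above tractable.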
Related papers
- Advancing Spiking Neural Networks for Sequential Modeling with Central Pattern Generators [47.371024581669516]
Spiking neural networks (SNNs) represent a promising approach to developing artificial neural networks.
Applying SNNs to sequential tasks, such as text classification and time-series forecasting, has been hindered by the challenge of creating an effective and hardware-friendly spike-form positional encoding strategy.
We propose a novel positional encoding (PE) technique for SNNs, termed CPG-PE. We demonstrate that the commonly used sinusoidal PE is mathematically a specific solution to the membrane potential dynamics of a particular CPG.
arXiv Detail & Related papers (2024-05-23T09:39:12Z)
- Stochastic Spiking Neural Networks with First-to-Spike Coding [7.955633422160267]
Spiking Neural Networks (SNNs) are known for their bio-plausibility and energy efficiency.
In this work, we explore the merger of novel computing and information encoding schemes in SNN architectures.
We investigate the tradeoffs of our proposal in terms of accuracy, inference latency, spiking sparsity, and energy consumption across datasets.
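First-to-spike coding, referenced in the entry above, decodes a prediction from whichever output neuron fires earliest, so inference can stop at the first output spike. A minimal sketch of the decision rule only (not that paper's stochastic neuron model):

```python
import numpy as np

def first_to_spike_decode(spike_times):
    """Decode a class label under first-to-spike coding.

    spike_times: 1-D sequence of output-neuron spike times, with
    np.inf for neurons that never fired. The predicted class is the
    index of the earliest spike; earlier spikes mean lower latency.
    """
    spike_times = np.asarray(spike_times, dtype=float)
    if np.all(np.isinf(spike_times)):
        raise ValueError("no output neuron spiked")
    return int(np.argmin(spike_times))
```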
arXiv Detail & Related papers (2024-04-26T22:52:23Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Backpropagation with Biologically Plausible Spatio-Temporal Adjustment For Training Deep Spiking Neural Networks [5.484391472233163]
The success of deep learning is inseparable from backpropagation.
First, we propose a biologically plausible spatial adjustment that rethinks the relationship between membrane potential and spikes.
Second, we propose a biologically plausible temporal adjustment that lets the error propagate across spikes in the temporal dimension.
arXiv Detail & Related papers (2021-10-17T15:55:51Z)
- Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks [3.7384509727711923]
A major challenge for neuromorphic computing is that learning algorithms for traditional artificial neural networks (ANNs) do not transfer directly to spiking neural networks (SNNs).
In this article, we focus on the self-supervised learning problem of optical flow estimation from event-based camera inputs.
We show that the performance of the proposed ANNs and SNNs are on par with that of the current state-of-the-art ANNs trained in a self-supervised manner.
arXiv Detail & Related papers (2021-06-03T14:03:41Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks [14.992756670960008]
Spiking neural networks (SNNs) are well suited for computation and implementations on energy-efficient event-driven neuromorphic processors.
We present a novel Temporal Spike Sequence Learning Backpropagation (TSSL-BP) method for training deep SNNs.
arXiv Detail & Related papers (2020-02-24T05:49:37Z)
- Exploiting Neuron and Synapse Filter Dynamics in Spatial Temporal Learning of Deep Spiking Neural Network [7.503685643036081]
A bio-plausible SNN model with spatial-temporal property is a complex dynamic system.
We formulate SNN as a network of infinite impulse response (IIR) filters with neuron nonlinearity.
We propose a training algorithm capable of learning spatial-temporal patterns by searching for the optimal synapse filter kernels and weights.
arXiv Detail & Related papers (2020-02-19T01:27:39Z)
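The IIR-filter view in the last entry can be illustrated with the simplest case: an exponentially decaying postsynaptic current is exactly a one-pole infinite impulse response filter driven by the input spike train. The time constant, step size, and unit spike amplitude below are illustrative assumptions, not that paper's settings.

```python
import numpy as np

def exp_synapse_iir(spike_train, tau=5.0, dt=1.0):
    """First-order IIR filter realizing an exponential PSP (sketch).

    Discrete-time recurrence: s[t] = a * s[t-1] + x[t], a = exp(-dt/tau).
    Each input spike injects a unit of current that then decays with
    time constant tau, so the synaptic current is an infinite impulse
    response of the spike train.
    """
    a = np.exp(-dt / tau)
    s = np.zeros(len(spike_train))
    prev = 0.0
    for t, x in enumerate(spike_train):
        prev = a * prev + x  # decay previous state, add incoming spike
        s[t] = prev
    return s
```

Higher-order synapse and neuron dynamics (e.g. alpha-shaped PSPs) correspond to cascading such one-pole filters, which is what lets the whole SNN be treated as a network of IIR filters with a spiking nonlinearity.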
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.