Backpropagation with Biologically Plausible Spatio-Temporal Adjustment
For Training Deep Spiking Neural Networks
- URL: http://arxiv.org/abs/2110.08858v1
- Date: Sun, 17 Oct 2021 15:55:51 GMT
- Title: Backpropagation with Biologically Plausible Spatio-Temporal Adjustment
For Training Deep Spiking Neural Networks
- Authors: Guobin Shen, Dongcheng Zhao and Yi Zeng
- Abstract summary: The success of deep learning is inseparable from backpropagation.
First, we propose a biologically plausible spatial adjustment, which rethinks the relationship between membrane potential and spikes.
Secondly, we propose a biologically plausible temporal adjustment making the error propagate across the spikes in the temporal dimension.
- Score: 5.484391472233163
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The spiking neural network (SNN) mimics the information processing operation
in the human brain, represents and transmits information in spike trains
containing rich spatial and temporal information, and shows superior
performance on many cognitive tasks. In addition, the event-driven information
processing enables the energy-efficient implementation on neuromorphic chips.
The success of deep learning is inseparable from backpropagation. Due to the
discrete information transmission, directly applying the backpropagation to the
training of the SNN still has a performance gap compared with the traditional
deep neural networks. Also, a large simulation time is required to achieve
better performance, which results in high latency. To address these problems, we
first propose a biologically plausible spatial adjustment, which rethinks the
relationship between membrane potential and spikes, realizes a reasonable
adjustment of gradients across different time steps, and precisely controls the
backpropagation of the error along the spatial dimension. Second, we propose
a biologically plausible temporal adjustment that lets the error propagate
across spikes in the temporal dimension, overcoming the restriction of
temporal dependency to a single spike period in traditional spiking
neurons. We have verified our algorithm on several datasets, and the
experimental results have shown that our algorithm greatly reduces the network
latency and energy consumption while also improving network performance. We
have achieved state-of-the-art performance on the neuromorphic datasets
N-MNIST, DVS-Gesture, and DVS-CIFAR10. For the static datasets MNIST and
CIFAR10, we have surpassed most traditional SNN backpropagation training
algorithms and achieved relatively superior performance.
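The spatial adjustment above amounts to replacing the non-differentiable spike threshold with a smooth surrogate derivative of the membrane potential during the backward pass. The paper's exact adjustment functions are not reproduced here; the following is a minimal NumPy sketch of a leaky integrate-and-fire (LIF) neuron with a generic sigmoid-shaped surrogate gradient, where `tau`, `v_th`, and `alpha` are illustrative values rather than the authors' settings.

```python
import numpy as np

def lif_forward(x, tau=2.0, v_th=1.0):
    """Simulate a leaky integrate-and-fire neuron over T timesteps.

    x: input currents, shape (T, N).
    Returns binary spike trains and membrane-potential traces, both (T, N).
    """
    T, N = x.shape
    v = np.zeros(N)
    spikes, potentials = [], []
    for t in range(T):
        v = v + (x[t] - v) / tau            # leaky integration of input current
        s = (v >= v_th).astype(x.dtype)     # hard threshold produces spikes
        potentials.append(v.copy())
        spikes.append(s)
        v = v * (1.0 - s)                   # hard reset after a spike
    return np.stack(spikes), np.stack(potentials)

def surrogate_grad(v, v_th=1.0, alpha=1.0):
    """Surrogate derivative ds/dv: a scaled sigmoid derivative standing in
    for the non-differentiable Heaviside step used in the forward pass."""
    sig = 1.0 / (1.0 + np.exp(-alpha * (v - v_th)))
    return alpha * sig * (1.0 - sig)
```

In a backward pass, the stored potentials would be fed through `surrogate_grad` wherever the chain rule requires ds/dv, which is where a spatial adjustment of gradients across time steps can be applied.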
Related papers
- Zero-Shot Temporal Resolution Domain Adaptation for Spiking Neural Networks [3.2366933261812076]
Spiking Neural Networks (SNNs) are biologically-inspired deep neural networks that efficiently extract temporal information.
SNN model parameters are sensitive to temporal resolution, leading to significant performance drops when the temporal resolution of target data at the edge differs from that of the training data.
We propose three novel domain adaptation methods for adapting neuron parameters to account for the change in time resolution without re-training on target time-resolution.
arXiv Detail & Related papers (2024-11-07T14:58:51Z) - Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency on neuromorphic datasets.
We propose a Hybrid Step-wise Distillation (HSD) method tailored for neuromorphic datasets.
arXiv Detail & Related papers (2024-09-19T06:52:34Z) - Efficient and Effective Time-Series Forecasting with Spiking Neural Networks [47.371024581669516]
Spiking neural networks (SNNs) provide a unique pathway for capturing the intricacies of temporal data.
Applying SNNs to time-series forecasting is challenging due to difficulties in effective temporal alignment, complexities in encoding processes, and the absence of standardized guidelines for model selection.
We propose a framework for SNNs in time-series forecasting tasks, leveraging the efficiency of spiking neurons in processing temporal information.
arXiv Detail & Related papers (2024-02-02T16:23:50Z) - Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z) - Improving Stability and Performance of Spiking Neural Networks through
Enhancing Temporal Consistency [9.545711665562715]
Spiking neural networks have gained significant attention due to their brain-like information processing capabilities.
Current training algorithms tend to overlook the differences in output distribution at various timesteps.
We have designed a method to enhance the temporal consistency of outputs at different timesteps.
arXiv Detail & Related papers (2023-05-23T15:50:07Z) - Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
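OTTT replaces full backpropagation through time with updates computed forward in time from a running presynaptic trace, so no spike history needs to be stored. As a rough sketch under that assumption (not the authors' exact rule; `lam` and `lr` are illustrative constants):

```python
import numpy as np

def ottt_step(trace_prev, spikes_in, grad_out, lam=0.5, lr=0.1):
    """One forward-in-time update: decay the presynaptic trace, fold in the
    current spikes, and emit an immediate local weight update."""
    trace = lam * trace_prev + spikes_in     # running eligibility trace
    d_w = -lr * np.outer(grad_out, trace)    # update from the instantaneous error
    return trace, d_w
```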
arXiv Detail & Related papers (2022-10-09T07:47:56Z) - Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural
Networks [3.7384509727711923]
A major challenge for neuromorphic computing is that learning algorithms for traditional artificial neural networks (ANNs) do not transfer directly to spiking neural networks (SNNs).
In this article, we focus on the self-supervised learning problem of optical flow estimation from event-based camera inputs.
We show that the performance of the proposed ANNs and SNNs is on par with that of the current state-of-the-art ANNs trained in a self-supervised manner.
arXiv Detail & Related papers (2021-06-03T14:03:41Z) - BackEISNN: A Deep Spiking Neural Network with Adaptive Self-Feedback and
Balanced Excitatory-Inhibitory Neurons [8.956708722109415]
Spiking neural networks (SNNs) transmit information through discrete spikes and perform well in processing spatio-temporal information.
We propose a deep spiking neural network with adaptive self-feedback and balanced excitatory and inhibitory neurons (BackEISNN).
For the MNIST, FashionMNIST, and N-MNIST datasets, our model has achieved state-of-the-art performance.
arXiv Detail & Related papers (2021-05-27T08:38:31Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.