Direct Training via Backpropagation for Ultra-low Latency Spiking Neural
Networks with Multi-threshold
- URL: http://arxiv.org/abs/2112.07426v1
- Date: Thu, 25 Nov 2021 07:04:28 GMT
- Title: Direct Training via Backpropagation for Ultra-low Latency Spiking Neural
Networks with Multi-threshold
- Authors: Changqing Xu, Yi Liu, and Yintang Yang
- Abstract summary: Spiking neural networks (SNNs) can utilize spatio-temporal information and are inherently energy efficient.
We propose a novel training method based on backpropagation (BP) for ultra-low latency (1-2 time steps) SNNs with a multi-threshold model.
Our proposed method achieves an average accuracy of 99.56%, 93.08%, and 87.90% on MNIST, FashionMNIST, and CIFAR10, respectively, with only 2 time steps.
- Score: 3.286515597773624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs) can utilize spatio-temporal information and
are inherently energy efficient, which makes them a good alternative to deep neural
networks (DNNs). Their event-driven information processing allows SNNs to avoid much of
the expensive computation of DNNs and to save considerable energy.
However, high training and inference latency limits the development
of deeper SNNs. SNNs usually need tens or even hundreds of time steps during
training and inference, which not only increases latency but also wastes energy.
To overcome this problem, we propose a novel training method based on backpropagation
(BP) for ultra-low latency (1-2 time steps) SNNs with multi-threshold neurons. To increase
the information capacity of each spike, we introduce the multi-threshold Leaky
Integrate-and-Fire (LIF) model. In our training method, we propose
three approximated derivatives of the spike activity to address the
non-differentiability that makes direct BP-based training of SNNs difficult.
The experimental results show that our proposed method achieves an
average accuracy of 99.56%, 93.08%, and 87.90% on MNIST, FashionMNIST, and
CIFAR10, respectively, with only 2 time steps. For the CIFAR10 dataset, our
method achieves a 1.12% accuracy improvement over previously reported
directly trained SNNs, using fewer time steps.
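To make the multi-threshold idea concrete, below is a minimal PyTorch sketch (our illustration, not the authors' released code): the neuron emits an integer spike count per time step by counting how many threshold multiples the membrane potential crosses, and gradients flow through a rectangular surrogate derivative. The rectangular surrogate, the soft-reset rule, and the parameter values are assumptions made for illustration; the paper itself proposes three different approximated derivatives of the spike activity.

```python
# Minimal sketch of a multi-threshold LIF layer with a surrogate gradient.
# Assumed output rule: o = clip(floor(u / V_th), 0, M), i.e. a graded spike count.
import torch


class MultiThresholdSpike(torch.autograd.Function):
    """Forward: graded spike output from membrane potential u.
    Backward: rectangular surrogate derivative over the firing range (an assumption)."""

    @staticmethod
    def forward(ctx, u, v_th, max_spikes):
        ctx.save_for_backward(u)
        ctx.v_th = v_th
        ctx.max_spikes = max_spikes
        # Number of threshold multiples crossed, capped at max_spikes.
        return torch.clamp(torch.floor(u / v_th), 0, max_spikes)

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        v_th, m = ctx.v_th, ctx.max_spikes
        # Pass gradient only where u lies near the staircase of thresholds.
        surrogate = ((u > 0.5 * v_th) & (u < (m + 0.5) * v_th)).float() / v_th
        return grad_output * surrogate, None, None


class MultiThresholdLIF(torch.nn.Module):
    """Leaky Integrate-and-Fire neuron that can emit several spikes per time step."""

    def __init__(self, tau=2.0, v_th=1.0, max_spikes=3):
        super().__init__()
        self.tau, self.v_th, self.max_spikes = tau, v_th, max_spikes

    def forward(self, x_seq):
        # x_seq: (T, batch, features) input current over T time steps (T = 1 or 2 here).
        u = torch.zeros_like(x_seq[0])
        out = []
        for x_t in x_seq:
            u = u / self.tau + x_t                       # leaky integration
            o_t = MultiThresholdSpike.apply(u, self.v_th, self.max_spikes)
            u = u - o_t * self.v_th                      # soft reset by spikes emitted
            out.append(o_t)
        return torch.stack(out)


# Usage: two time steps, matching the paper's ultra-low-latency setting.
if __name__ == "__main__":
    layer = MultiThresholdLIF()
    x = torch.randn(2, 4, 8, requires_grad=True)   # (T=2, batch=4, features=8)
    spikes = layer(x)
    spikes.sum().backward()                         # gradients flow via the surrogate
    print(spikes.shape, x.grad.shape)
```

At 1-2 time steps there is little room for rate or timing codes, so most of the per-neuron information has to be carried by the graded spike count; this is the intuition behind increasing the information capacity of each spike with multiple thresholds.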
Related papers
- Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency on neuromorphic datasets.
We propose the Hybrid Step-wise Distillation (HSD) method, tailored to neuromorphic datasets.
arXiv Detail & Related papers (2024-09-19T06:52:34Z) - SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural
Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends the recently proposed implicit differentiation on the equilibrium state (IDE) training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z) - A noise based novel strategy for faster SNN training [0.0]
Spiking neural networks (SNNs) are receiving increasing attention due to their low power consumption and strong bio-plausibility.
The two main training methods, artificial neural network (ANN)-to-SNN conversion and spike-based backpropagation (BP), each have their advantages and limitations.
We propose a novel SNN training approach that combines the benefits of the two methods.
arXiv Detail & Related papers (2022-11-10T09:59:04Z) - Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z) - One Timestep is All You Need: Training Spiking Neural Networks with
Ultra Low Latency [8.590196535871343]
Spiking Neural Networks (SNNs) are energy-efficient alternatives to commonly used deep neural networks (DNNs).
High inference latency is a significant hindrance to the edge deployment of deep SNNs.
We propose an Iterative Initialization and Retraining method for SNNs (IIR-SNN) to perform single shot inference in the temporal axis.
arXiv Detail & Related papers (2021-10-01T22:54:59Z) - Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural
Networks [6.011954485684313]
Spiking Neural Networks (SNNs) are a promising alternative to traditional deep learning methods.
However, a major drawback of SNNs is high inference latency.
In this paper, we propose spatial and temporal pruning of SNNs.
arXiv Detail & Related papers (2021-04-26T12:50:58Z) - Deep Time Delay Neural Network for Speech Enhancement with Full Data
Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
arXiv Detail & Related papers (2020-11-11T06:32:37Z) - Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike
Timing Dependent Backpropagation [10.972663738092063]
Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes).
We present a computationally-efficient training technique for deep SNNs.
We achieve top-1 accuracy of 65.19% for ImageNet dataset on SNN with 250 time steps, which is 10X faster compared to converted SNNs with similar accuracy.
arXiv Detail & Related papers (2020-05-04T19:30:43Z) - T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding [26.654533157221973]
This paper introduces the concept of time-to-first-spike coding into deep SNNs, using a kernel-based dynamic threshold and dendrite to overcome the drawbacks of high inference latency and spike counts.
According to our results, the proposed methods can reduce inference latency and number of spikes to 22% and less than 1%, compared to those of burst coding.
arXiv Detail & Related papers (2020-03-26T04:39:12Z)