Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep
Spiking Neural Networks
- URL: http://arxiv.org/abs/2105.11654v1
- Date: Tue, 25 May 2021 04:15:06 GMT
- Title: Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep
Spiking Neural Networks
- Authors: Jianhao Ding, Zhaofei Yu, Yonghong Tian and Tiejun Huang
- Abstract summary: Spiking Neural Networks (SNNs) are bio-inspired energy-efficient neural networks.
In this paper, we theoretically analyze ANN-SNN conversion and derive sufficient conditions for the optimal conversion.
We show that the proposed method achieves near-lossless conversion with VGG-16, PreActResNet-18, and deeper structures.
- Score: 43.046402416604245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs), as bio-inspired energy-efficient neural
networks, have attracted great attention from researchers and industry. The
most efficient way to train deep SNNs is through ANN-SNN conversion. However,
the conversion usually suffers from accuracy loss and long inference time,
which impede the practical application of SNNs. In this paper, we theoretically
analyze ANN-SNN conversion and derive sufficient conditions for the optimal
conversion. To better correlate ANNs and SNNs and obtain greater accuracy, we propose
a Rate Norm Layer to replace the ReLU activation function in source ANN training,
enabling direct conversion from a trained ANN to an SNN. Moreover, we propose
an optimal fit curve to quantify the fit between the activation values of the source
ANN and the actual firing rates of the target SNN. We show that inference time
can be reduced by optimizing the upper bound of the fit curve in the revised
ANN to achieve fast inference. Our theory can explain existing work on fast
inference and achieve better results. The experimental results show that the
proposed method achieves near-lossless conversion with VGG-16,
PreActResNet-18, and deeper structures. Moreover, it reaches 8.6x faster
inference at 0.265x the energy consumption of the typical method.
The code is available at
https://github.com/DingJianhao/OptSNNConvertion-RNL-RIL.
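The Rate Norm Layer mentioned in the abstract replaces ReLU with an activation whose upper bound is learned, keeping the source ANN's activations in a range that an SNN firing rate can represent. Below is a minimal sketch of such a trainable-threshold clipped activation; the class name, the parameter `theta`, and the exact clipping form are assumptions for illustration, and the linked repository holds the reference implementation.

```python
import torch
import torch.nn as nn


class RateNormLayer(nn.Module):
    """Sketch of a rate-norm-style activation: a ReLU clipped at a trainable
    upper bound, so that ANN activations stay in a range an SNN firing rate
    can represent. The class name, the parameter name `theta`, and the exact
    clipping form are illustrative assumptions, not the paper's exact layer."""

    def __init__(self, init_threshold: float = 1.0):
        super().__init__()
        # Trainable upper bound on the activation (assumed one scalar per layer).
        self.theta = nn.Parameter(torch.tensor(init_threshold))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale into [0, 1], clip, and scale back, so the output is bounded by theta.
        return self.theta * torch.clamp(x / self.theta, min=0.0, max=1.0)


# Usage sketch: drop the layer in wherever ReLU would normally go in the source ANN.
block = nn.Sequential(nn.Linear(128, 128), RateNormLayer())
out = block(torch.randn(4, 128))  # outputs lie in [0, theta]
```

The firing-rate correspondence that makes this bounding useful is sketched after the related-papers list below.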
Related papers
- One-Spike SNN: Single-Spike Phase Coding with Base Manipulation for ANN-to-SNN Conversion Loss Minimization [0.41436032949434404]
As spiking neural networks (SNNs) are event-driven, their energy efficiency is higher than that of conventional artificial neural networks (ANNs).
In this work, we propose a single-spike phase coding as an encoding scheme that minimizes the number of spikes to transfer data between SNN layers.
Without any additional retraining or architectural constraints on ANNs, the proposed conversion method loses little inference accuracy (0.58% on average), as verified on three convolutional neural networks (CNNs) with the CIFAR and ImageNet datasets.
arXiv Detail & Related papers (2024-01-30T02:00:28Z)
- LaSNN: Layer-wise ANN-to-SNN Distillation for Effective and Efficient Training in Deep Spiking Neural Networks [7.0691139514420005]
Spiking Neural Networks (SNNs) are biologically realistic and practically promising for low-power applications because of their event-driven mechanism.
A conversion scheme is proposed to obtain competitive accuracy by mapping trained ANNs' parameters to SNNs with the same structures.
A novel SNN training framework is proposed, namely layer-wise ANN-to-SNN knowledge distillation (LaSNN)
arXiv Detail & Related papers (2023-04-17T03:49:35Z)
- Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks [22.532709609646066]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive properties of low power consumption and fast inference on neuromorphic hardware.
As the most effective method to obtain deep SNNs, ANN-SNN conversion has achieved performance comparable to that of ANNs on large-scale datasets.
In this paper, we theoretically analyze ANN-SNN conversion error and derive the estimated activation function of SNNs.
We prove that the expected conversion error between SNNs and ANNs is zero, enabling us to achieve high-accuracy and ultra-low-latency SNNs.
arXiv Detail & Related papers (2023-03-08T03:04:53Z)
- Bridging the Gap between ANNs and SNNs by Calibrating Offset Spikes [19.85338979292052]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive characteristics of low power consumption and temporal information processing.
ANN-SNN conversion, as the most commonly used training method for applying SNNs, can ensure that converted SNNs achieve comparable performance to ANNs on large-scale datasets.
In this paper, instead of evaluating different conversion errors and then eliminating these errors, we define an offset spike to measure the degree of deviation between actual and desired SNN firing rates.
arXiv Detail & Related papers (2023-02-21T14:10:56Z)
- SNN2ANN: A Fast and Memory-Efficient Training Framework for Spiking Neural Networks [117.56823277328803]
Spiking neural networks are efficient computation models for low-power environments.
We propose an SNN-to-ANN (SNN2ANN) framework to train SNNs in a fast and memory-efficient way.
Experiment results show that our SNN2ANN-based models perform well on the benchmark datasets.
arXiv Detail & Related papers (2022-06-19T16:52:56Z)
- Optimized Potential Initialization for Low-latency Spiking Neural Networks [21.688402090967497]
Spiking Neural Networks (SNNs) have received great attention due to their distinctive properties of low power consumption, biological plausibility, and adversarial robustness.
The most effective way to train deep SNNs is through ANN-to-SNN conversion, which has yielded the best performance on deep network structures and large-scale datasets.
In this paper, we aim to achieve high-performance converted SNNs with extremely low latency (fewer than 32 time-steps)
arXiv Detail & Related papers (2022-02-03T07:15:43Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
- Kernel Based Progressive Distillation for Adder Neural Networks [71.731127378807]
Adder Neural Networks (ANNs), which only contain additions, offer a new way of developing deep neural networks with low energy consumption.
However, there is an accuracy drop when replacing all convolution filters with adder filters.
We present a novel method for further improving the performance of ANNs without increasing the trainable parameters.
arXiv Detail & Related papers (2020-09-28T03:29:19Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
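As a closing illustration of the rate-coding correspondence that the source paper and several of the conversion papers above build on: the firing rate of an integrate-and-fire neuron driven by a constant input approaches the clipped ANN activation as the number of time steps grows. Below is a minimal simulation of that correspondence; the soft-reset rule, threshold, and step count are illustrative assumptions, not taken from any specific paper.

```python
import torch


def if_firing_rate(input_current: torch.Tensor,
                   threshold: float = 1.0,
                   timesteps: int = 128) -> torch.Tensor:
    """Simulate integrate-and-fire neurons driven by a constant input and
    return their firing rates. With a soft reset (subtracting the threshold),
    the rate approaches clamp(input / threshold, 0, 1) as timesteps grows;
    this is the correspondence that rate-based ANN-SNN conversion exploits.
    All parameter choices here are illustrative assumptions."""
    membrane = torch.zeros_like(input_current)
    spike_count = torch.zeros_like(input_current)
    for _ in range(timesteps):
        membrane = membrane + input_current        # integrate
        fired = (membrane >= threshold).float()    # fire when the threshold is reached
        membrane = membrane - fired * threshold    # soft reset
        spike_count = spike_count + fired
    return spike_count / timesteps                 # firing rate in [0, 1]


x = torch.tensor([-0.2, 0.1, 0.5, 0.9, 1.5])
print(if_firing_rate(x))            # close to clamp(x, 0, 1) for long simulations
print(torch.clamp(x, 0.0, 1.0))     # the clipped ANN activation it approximates
```

Raising the number of time steps tightens the match but lengthens inference, which is the accuracy-latency trade-off that the conversion papers above aim to reduce.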