Bridging the Gap between ANNs and SNNs by Calibrating Offset Spikes
- URL: http://arxiv.org/abs/2302.10685v1
- Date: Tue, 21 Feb 2023 14:10:56 GMT
- Title: Bridging the Gap between ANNs and SNNs by Calibrating Offset Spikes
- Authors: Zecheng Hao, Jianhao Ding, Tong Bu, Tiejun Huang, Zhaofei Yu
- Abstract summary: Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive characteristics of low power consumption and temporal information processing.
ANN-SNN conversion, as the most commonly used training method for applying SNNs, can ensure that converted SNNs achieve comparable performance to ANNs on large-scale datasets.
In this paper, instead of evaluating different conversion errors and then eliminating these errors, we define an offset spike to measure the degree of deviation between actual and desired SNN firing rates.
- Score: 19.85338979292052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) have attracted great attention due to their
distinctive characteristics of low power consumption and temporal information
processing. ANN-SNN conversion, as the most commonly used training method for
applying SNNs, can ensure that converted SNNs achieve comparable performance to
ANNs on large-scale datasets. However, the performance degrades severely under
a small number of time-steps, which hampers the practical application of SNNs
to neuromorphic chips. In this paper, instead of evaluating different
conversion errors and then eliminating these errors, we define an offset spike
to measure the degree of deviation between actual and desired SNN firing rates.
We perform a detailed analysis of the offset spike and note that firing one
additional (or one fewer) spike is the main cause of conversion errors. Based on
this, we propose an optimization strategy based on shifting the initial
membrane potential and we theoretically prove the corresponding optimal
shifting distance for calibrating the spike. In addition, we note that our
method has a unique iterative property that enables further reduction of
conversion errors. The experimental results show that our proposed method
achieves state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet
datasets. For example, we reach a top-1 accuracy of 67.12% on ImageNet when
using 6 time-steps. To the best of our knowledge, this is the first time an
ANN-SNN conversion has been shown to simultaneously achieve high accuracy and
ultralow latency on complex datasets. Code is available at
https://github.com/hzc1208/ANN2SNN_COS.
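To make the abstract's key idea concrete, the following is a minimal sketch of the offset-spike view of conversion error; it is not the authors' implementation (see the repository above for that). It assumes a soft-reset integrate-and-fire neuron with threshold theta, a quantized target of clip(floor(zT/theta + 1/2), 0, T) computed from the average input, and an illustrative input sequence; the calibration loop simply nudges the initial membrane potential until the offset vanishes, whereas the paper proves the optimal shifting distance analytically.
```python
# Minimal sketch (not the authors' implementation) of counting an "offset spike"
# and removing it by shifting the initial membrane potential. The threshold,
# time horizon, input sequence, and shift step are illustrative assumptions.
import numpy as np

def if_spike_count(inputs, theta, v_init):
    """Soft-reset integrate-and-fire neuron driven by a per-step input sequence."""
    v, spikes = v_init, 0
    for x in inputs:
        v += x                      # integrate the weighted input for this step
        if v >= theta:              # fire and reset by subtraction
            spikes += 1
            v -= theta
    return spikes

def desired_count(inputs, theta, T):
    """Spike count the quantized ANN activation asks for (illustrative target)."""
    z = sum(inputs) / T             # average input current, as the ANN sees it
    return int(np.clip(np.floor(z * T / theta + 0.5), 0, T))

theta, T = 1.0, 4
inputs = [0.9, 0.9, 0.9, -0.3]      # same average as a constant 0.6, but uneven in time
v0 = 0.5 * theta                    # common half-threshold initialization

for _ in range(5):                  # a few calibration rounds
    offset = if_spike_count(inputs, theta, v0) - desired_count(inputs, theta, T)
    if offset == 0:                 # actual firing count matches the desired count
        break
    v0 -= np.sign(offset) * theta / (2 * T)   # illustrative nudge toward the target

print("calibrated initial potential:", v0)    # 0.25 here, removing the one extra spike
```
With a constant input and half-threshold initialization the offset is already zero; the uneven input sequence above is what produces the one-spike deviation that the shift then calibrates away.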
Related papers
- Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency on neuromorphic datasets.
We propose the Hybrid Step-wise Distillation (HSD) method, tailored for neuromorphic datasets.
arXiv Detail & Related papers (2024-09-19T06:52:34Z)
- One-Spike SNN: Single-Spike Phase Coding with Base Manipulation for ANN-to-SNN Conversion Loss Minimization [0.41436032949434404]
Because spiking neural networks (SNNs) are event-driven, their energy efficiency is higher than that of conventional artificial neural networks (ANNs).
In this work, we propose a single-spike phase coding as an encoding scheme that minimizes the number of spikes to transfer data between SNN layers.
Without any additional retraining or architectural constraints on ANNs, the proposed conversion method incurs only a small loss of inference accuracy (0.58% on average), as verified on three convolutional neural networks (CNNs) with the CIFAR and ImageNet datasets.
arXiv Detail & Related papers (2024-01-30T02:00:28Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- Reducing ANN-SNN Conversion Error through Residual Membrane Potential [19.85338979292052]
Spiking Neural Networks (SNNs) have received extensive academic attention due to the unique properties of low power consumption and high-speed computing on neuromorphic chips.
In this paper, we make a detailed analysis of unevenness error and divide it into four categories.
We propose an optimization strategy based on residual membrane potential to reduce unevenness error.
arXiv Detail & Related papers (2023-02-04T04:44:31Z)
- Towards Lossless ANN-SNN Conversion under Ultra-Low Latency with Dual-Phase Optimization [30.098268054714048]
Spiking neural networks (SNNs) operating with asynchronous discrete events show higher energy efficiency with sparse computation.
A popular approach for implementing deep SNNs is ANN-SNN conversion, which combines the efficient training of ANNs with the efficient inference of SNNs.
In this paper, we first identify that the performance degradation under ultra-low latency stems from the misrepresentation of the negative or overflow residual membrane potential in SNNs.
Inspired by this, we decompose the conversion error into three parts: quantization error, clipping error, and residual membrane potential representation error; a numeric sketch of this decomposition follows the related-papers list.
arXiv Detail & Related papers (2022-05-16T06:53:14Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is challenging to train SNNs efficiently due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which can achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Optimized Potential Initialization for Low-latency Spiking Neural Networks [21.688402090967497]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive properties of low power consumption, biological plausibility, and adversarial robustness.
The most effective way to train deep SNNs is ANN-to-SNN conversion, which has yielded the best performance on deep network structures and large-scale datasets.
In this paper, we aim to achieve high-performance converted SNNs with extremely low latency (fewer than 32 time-steps).
arXiv Detail & Related papers (2022-02-03T07:15:43Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- A Free Lunch From ANN: Towards Efficient, Accurate Spiking Neural Networks Calibration [11.014383784032084]
Spiking Neural Networks (SNNs) have been recognized as one of the next-generation neural network models.
We show that properly calibrating the parameters during ANN-to-SNN conversion can bring significant improvements.
arXiv Detail & Related papers (2021-06-13T13:20:12Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS)-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
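As a complement, the dual-phase-optimization entry above decomposes the ANN-SNN conversion error into quantization, clipping, and residual membrane potential terms. The numeric sketch below illustrates that decomposition under the same assumptions as the earlier snippet (soft-reset IF neuron, illustrative threshold, horizon, and inputs); it is not the cited paper's code, and the three terms sum to the total error by construction.
```python
# Minimal numeric sketch (assumptions, not the cited paper's code) of splitting
# the ANN-SNN conversion error into clipping, quantization, and residual
# membrane potential terms; the parts sum to the total error by construction.
import numpy as np

theta, T = 1.0, 4                          # firing threshold and number of time-steps
inputs = [0.9, 0.9, 0.9, -0.3]             # uneven per-step inputs with mean 0.6
a = float(np.mean(inputs))                 # the ANN activation being approximated

# Values the conversion can ideally represent
a_clip = min(max(a, 0.0), theta)                          # after clipping to [0, theta]
a_quant = theta / T * np.floor(a_clip * T / theta + 0.5)  # after quantizing to T levels

# What the converted SNN actually delivers: spike rate of a soft-reset IF neuron
v, spikes = 0.5 * theta, 0                 # half-threshold initial potential
for x in inputs:
    v += x
    if v >= theta:
        spikes += 1
        v -= theta
rate = spikes * theta / T

clipping_err = a - a_clip                  # activation lost above the threshold
quantization_err = a_clip - a_quant        # rounding to one of T discrete levels
residual_err = a_quant - rate              # leftover, driven by residual potential / timing
print(clipping_err, quantization_err, residual_err, "total:", a - rate)
```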