Spike Calibration: Fast and Accurate Conversion of Spiking Neural
Network for Object Detection and Segmentation
- URL: http://arxiv.org/abs/2207.02702v1
- Date: Wed, 6 Jul 2022 14:18:44 GMT
- Authors: Yang Li, Xiang He, Yiting Dong, Qingqun Kong, Yi Zeng
- Abstract summary: The spiking neural network (SNN) has attracted great attention due to its high biological plausibility and low energy consumption on neuromorphic hardware.
As an efficient method to obtain deep SNNs, the conversion method has exhibited high performance on various large-scale datasets.
- Score: 12.479603018237729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The spiking neural network (SNN) has attracted great attention due to its
high biological plausibility and low energy consumption on neuromorphic
hardware. As an efficient method to obtain deep SNNs, the conversion method has
exhibited high performance on various large-scale datasets. However, it
typically suffers from severe performance degradation and high time delays. In
particular, most previous work focuses on simple classification tasks while
ignoring precise approximation of the ANN output. In this paper, we first
theoretically analyze the conversion errors and derive the harmful effects of
time-varying extremes on synaptic currents. We propose Spike Calibration
(SpiCalib) to eliminate the damage of discrete spikes to the output
distribution, and modify LIPooling to allow lossless conversion of arbitrary
MaxPooling layers. Moreover, Bayesian optimization is proposed to find optimal
normalization parameters and avoid empirical settings. The experimental results
demonstrate state-of-the-art performance on classification, object detection,
and segmentation tasks. To the best of our knowledge, this is the first work to
obtain SNNs comparable to ANNs on all of these tasks simultaneously. Moreover,
we need only 1/50 of the inference time of previous work on the detection task,
and achieve the same performance at 0.492$\times$ the energy consumption of the
ANN on the segmentation task.
Related papers
- Low Latency of object detection for spikng neural network [3.404826786562694]
Spiking Neural Networks are well-suited for edge AI applications due to their binary spike nature.
In this paper, we focus on generating highly accurate and low-latency SNNs specifically for object detection.
arXiv Detail & Related papers (2023-09-27T10:26:19Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- Spiking Neural Network for Ultra-low-latency and High-accurate Object Detection [18.037802439500858]
Spiking Neural Networks (SNNs) have garnered widespread interest for their energy efficiency and brain-inspired event-driven properties.
Recent methods like Spiking-YOLO have extended SNNs to the more challenging task of object detection.
However, they often suffer from high latency and low detection accuracy, making them difficult to deploy on latency-sensitive mobile platforms.
arXiv Detail & Related papers (2023-06-21T04:21:40Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Bridging the Gap between ANNs and SNNs by Calibrating Offset Spikes [19.85338979292052]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive characteristics of low power consumption and temporal information processing.
ANN-SNN conversion, as the most commonly used training method for applying SNNs, can ensure that converted SNNs achieve comparable performance to ANNs on large-scale datasets.
In this paper, instead of evaluating different conversion errors and then eliminating these errors, we define an offset spike to measure the degree of deviation between actual and desired SNN firing rates.
arXiv Detail & Related papers (2023-02-21T14:10:56Z)
- Reducing ANN-SNN Conversion Error through Residual Membrane Potential [19.85338979292052]
Spiking Neural Networks (SNNs) have received extensive academic attention due to the unique properties of low power consumption and high-speed computing on neuromorphic chips.
In this paper, we make a detailed analysis of unevenness error and divide it into four categories.
We propose an optimization strategy based on residual membrane potential to reduce unevenness error.
arXiv Detail & Related papers (2023-02-04T04:44:31Z)
- Fast Exploration of the Impact of Precision Reduction on Spiking Neural Networks [63.614519238823206]
Spiking Neural Networks (SNNs) are a practical choice when the target hardware operates at the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that takes advantage of the capability of such a model to propagate the approximation error.
arXiv Detail & Related papers (2022-11-22T15:08:05Z)
- Towards Lossless ANN-SNN Conversion under Ultra-Low Latency with Dual-Phase Optimization [30.098268054714048]
Spiking neural networks (SNNs) operating with asynchronous discrete events show higher energy efficiency with sparse computation.
A popular approach for implementing deep SNNs is ANN-SNN conversion combining both efficient training of ANNs and efficient inference of SNNs.
In this paper, we first identify that such performance degradation stems from the misrepresentation of the negative or overflow residual membrane potential in SNNs.
Inspired by this, we decompose the conversion error into three parts: quantization error, clipping error, and residual membrane potential representation error.
arXiv Detail & Related papers (2022-05-16T06:53:14Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
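One entry above decomposes the ANN-SNN conversion error into quantization, clipping, and residual membrane potential terms; the first two have a simple closed form for a rate-coded IF neuron. The sketch below is a hypothetical illustration of that decomposition (the function names and parameters are this example's own assumptions, not taken from any listed paper):

```python
def relu(x):
    return max(x, 0.0)

def snn_rate_output(x, theta=1.0, T=8):
    """Closed-form firing-rate output of a rate-coded IF neuron:
    clip the activation to [0, theta], then quantize it down to a
    multiple of theta / T (one spike contributes theta / T)."""
    clipped = min(max(x, 0.0), theta)
    return int(clipped * T / theta) * theta / T

x = 0.73
ann = relu(x)
snn = snn_rate_output(x, theta=1.0, T=8)
clip_err = ann - min(ann, 1.0)    # zero here: x is below the threshold
quant_err = min(ann, 1.0) - snn   # residual from T-step discretization
print(snn, clip_err, quant_err)
```

Increasing T shrinks the quantization term while raising latency; raising theta shrinks the clipping term while coarsening the quantization grid, which is the trade-off that normalization-parameter tuning targets.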
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.