Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN
- URL: http://arxiv.org/abs/2305.19868v1
- Date: Wed, 31 May 2023 14:04:41 GMT
- Title: Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN
- Authors: Yangfan Hu, Qian Zheng, Xudong Jiang, Gang Pan
- Abstract summary: Spiking neural networks (SNNs) have shown advantages in computation and energy efficiency.
It remains a challenge to train deep SNNs due to the discrete spike function.
This paper proposes Fast-SNN that achieves high performance with low latency.
- Score: 38.18008827711246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs) have shown advantages in computation and
energy efficiency over traditional artificial neural networks (ANNs) thanks to
their event-driven representations. SNNs also replace weight multiplications in
ANNs with additions, which are more energy-efficient and less computationally
intensive. However, it remains a challenge to train deep SNNs due to the
discrete spike function. A popular approach to circumvent this challenge is
ANN-to-SNN conversion. However, due to the quantization error and accumulating
error, it often requires many time steps (i.e., high inference latency) to achieve
high performance, which negates the advantages of SNNs. To this end, this paper
proposes Fast-SNN that achieves high performance with low latency. We
demonstrate the equivalent mapping between temporal quantization in SNNs and
spatial quantization in ANNs, based on which the minimization of the
quantization error is transferred to quantized ANN training. With the
minimization of the quantization error, we show that the sequential error is
the primary cause of the accumulating error, which is addressed by introducing
a signed IF neuron model and a layer-wise fine-tuning mechanism. Our method
achieves state-of-the-art performance and low latency on various computer
vision tasks, including image classification, object detection, and semantic
segmentation. Codes are available at: https://github.com/yangfan-hu/Fast-SNN.
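As a reading aid only, the sketch below illustrates the kind of signed IF neuron the abstract refers to: a unit that integrates input current, emits +1/-1 spikes against a symmetric threshold, and soft-resets by subtracting one threshold per emitted spike. The class name, interface, scalar threshold, and toy usage are assumptions made for illustration, not the authors' implementation; the official code is linked above.

```python
import torch

class SignedIFNeuron:
    """Minimal signed integrate-and-fire neuron with soft reset (illustrative sketch)."""

    def __init__(self, threshold: float = 1.0):
        self.threshold = threshold   # firing threshold (assumed a scalar per layer here)
        self.membrane = None         # membrane potential, created lazily

    def forward(self, input_current: torch.Tensor) -> torch.Tensor:
        if self.membrane is None:
            self.membrane = torch.zeros_like(input_current)
        # Integrate the weighted input for this time step.
        self.membrane = self.membrane + input_current
        # Emit +1 spikes above the positive threshold and -1 spikes below the
        # negative threshold, so later steps can cancel earlier over-firing.
        pos = (self.membrane >= self.threshold).float()
        neg = (self.membrane <= -self.threshold).float()
        spikes = pos - neg
        # Soft reset: subtract one threshold per emitted spike (add it back for
        # negative spikes), keeping the residual potential instead of zeroing it.
        self.membrane = self.membrane - spikes * self.threshold
        return spikes


# Toy usage (hypothetical values): integrate a constant input over T = 4 steps;
# threshold * total spike count gives a coarse, spike-based approximation of 4 * x.
if __name__ == "__main__":
    neuron = SignedIFNeuron(threshold=1.0)
    x = torch.tensor([0.3, 0.8, -0.4])
    total = sum(neuron.forward(x) for _ in range(4))
    print(total * neuron.threshold)
```

Under such a model, the spike count accumulated over T time steps, scaled by the threshold, behaves like a T-level uniformly quantized activation; this is one way to read the temporal-to-spatial quantization correspondence that the abstract exploits to shift quantization-error minimization into quantized ANN training.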
Related papers
- Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency on neuromorphic datasets.
We propose a Hybrid Step-wise Distillation (HSD) method tailored for neuromorphic datasets.
arXiv Detail & Related papers (2024-09-19T06:52:34Z)
- Optimal ANN-SNN Conversion with Group Neurons [39.14228133571838]
Spiking Neural Networks (SNNs) have emerged as a promising third generation of neural networks.
The lack of effective learning algorithms remains a challenge for SNNs.
We introduce a novel type of neuron called Group Neurons (GNs).
arXiv Detail & Related papers (2024-02-29T11:41:12Z)
- When Bio-Inspired Computing meets Deep Learning: Low-Latency, Accurate, & Energy-Efficient Spiking Neural Networks from Artificial Neural Networks [22.721987637571306]
Spiking Neural Networks (SNNs) are demonstrating accuracy comparable to that of convolutional neural networks (CNNs).
ANN-to-SNN conversion has recently gained significant traction in developing deep SNNs with close to state-of-the-art (SOTA) test accuracy on complex image recognition tasks.
We propose a novel ANN-to-SNN conversion framework that requires exponentially fewer time steps than SOTA conversion approaches.
arXiv Detail & Related papers (2023-12-12T00:10:45Z)
- Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks [22.532709609646066]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive properties of low power consumption and fast inference on neuromorphic hardware.
As the most effective method for obtaining deep SNNs, ANN-SNN conversion has achieved performance comparable to that of ANNs on large-scale datasets.
In this paper, we theoretically analyze ANN-SNN conversion error and derive the estimated activation function of SNNs.
We prove that the expected conversion error between SNNs and ANNs is zero, enabling us to achieve high-accuracy and ultra-low-latency SNNs.
arXiv Detail & Related papers (2023-03-08T03:04:53Z)
- Reducing ANN-SNN Conversion Error through Residual Membrane Potential [19.85338979292052]
Spiking Neural Networks (SNNs) have received extensive academic attention due to the unique properties of low power consumption and high-speed computing on neuromorphic chips.
In this paper, we make a detailed analysis of unevenness error and divide it into four categories.
We propose an optimization strategy based on residual membrane potential to reduce unevenness error.
arXiv Detail & Related papers (2023-02-04T04:44:31Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
The Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate binary neural networks (BNNs).
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks [0.0]
Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs).
We propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balance and soft-reset mechanisms.
Our method is promising for deployment on embedded platforms, offering better support for SNNs under limited energy and memory.
arXiv Detail & Related papers (2021-02-28T12:04:22Z)
- FATNN: Fast and Accurate Ternary Neural Networks [89.07796377047619]
Ternary Neural Networks (TNNs) have received much attention due to being potentially orders of magnitude faster in inference, as well as more power efficient, than full-precision counterparts.
In this work, we show that, under some mild constraints, the computational complexity of the ternary inner product can be reduced by a factor of 2.
We elaborately design an implementation-dependent ternary quantization algorithm to mitigate the performance gap.
arXiv Detail & Related papers (2020-08-12T04:26:18Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS)-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)