Training-free Conversion of Pretrained ANNs to SNNs for Low-Power and High-Performance Applications
- URL: http://arxiv.org/abs/2409.03368v1
- Date: Thu, 5 Sep 2024 09:14:44 GMT
- Title: Training-free Conversion of Pretrained ANNs to SNNs for Low-Power and High-Performance Applications
- Authors: Tong Bu, Maohua Li, Zhaofei Yu
- Abstract summary: Spiking Neural Networks (SNNs) have emerged as a promising substitute for Artificial Neural Networks (ANNs).
Existing supervised learning algorithms for SNNs require significantly more memory and time than their ANN counterparts.
Our approach directly converts pre-trained ANN models into high-performance SNNs without additional training.
- Score: 23.502136316777058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs) have emerged as a promising substitute for Artificial Neural Networks (ANNs) due to their advantages of fast inference and low power consumption. However, the lack of efficient training algorithms has hindered their widespread adoption. Existing supervised learning algorithms for SNNs require significantly more memory and time than their ANN counterparts, and even commonly used ANN-SNN conversion methods require re-training the ANN to improve conversion efficiency, incurring additional computational cost. To address these challenges, we propose a novel training-free ANN-SNN conversion pipeline. Our approach directly converts pre-trained ANN models into high-performance SNNs without additional training. The conversion pipeline includes a local-learning-based threshold balancing algorithm, which enables efficient calculation of optimal thresholds and fine-grained, channel-wise adjustment of threshold values. We demonstrate the scalability of our framework across three typical computer vision tasks: image classification, semantic segmentation, and object detection, showing its applicability to both classification and regression problems. Moreover, we evaluate the energy consumption of the converted SNNs, demonstrating their clear low-power advantage over conventional ANNs. Our training-free algorithm outperforms existing methods, highlighting its practical applicability and efficiency. This approach simplifies the deployment of SNNs by leveraging open-source pre-trained ANN models and neuromorphic hardware, enabling fast, low-power inference with negligible performance degradation.
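As a rough sketch of what threshold balancing with channel-wise scaling can look like, the snippet below sets each channel's firing threshold from a high percentile of activations gathered on a small calibration set. This is a common conversion recipe assumed here for illustration, not the authors' exact local-learning algorithm, and every name in it (`calibrate_thresholds`, the percentile choice) is hypothetical.

```python
import torch

@torch.no_grad()
def calibrate_thresholds(model, calib_loader, percentile=99.9):
    """Hypothetical sketch: derive one firing threshold per channel from
    activation statistics collected on a small calibration set."""
    stats = {}  # layer name -> list of (C, -1) activation slabs

    def make_hook(name):
        def hook(module, inp, out):
            # Move the channel axis first and flatten everything else.
            flat = out.transpose(0, 1).reshape(out.size(1), -1)
            stats.setdefault(name, []).append(flat)
        return hook

    handles = [mod.register_forward_hook(make_hook(name))
               for name, mod in model.named_modules()
               if isinstance(mod, torch.nn.ReLU)]
    for x, _ in calib_loader:
        model(x)
    for h in handles:
        h.remove()

    # A high percentile of each channel's observed activations becomes
    # that channel's threshold (the "channel-wise scaling" knob).
    return {name: torch.quantile(torch.cat(slabs, dim=1),
                                 percentile / 100.0, dim=1)
            for name, slabs in stats.items()}
```

The returned per-channel thresholds would then parameterize the integrate-and-fire neurons that replace each ReLU; using a percentile instead of the raw maximum trades a little clipping error for much lower inference latency.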
Related papers
- Differential Coding for Training-Free ANN-to-SNN Conversion [45.70141988713627]
Spiking Neural Networks (SNNs) exhibit significant potential due to their low energy consumption.
Converting Artificial Neural Networks (ANNs) to SNNs is an efficient way to achieve high-performance SNNs.
This article introduces differential coding for ANN-to-SNN conversion, a novel coding scheme that reduces spike counts and energy consumption.
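The summary above does not spell the scheme out, so the following is only a generic sketch of the differential-coding idea: transmit signed spikes for changes in a value rather than the value itself. The function name and threshold are illustrative, not taken from the paper.

```python
import torch

def differential_encode(values, prev, threshold=1.0):
    """Generic sketch of differential coding: emit signed spikes for the
    change since the last transmitted value, not the value itself."""
    delta = values - prev
    spikes = torch.round(delta / threshold)   # signed spike counts per neuron
    prev = prev + spikes * threshold          # value the receiver now holds
    return spikes, prev
```

A receiver that accumulates `spikes * threshold` recovers the signal, so the spike count tracks how much activations move between time-steps rather than their magnitude, which is where the energy saving comes from.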
arXiv Detail & Related papers (2025-03-01T02:17:35Z)
- Adaptive Calibration: A Unified Conversion Framework of Spiking Neural Network [1.5215973379400674]
Spiking Neural Networks (SNNs) are seen as an energy-efficient alternative to traditional Artificial Neural Networks (ANNs).
We present a unified training-free conversion framework that significantly enhances both the performance and efficiency of converted SNNs.
arXiv Detail & Related papers (2024-12-18T09:38:54Z)
- Faster and Stronger: When ANN-SNN Conversion Meets Parallel Spiking Calculation [45.67180051148674]
The Spiking Neural Network (SNN), a brain-inspired and energy-efficient network, faces the pivotal challenge of finding a suitable learning framework.
We propose a novel parallel conversion learning framework that establishes a mathematical mapping between the time-steps of the parallel spiking neurons.
arXiv Detail & Related papers (2024-12-18T08:37:13Z)
- Converting High-Performance and Low-Latency SNNs through Explicit Modelling of Residual Error in ANNs [27.46147049872907]
Spiking neural networks (SNNs) have garnered interest due to their energy efficiency and superior effectiveness on neuromorphic chips.
One of the mainstream approaches to implementing deep SNNs is the ANN-SNN conversion.
We propose a new approach based on explicit modeling of residual errors as additive noise.
arXiv Detail & Related papers (2024-04-26T14:50:46Z)
- Adaptive Calibration: A Unified Conversion Framework of Spiking Neural Networks [1.632439547798896]
Spiking Neural Networks (SNNs) have emerged as a promising energy-efficient alternative to traditional Artificial Neural Networks (ANNs).
This paper focuses on addressing the dual objectives of enhancing the performance and efficiency of SNNs through the established SNN conversion framework.
arXiv Detail & Related papers (2023-11-24T03:43:59Z)
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
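For context, TTFS (time-to-first-spike) coding gives each neuron a single spike whose timing encodes its activation: larger values fire earlier. A minimal sketch of this standard encoding, with illustrative names not taken from the paper:

```python
import torch

def ttfs_encode(activations, num_steps=32):
    """Standard TTFS idea: one spike per neuron; larger (normalized,
    in [0, 1]) activations fire earlier."""
    a = activations.clamp(0, 1)
    # Activation 1.0 fires at step 0; activation 0.0 at the last step.
    return torch.round((1.0 - a) * (num_steps - 1)).long()
```

With at most one spike per neuron, total spike counts stay very small, which is what makes TTFS coding attractive on power-constrained edge platforms.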
arXiv Detail & Related papers (2023-10-23T14:26:16Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
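For readers unfamiliar with the setup: an INN layer is defined implicitly, e.g. by a fixed point z = phi(Wz + Ux + b), and interval reachability pushes elementwise lower/upper bounds through such maps. The sketch below shows only the basic interval-arithmetic building block for an affine map and a monotone activation; it is background, not the paper's comparison method.

```python
import torch

def interval_affine(lo, hi, W, b):
    """Push an elementwise interval [lo, hi] through x -> W @ x + b:
    positive weights preserve the bound order, negative weights swap it."""
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_monotone(lo, hi, act=torch.relu):
    """Any elementwise monotone activation maps interval endpoints to
    interval endpoints."""
    return act(lo), act(hi)
```

Iterating such bounds to a fixed point is what separates the implicit (INN) analysis from one-pass bound propagation through a feedforward network.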
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Optimized Potential Initialization for Low-latency Spiking Neural Networks [21.688402090967497]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive properties of low power consumption, biological plausibility, and adversarial robustness.
The most effective way to train deep SNNs is through ANN-to-SNN conversion, which has yielded the best performance on deep network structures and large-scale datasets.
In this paper, we aim to achieve high-performance converted SNNs with extremely low latency (fewer than 32 time-steps).
arXiv Detail & Related papers (2022-02-03T07:15:43Z)
- Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
State-of-the-art training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
arXiv Detail & Related papers (2021-12-22T18:47:45Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
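The sensing model behind such cameras is simple to sketch: a pixel emits an event whenever its log-brightness drifts past a contrast threshold. The snippet below is the generic DVS event-generation model with illustrative names, not code from the paper.

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Generic event-camera model: each pixel emits a +1/-1 event whenever
    its log-brightness drifts more than `threshold` from the value at
    which it last fired. `frames`: (T, H, W) array of intensities."""
    ref = np.log(frames[0] + 1e-6)       # per-pixel reference brightness
    events = []                          # (t, y, x, polarity) tuples
    for t, frame in enumerate(frames[1:], start=1):
        logf = np.log(frame + 1e-6)
        fired = np.abs(logf - ref) >= threshold
        pol = np.sign(logf - ref)[fired].astype(int)
        ys, xs = np.nonzero(fired)
        events.extend(zip([t] * len(ys), ys, xs, pol))
        ref[fired] = logf[fired]         # reset reference where events fired
    return events
```

The resulting sparse stream of (t, y, x, polarity) tuples, rather than dense frames, is what event-based recognition networks consume.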
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks [0.0]
Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs).
We propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balance and soft-reset mechanisms.
Our method is promising for deployment on embedded platforms with limited energy and memory that better support SNNs.
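Both named mechanisms are standard in the conversion literature and easy to illustrate: threshold balancing matches each layer's firing threshold to the activation range it must represent, and soft reset subtracts the threshold on a spike instead of zeroing the membrane, so residual charge is kept. A minimal sketch (not the paper's code):

```python
import torch

class SoftResetIF:
    """Integrate-and-fire neuron with a balanced threshold and soft reset."""
    def __init__(self, threshold):
        self.theta = threshold          # e.g. the layer's max ANN activation
        self.v = torch.tensor(0.0)      # membrane potential

    def step(self, current):
        self.v = self.v + current
        spike = (self.v >= self.theta).float()
        # Soft reset: subtract theta rather than resetting to zero, so any
        # overshoot carries into the next time-step instead of being lost.
        self.v = self.v - spike * self.theta
        return spike
```

Over T time-steps the spike rate approximates the input current divided by theta, which is why balancing theta against the ANN's activation range controls the conversion error.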
arXiv Detail & Related papers (2021-02-28T12:04:22Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
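The addition-only point is worth making concrete: with binary spike inputs, synaptic integration is a conditional add of weights, with no multiplications. A generic sketch, not tied to this paper:

```python
import torch

def spiking_matvec(weights, spikes):
    """Accumulate-only synaptic integration: for a binary spike vector,
    W @ s reduces to summing the weight columns of the neurons that fired,
    which hardware can realize with additions alone."""
    fired = spikes.bool()
    return weights[:, fired].sum(dim=1)
```

For a 0/1 spike vector this equals `weights @ spikes`, but it needs only additions, which are substantially cheaper than multiply-and-accumulate operations in hardware.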
arXiv Detail & Related papers (2020-06-03T15:55:53Z)