Efficient ANN-Guided Distillation: Aligning Rate-based Features of Spiking Neural Networks through Hybrid Block-wise Replacement
- URL: http://arxiv.org/abs/2503.16572v1
- Date: Thu, 20 Mar 2025 09:04:38 GMT
- Title: Efficient ANN-Guided Distillation: Aligning Rate-based Features of Spiking Neural Networks through Hybrid Block-wise Replacement
- Authors: Shu Yang, Chengting Yu, Lei Liu, Hanzhi Ma, Aili Wang, Erping Li
- Abstract summary: Spiking Neural Networks (SNNs) have garnered considerable attention as a potential alternative to Artificial Neural Networks (ANNs). Recent studies have highlighted SNNs' potential on large-scale datasets.
- Score: 3.4776100606469096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) have garnered considerable attention as a potential alternative to Artificial Neural Networks (ANNs). Recent studies have highlighted SNNs' potential on large-scale datasets. For SNN training, two main approaches exist: direct training and ANN-to-SNN (ANN2SNN) conversion. To fully leverage existing ANN models in guiding SNN learning, either direct ANN-to-SNN conversion or ANN-SNN distillation training can be employed. In this paper, we propose an ANN-SNN distillation framework from the ANN-to-SNN perspective, designed with a block-wise replacement strategy for ANN-guided learning. By generating intermediate hybrid models that progressively align SNN feature spaces to those of ANN through rate-based features, our framework naturally incorporates rate-based backpropagation as a training method. Our approach achieves results comparable to or better than state-of-the-art SNN distillation methods, showing both training and learning efficiency.
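To make the block-wise replacement idea concrete, here is a minimal PyTorch-style sketch, assuming the ANN teacher and SNN student share a block structure: the first `split` blocks come from the SNN, the remaining blocks from the frozen ANN, and the SNN's rate-based features (spike activity averaged over time) feed the ANN blocks. All names and shapes here are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class HybridBlockwiseModel(nn.Module):
    """Hypothetical hybrid model: SNN blocks up to `split`, frozen ANN blocks after.

    Assumes each SNN block consumes and produces tensors of shape [T, B, ...]
    (time steps first) and that averaging over time yields a rate-based
    feature compatible with the matching ANN block's input.
    """
    def __init__(self, snn_blocks, ann_blocks, split):
        super().__init__()
        self.snn_blocks = nn.ModuleList(snn_blocks[:split])
        self.ann_blocks = nn.ModuleList(ann_blocks[split:])
        for p in self.ann_blocks.parameters():
            p.requires_grad = False  # teacher blocks stay frozen

    def forward(self, x_seq):
        out = x_seq                        # [T, B, ...] encoded input sequence
        for blk in self.snn_blocks:
            out = blk(out)                 # spiking features over T time steps
        rate = out.mean(dim=0)             # rate-based feature: average over time
        for blk in self.ann_blocks:
            rate = blk(rate)               # continue through the frozen ANN
        return rate
```

Sweeping `split` from 1 up to the total number of blocks would yield the progression of intermediate hybrid models described above, with each newly inserted SNN block trained to produce the rate features the downstream ANN blocks expect; because gradients flow through the time-averaged rates, rate-based backpropagation applies directly.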
Related papers
- Bridge the Gap between SNN and ANN for Image Restoration [7.487270862599671]
Currently, neural networks based on the Spiking Neural Network (SNN) framework are beginning to make their mark in the field of image restoration.
We propose a novel distillation technique, called asymmetric framework (ANN-SNN) distillation, in which the teacher is an ANN and the student is an SNN.
Specifically, we leverage the intermediate features (feature maps) learned by the ANN as hints to guide the training process of the SNN; a sketch of this hint-based objective follows below.
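A minimal sketch of such hint-based feature distillation, assuming the SNN student's spiking feature maps are averaged over time into rates and compared against the ANN teacher's feature maps at matching layers; the per-layer `projections` (e.g., 1x1 convolutions to match channel counts) and the MSE objective are common choices assumed here, not details confirmed by the paper:

```python
import torch.nn.functional as F

def hint_distillation_loss(student_feats, teacher_feats, projections):
    """MSE between projected SNN rate features and frozen ANN teacher features.

    student_feats: list of [T, B, C, H, W] spike tensors from the SNN
    teacher_feats: list of [B, C', H, W] feature maps from the ANN
    projections:   per-layer modules mapping student channels to teacher's
    """
    loss = 0.0
    for s, t, proj in zip(student_feats, teacher_feats, projections):
        rate = s.mean(dim=0)               # spikes -> firing-rate feature map
        loss = loss + F.mse_loss(proj(rate), t.detach())
    return loss
```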
arXiv Detail & Related papers (2025-04-02T14:12:06Z)
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- LM-HT SNN: Enhancing the Performance of SNN to ANN Counterpart through Learnable Multi-hierarchical Threshold Model [42.13762207316681]
Spiking Neural Network (SNN) has garnered widespread academic interest for its intrinsic ability to transmit information in a more energy-efficient manner.
Despite previous efforts to optimize the learning algorithm of SNNs through various methods, SNNs still lag behind ANNs in terms of performance.
We propose a novel LM-HT model, an equidistant multi-threshold model that can dynamically regulate the global input current and membrane potential leakage; an illustrative multi-threshold firing rule is sketched below.
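A rough illustration of the equidistant multi-threshold idea (an assumption-laden sketch, not the paper's exact formulation): a neuron whose membrane potential crosses k of the evenly spaced thresholds theta, 2*theta, ..., levels*theta emits a spike count of k in that time step.

```python
import torch

def multi_threshold_fire(v, theta, levels):
    """Equidistant multi-threshold firing rule (illustrative only).

    v:      membrane potential tensor
    theta:  base threshold; thresholds sit at theta, 2*theta, ..., levels*theta
    levels: number of equidistant thresholds
    Returns per-neuron spike counts in [0, levels].
    """
    return torch.clamp(torch.floor(v / theta), min=0.0, max=float(levels))
```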
arXiv Detail & Related papers (2024-02-01T08:10:39Z)
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks; a toy version of this TTFS encoding is sketched below.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
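The essence of TTFS (time-to-first-spike) coding is that larger activations fire earlier, so a single spike time can carry what a rate code would need many spikes to convey. A toy linear encoding, with the normalization by the tensor maximum being an assumption for illustration:

```python
import torch

def ttfs_encode(a, t_max):
    """Map nonnegative ANN activations to first-spike times (illustrative).

    Larger activations fire earlier: t = t_max * (1 - a / a_max).
    Zero activations map to t_max, i.e., effectively no spike.
    """
    a = torch.clamp(a, min=0.0)
    a_max = a.max().clamp(min=1e-8)        # avoid division by zero
    return t_max * (1.0 - a / a_max)
```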
arXiv Detail & Related papers (2023-10-23T14:26:16Z)
- SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking Neural Networks [0.0]
Spiking Neural Networks (SNNs) have attracted recent interest due to their energy efficiency and biological plausibility.
There is no consensus on the best learning algorithm for SNNs.
In this paper, we propose SPENSER, a framework for SNN generation based on DENSER.
arXiv Detail & Related papers (2023-05-18T14:06:37Z)
- Joint A-SNN: Joint Training of Artificial and Spiking Neural Networks via Self-Distillation and Weight Factorization [12.1610509770913]
Spiking Neural Networks (SNNs) mimic the spiking nature of brain neurons.
We propose a joint training framework of ANN and SNN, in which the ANN can guide the SNN's optimization.
Our method consistently outperforms many other state-of-the-art training methods.
arXiv Detail & Related papers (2023-05-03T13:12:17Z)
- LaSNN: Layer-wise ANN-to-SNN Distillation for Effective and Efficient Training in Deep Spiking Neural Networks [7.0691139514420005]
Spiking Neural Networks (SNNs) are biologically realistic and practically promising for low-power applications because of their event-driven mechanism.
A conversion scheme is proposed to obtain competitive accuracy by mapping trained ANNs' parameters to SNNs with the same structures.
A novel SNN training framework is proposed, namely layer-wise ANN-to-SNN knowledge distillation (LaSNN).
arXiv Detail & Related papers (2023-04-17T03:49:35Z)
- A Hybrid ANN-SNN Architecture for Low-Power and Low-Latency Visual Perception [27.144985031646932]
Spiking Neural Networks (SNN) are a class of bio-inspired neural networks that promise to bring low-power and low-latency inference to edge devices.
We show for the task of event-based 2D and 3D human pose estimation that our method consumes 88% less power with only a 4% decrease in performance compared to its fully ANN counterpart.
arXiv Detail & Related papers (2023-03-24T17:38:45Z)
- Hybrid Spiking Neural Network Fine-tuning for Hippocampus Segmentation [3.1247096708403914]
Spiking neural networks (SNNs) have emerged as a low-power alternative to artificial neural networks (ANNs).
In this work, we propose a hybrid SNN training scheme and apply it to segment human hippocampi from magnetic resonance images.
arXiv Detail & Related papers (2023-02-14T20:18:57Z)
- SNN2ANN: A Fast and Memory-Efficient Training Framework for Spiking Neural Networks [117.56823277328803]
Spiking neural networks are efficient computation models for low-power environments.
We propose an SNN-to-ANN (SNN2ANN) framework to train the SNN in a fast and memory-efficient way.
Experiment results show that our SNN2ANN-based models perform well on the benchmark datasets.
arXiv Detail & Related papers (2022-06-19T16:52:56Z)
- Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks [43.046402416604245]
Spiking Neural Networks (SNNs) are bio-inspired energy-efficient neural networks.
In this paper, we theoretically analyze ANN-SNN conversion and derive sufficient conditions for the optimal conversion.
We show that the proposed method achieves near-lossless conversion with VGG-16, PreActResNet-18, and deeper structures; the rate-coding approximation that underlies such conversions is sketched below.
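The standard intuition behind such rate-based conversions: an integrate-and-fire neuron driven by a constant input approximates, through its firing rate, a clipped ReLU of that input. A small self-contained simulation (illustrative, not the paper's derivation):

```python
import torch

def if_neuron_rate(z, v_th, t_steps):
    """Empirical firing rate of an integrate-and-fire neuron with soft reset.

    For constant input z per step, the rate converges to clamp(z / v_th, 0, 1)
    as t_steps grows, which is why an IF-SNN can emulate a ReLU ANN after
    threshold (weight) rebalancing.
    """
    v = torch.zeros_like(z)
    spikes = torch.zeros_like(z)
    for _ in range(t_steps):
        v = v + z
        fired = (v >= v_th).float()
        spikes = spikes + fired
        v = v - fired * v_th               # reset by subtraction (soft reset)
    return spikes / t_steps

# Example: if_neuron_rate(torch.tensor([0.3]), v_th=1.0, t_steps=100)
# returns roughly 0.30, matching clamp(0.3 / 1.0, 0, 1).
```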
arXiv Detail & Related papers (2021-05-25T04:15:06Z)
- Kernel Based Progressive Distillation for Adder Neural Networks [71.731127378807]
Adder Neural Networks (ANNs) which only contain additions bring us a new way of developing deep neural networks with low energy consumption.
There is an accuracy drop when replacing all convolution filters by adder filters.
We present a novel method for further improving the performance of ANNs without increasing the trainable parameters.
arXiv Detail & Related papers (2020-09-28T03:29:19Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Training Deep Spiking Neural Networks [0.0]
Brain-inspired spiking neural networks (SNNs) with neuromorphic hardware may offer orders of magnitude higher energy efficiency.
We show that it is possible to train an SNN with the ResNet50 architecture on the CIFAR100 and Imagenette object recognition datasets.
The trained SNN falls behind the analogous ANN in accuracy but requires several orders of magnitude fewer inference time steps.
arXiv Detail & Related papers (2020-06-08T09:47:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.