Energy-efficient Knowledge Distillation for Spiking Neural Networks
- URL: http://arxiv.org/abs/2106.07172v2
- Date: Mon, 27 Jun 2022 13:39:31 GMT
- Title: Energy-efficient Knowledge Distillation for Spiking Neural Networks
- Authors: Dongjin Lee, Seongsik Park, Jongwan Kim, Wuhyeong Doh, Sungroh Yoon
- Abstract summary: Spiking neural networks (SNNs) have been gaining interest as energy-efficient alternatives to conventional artificial neural networks (ANNs).
We analyze the performance of the distilled SNN model in terms of accuracy and energy efficiency.
We propose a novel knowledge distillation method with heterogeneous temperature parameters to achieve energy efficiency.
- Score: 23.16389219900427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking neural networks (SNNs) have been gaining interest as energy-efficient
alternatives to conventional artificial neural networks (ANNs) due to their
event-driven computation. Considering the future deployment of SNN models to
constrained neuromorphic devices, many studies have applied techniques
originally used for ANN model compression, such as network quantization,
pruning, and knowledge distillation, to SNNs. Among them, existing works on
knowledge distillation reported accuracy improvements for the student SNN model.
However, an analysis of energy efficiency, which is also a key property of
SNNs, was absent. In this paper, we thoroughly analyze the performance of the
distilled SNN model in terms of accuracy and energy efficiency. In the process,
we observe a substantial increase in the number of spikes, leading to energy
inefficiency, when using the conventional knowledge distillation methods. Based
on this analysis, to achieve energy efficiency, we propose a novel knowledge
distillation method with heterogeneous temperature parameters. We evaluate our
method on two different datasets and show that the resulting SNN student
achieves both an accuracy improvement and a reduction in the number of spikes. On
the MNIST dataset, our proposed student SNN achieves up to 0.09% higher accuracy
and produces 65% fewer spikes than the student SNN trained with the
conventional knowledge distillation method. We also compare the results with
other SNN compression techniques and training methods.
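The core idea stated in the abstract is that the teacher's and student's outputs are softened with different temperatures during distillation. Below is a minimal sketch of what such a heterogeneous-temperature KD loss could look like; the particular values of T_student, T_teacher, and alpha, and the omission of the usual T^2 scaling, are our illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a KD loss where teacher and student use different temperatures.
import torch.nn.functional as F

def hetero_temp_kd_loss(student_logits, teacher_logits, labels,
                        T_student=2.0, T_teacher=4.0, alpha=0.5):
    # Hard-label term: standard cross-entropy on the raw student logits.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: each side is softened with its *own* temperature
    # before the KL divergence is taken.
    log_p_student = F.log_softmax(student_logits / T_student, dim=1)
    p_teacher = F.softmax(teacher_logits / T_teacher, dim=1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return alpha * ce + (1.0 - alpha) * kd
```

Intuitively, a higher teacher temperature flattens the teacher's distribution, which can discourage the student from over-firing on non-target classes; whether this is the paper's exact mechanism for spike reduction is not confirmed by the abstract.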
Related papers
- BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation [20.34272550256856]
Spiking neural networks (SNNs) mimic the biological neural system, conveying information via discrete spikes.
Our work achieves state-of-the-art performance for training SNNs on both static and neuromorphic datasets.
arXiv Detail & Related papers (2024-07-12T08:17:24Z)
- Efficient and Effective Time-Series Forecasting with Spiking Neural Networks [47.371024581669516]
Spiking neural networks (SNNs) provide a unique pathway for capturing the intricacies of temporal data.
Applying SNNs to time-series forecasting is challenging due to difficulties in effective temporal alignment, complexities in encoding processes, and the absence of standardized guidelines for model selection.
We propose a framework for SNNs in time-series forecasting tasks, leveraging the efficiency of spiking neurons in processing temporal information.
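As a concrete illustration of the encoding difficulty mentioned above, the sketch below shows one common, generic way to turn a normalized time series into spike trains: rate coding via Bernoulli sampling. It is not the encoding scheme proposed in the paper.

```python
# Rate coding: each value fires with probability equal to its magnitude.
import torch

def rate_encode(series, n_steps=8):
    """series: (T,) tensor with values in [0, 1] -> spikes of shape (n_steps, T)."""
    probs = series.clamp(0.0, 1.0).expand(n_steps, -1)
    return torch.bernoulli(probs)

# Example: encode a 4-point series over 8 time steps.
spikes = rate_encode(torch.tensor([0.1, 0.9, 0.5, 0.0]))
```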
arXiv Detail & Related papers (2024-02-02T16:23:50Z)
- Fully Spiking Denoising Diffusion Implicit Models [61.32076130121347]
Spiking neural networks (SNNs) have garnered considerable attention owing to their ability to run on neuromorphic devices at very high speed.
We propose a novel approach, the fully spiking denoising diffusion implicit model (FSDDIM), to construct a diffusion model within SNNs.
We demonstrate that the proposed method outperforms the state-of-the-art fully spiking generative model.
arXiv Detail & Related papers (2023-12-04T09:07:09Z)
- Artificial to Spiking Neural Networks Conversion for Scientific Machine Learning [24.799635365988905]
We introduce a method to convert Physics-Informed Neural Networks (PINNs) to Spiking Neural Networks (SNNs).
SNNs are expected to offer higher energy efficiency than traditional Artificial Neural Networks (ANNs).
arXiv Detail & Related papers (2023-08-31T00:21:27Z)
- Skip Connections in Spiking Neural Networks: An Analysis of Their Effect on Network Training [0.8602553195689513]
Spiking neural networks (SNNs) have gained attention as a promising alternative to traditional artificial neural networks (ANNs).
In this paper, we study the impact of skip connections on SNNs and propose a hyperparameter optimization technique that adapts models from ANNs to SNNs.
We demonstrate that optimizing the position, type, and number of skip connections can significantly improve the accuracy and efficiency of SNNs.
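For illustration, here is a minimal sketch of an additive skip connection around a convolution feeding a leaky integrate-and-fire (LIF) neuron. The hand-rolled LIF and the placement of the skip are our assumptions; the paper optimizes the position, type, and number of skips automatically, which is not reproduced here.

```python
import torch
import torch.nn as nn

def lif_step(current, mem, beta=0.9, threshold=1.0):
    """One leaky integrate-and-fire step with a hard reset."""
    mem = beta * mem + current
    spk = (mem >= threshold).float()
    return spk, mem * (1.0 - spk)  # reset membrane wherever a spike fired

class SpikingSkipBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, spk_in, mem):
        # Skip connection: input spikes are added to the conv output
        # before the spiking non-linearity.
        return lif_step(self.conv(spk_in) + spk_in, mem)
```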
arXiv Detail & Related papers (2023-03-23T07:57:32Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends a recently proposed training method.
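To unpack "implicit differentiation on the equilibrium state": if the feedback network settles to a fixed point a* = f(a*), gradients can be obtained by solving a vector-Jacobian fixed-point equation rather than backpropagating through the forward dynamics. The sketch below shows this generic implicit-differentiation step; it is our simplified reading, and SPIDE's purely spike-based computation of these quantities is not shown.

```python
# Backward pass at an equilibrium a* = f(a*): iterate g = dL/da* + g (df/da).
import torch

def equilibrium_grad(f, a_star, loss_grad, n_steps=50):
    """Approximate loss_grad @ (I - df/da)^{-1} by fixed-point iteration."""
    a_star = a_star.detach().requires_grad_(True)
    fa = f(a_star)  # re-evaluate f once at the equilibrium point
    g = loss_grad
    for _ in range(n_steps):
        vjp, = torch.autograd.grad(fa, a_star, grad_outputs=g, retain_graph=True)
        g = loss_grad + vjp  # converges when the Jacobian is contractive
    return g  # feed this into the parameter backward pass of f
```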
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- tinySNN: Towards Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Larger Spiking Neural Network (SNN) models are typically favorable as they can offer higher accuracy.
However, employing such models on resource- and energy-constrained embedded platforms is inefficient.
We present a tinySNN framework that optimizes the memory and energy requirements of SNN processing.
arXiv Detail & Related papers (2022-06-17T09:40:40Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Training SNNs efficiently is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which can achieve high performance.
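The non-differentiability in question comes from the Heaviside spike function, whose true derivative is zero almost everywhere. The sketch below shows the standard surrogate-gradient workaround for reference; note that DSR itself takes a different route, differentiating through a spike-representation mapping rather than relying on surrogate gradients.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    @staticmethod
    def forward(ctx, mem, threshold=1.0):
        ctx.save_for_backward(mem)
        ctx.threshold = threshold
        return (mem >= threshold).float()  # exact hard spike going forward

    @staticmethod
    def backward(ctx, grad_output):
        mem, = ctx.saved_tensors
        # Backward: replace the zero derivative with a sigmoid-shaped bump.
        s = torch.sigmoid(5.0 * (mem - ctx.threshold))
        return grad_output * 5.0 * s * (1.0 - s), None
```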
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- On Self-Distilling Graph Neural Network [64.00508355508106]
We propose the first teacher-free knowledge distillation method for GNNs, termed GNN Self-Distillation (GNN-SD).
The method is built upon the proposed neighborhood discrepancy rate (NDR), which quantifies the non-smoothness of the embedded graph in an efficient way.
We also summarize a generic GNN-SD framework that could be exploited to induce other distillation strategies.
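A rough sketch of a neighborhood-discrepancy-style measure follows, based only on our reading of the summary (each node's embedding compared against the average of its neighbors'); the paper's exact NDR definition may differ.

```python
import torch
import torch.nn.functional as F

def neighborhood_discrepancy(h, adj):
    """h: (N, d) node embeddings; adj: (N, N) binary adjacency matrix."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    neighbor_mean = adj @ h / deg  # mean embedding over each node's neighbors
    return 1.0 - F.cosine_similarity(h, neighbor_mean, dim=1)  # higher = less smooth
```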
arXiv Detail & Related papers (2020-11-04T12:29:33Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
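For context, here is a minimal sketch of the classic data-based weight-normalization step used in many ANN-to-SNN conversions, which rescales each layer so that ReLU activations map onto spiking firing rates. The paper's progressive, layer-wise tandem learning scheme is more involved and is not reproduced here.

```python
import torch

@torch.no_grad()
def normalize_weights(layers, max_acts):
    """layers: list of nn.Linear/nn.Conv2d modules, in order;
    max_acts[i]: maximum ReLU activation of layer i recorded on data."""
    prev_max = 1.0  # assume network inputs are normalized to [0, 1]
    for layer, cur_max in zip(layers, max_acts):
        layer.weight.mul_(prev_max / cur_max)
        if layer.bias is not None:
            layer.bias.div_(cur_max)
        prev_max = cur_max
```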
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Distilling Spikes: Knowledge Distillation in Spiking Neural Networks [22.331135708302586]
Spiking Neural Networks (SNNs) are energy-efficient computing architectures that exchange spikes to process information.
We propose techniques for knowledge distillation in spiking neural networks for the task of image classification.
Our approach is expected to open up new avenues for deploying high performing large SNN models on resource-constrained hardware platforms.
arXiv Detail & Related papers (2020-05-01T09:36:32Z)