Energy-efficient Knowledge Distillation for Spiking Neural Networks
- URL: http://arxiv.org/abs/2106.07172v2
- Date: Mon, 27 Jun 2022 13:39:31 GMT
- Title: Energy-efficient Knowledge Distillation for Spiking Neural Networks
- Authors: Dongjin Lee, Seongsik Park, Jongwan Kim, Wuhyeong Doh, Sungroh Yoon
- Abstract summary: Spiking neural networks (SNNs) have been gaining interest as energy-efficient alternatives to conventional artificial neural networks (ANNs).
We analyze the performance of the distilled SNN model in terms of accuracy and energy efficiency.
We propose a novel knowledge distillation method with heterogeneous temperature parameters to achieve energy efficiency.
- Score: 23.16389219900427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking neural networks (SNNs) have been gaining interest as energy-efficient
alternatives to conventional artificial neural networks (ANNs) due to their
event-driven computation. Considering the future deployment of SNN models to
constrained neuromorphic devices, many studies have applied techniques
originally used for ANN model compression, such as network quantization,
pruning, and knowledge distillation, to SNNs. Among them, existing works on
knowledge distillation reported accuracy improvements of the student SNN model.
However, an analysis of energy efficiency, which is also an important feature of
SNNs, was absent. In this paper, we thoroughly analyze the performance of the
distilled SNN model in terms of accuracy and energy efficiency. In the process,
we observe a substantial increase in the number of spikes, leading to energy
inefficiency, when using the conventional knowledge distillation methods. Based
on this analysis, to achieve energy efficiency, we propose a novel knowledge
distillation method with heterogeneous temperature parameters. We evaluate our
method on two different datasets and show that the resulting SNN student
satisfies both accuracy improvement and a reduction in the number of spikes. On
the MNIST dataset, our proposed student SNN achieves up to 0.09% higher accuracy
and produces 65% fewer spikes than the student SNN trained with the
conventional knowledge distillation method. We also compare the results with
other SNN compression techniques and training methods.
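The abstract mentions knowledge distillation with heterogeneous temperature parameters but does not give the loss formulation. Below is a minimal sketch, assuming the standard temperature-softened KD loss with separate temperatures for the teacher and student softmaxes; the function name, hyperparameter values, and the omission of the usual T^2 scaling (whose form is ambiguous with two temperatures) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def heterogeneous_temperature_kd_loss(student_logits, teacher_logits, labels,
                                       t_student=1.0, t_teacher=4.0, alpha=0.5):
    """Sketch of a KD loss where the teacher and student softmaxes use
    different (heterogeneous) temperatures.

    t_student, t_teacher, and alpha are illustrative hyperparameters,
    not values reported in the paper.
    """
    # Soft targets from the teacher, softened with the teacher's temperature.
    soft_targets = F.softmax(teacher_logits / t_teacher, dim=1)
    # Student log-probabilities, softened with a (possibly different) temperature.
    log_student = F.log_softmax(student_logits / t_student, dim=1)

    # KL divergence between the softened distributions (the distillation term).
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean")

    # Conventional cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    # Weighted combination of the distillation and cross-entropy terms.
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

Using a lower temperature on the student side than on the teacher side sharpens the student's output distribution, which is one plausible way such a scheme could discourage excess spiking activity; the paper's actual mechanism may differ.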