Distilling Spikes: Knowledge Distillation in Spiking Neural Networks
- URL: http://arxiv.org/abs/2005.00288v1
- Date: Fri, 1 May 2020 09:36:32 GMT
- Title: Distilling Spikes: Knowledge Distillation in Spiking Neural Networks
- Authors: Ravi Kumar Kushawaha, Saurabh Kumar, Biplab Banerjee, Rajbabu Velmurugan
- Abstract summary: Spiking Neural Networks (SNNs) are energy-efficient computing architectures that exchange spikes to process information.
We propose techniques for knowledge distillation in spiking neural networks for the task of image classification.
Our approach is expected to open up new avenues for deploying high-performing large SNN models on resource-constrained hardware platforms.
- Score: 22.331135708302586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) are energy-efficient computing architectures
that exchange spikes to process information, unlike classical Artificial
Neural Networks (ANNs). Due to this, SNNs are better suited for real-life
deployments. However, similar to ANNs, SNNs also benefit from deeper
architectures to obtain improved performance. Furthermore, like deep ANNs,
the memory, compute, and power requirements of SNNs also increase with model
size, and model compression becomes a necessity. Knowledge distillation is a
model compression technique that enables transferring the learning of a large
machine learning model to a smaller model with minimal loss in performance. In
this paper, we propose techniques for knowledge distillation in spiking neural
networks for the task of image classification. We present ways to distill
spikes from a larger SNN, also called the teacher network, to a smaller one,
also called the student network, while minimally impacting the classification
accuracy. We demonstrate the effectiveness of the proposed method with detailed
experiments on three standard datasets while proposing novel distillation
methodologies and loss functions. We also present a multi-stage knowledge
distillation technique for SNNs using an intermediate network to obtain higher
performance from the student network. Our approach is expected to open up new
avenues for deploying high-performing large SNN models on resource-constrained
hardware platforms.
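As a rough illustration of how distillation carries over to spike-based outputs, the sketch below shows one common way to form a soft-target loss from spike trains: average the output spikes over time to obtain firing rates, then apply the usual temperature-softened teacher/student loss alongside the hard-label cross-entropy. This is a generic, rate-based sketch in PyTorch, not the specific distillation losses proposed in the paper; the tensor shapes, `temperature`, and `alpha` are assumptions made for the example, and the paper's multi-stage variant would apply such a loss teacher-to-intermediate and then intermediate-to-student.

```python
# Minimal sketch of rate-based knowledge distillation between two SNNs,
# assuming output spike trains of shape [T, B, C] (time, batch, classes).
# Generic Hinton-style soft-target loss; the paper's exact losses may differ.
import torch
import torch.nn.functional as F

def spike_distillation_loss(student_spikes, teacher_spikes, labels,
                            temperature=4.0, alpha=0.5):
    """Combine a soft-target KD term with the usual cross-entropy term.

    student_spikes, teacher_spikes: [T, B, C] binary spike tensors.
    labels: [B] ground-truth class indices.
    temperature, alpha: illustrative hyperparameters, not taken from the paper.
    """
    # Convert spike trains to firing rates (spike counts averaged over time),
    # which serve as surrogate logits for both networks.
    student_rates = student_spikes.float().mean(dim=0)   # [B, C]
    teacher_rates = teacher_spikes.float().mean(dim=0)   # [B, C]

    # Soft-target term: KL divergence between temperature-softened distributions.
    kd_term = F.kl_div(
        F.log_softmax(student_rates / temperature, dim=1),
        F.softmax(teacher_rates / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard-target term: standard cross-entropy against the true labels.
    ce_term = F.cross_entropy(student_rates, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```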
Related papers
- BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation [20.34272550256856]
Spiking neural networks (SNNs) mimic biological neural systems to convey information via discrete spikes.
Our work achieves state-of-the-art performance for training SNNs on both static and neuromorphic datasets.
arXiv Detail & Related papers (2024-07-12T08:17:24Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Constructing Deep Spiking Neural Networks from Artificial Neural Networks with Knowledge Distillation [20.487853773309563]
Spiking neural networks (SNNs) are well known as brain-inspired models with high computing efficiency.
We propose a novel method of constructing deep SNN models with knowledge distillation (KD).
arXiv Detail & Related papers (2023-04-12T05:57:21Z)
- Skip Connections in Spiking Neural Networks: An Analysis of Their Effect on Network Training [0.8602553195689513]
Spiking neural networks (SNNs) have gained attention as a promising alternative to traditional artificial neural networks (ANNs).
In this paper, we study the impact of skip connections on SNNs and propose a hyperparameter optimization technique that adapts models from ANN to SNN.
We demonstrate that optimizing the position, type, and number of skip connections can significantly improve the accuracy and efficiency of SNNs.
arXiv Detail & Related papers (2023-03-23T07:57:32Z)
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy-efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Mining the Weights Knowledge for Optimizing Neural Network Structures [1.995792341399967]
We introduce a switcher neural network (SNN) that uses as inputs the weights of a task-specific neural network (called TNN for short).
By mining the knowledge contained in the weights, the SNN outputs scaling factors for turning off neurons in the TNN.
In terms of accuracy, we outperform baseline networks and other structure learning methods stably and significantly.
arXiv Detail & Related papers (2021-10-11T05:20:56Z)
- Energy-efficient Knowledge Distillation for Spiking Neural Networks [23.16389219900427]
Spiking neural networks (SNNs) have been gaining interest as energy-efficient alternatives to conventional artificial neural networks (ANNs).
We analyze the performance of the distilled SNN model in terms of accuracy and energy efficiency.
We propose a novel knowledge distillation method with heterogeneous temperature parameters to achieve energy efficiency.
arXiv Detail & Related papers (2021-06-14T05:42:05Z)
- Pruning of Deep Spiking Neural Networks through Gradient Rewiring [41.64961999525415]
Spiking Neural Networks (SNNs) have attracted great attention due to their biological plausibility and high energy efficiency on neuromorphic chips.
Most existing methods directly apply pruning approaches designed for artificial neural networks (ANNs) to SNNs, which ignores the differences between ANNs and SNNs.
We propose gradient rewiring (Grad R), a joint learning algorithm of connectivity and weight for SNNs, that enables us to seamlessly optimize network structure without retraining.
arXiv Detail & Related papers (2021-05-11T10:05:53Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.