Lottery Ticket Hypothesis for Spiking Neural Networks
- URL: http://arxiv.org/abs/2207.01382v1
- Date: Mon, 4 Jul 2022 13:02:58 GMT
- Title: Lottery Ticket Hypothesis for Spiking Neural Networks
- Authors: Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai
Yin, and Priyadarshini Panda
- Abstract summary: Spiking Neural Networks (SNNs) have emerged as a new generation of low-power deep neural networks where binary spikes convey information across multiple timesteps.
We propose the Early-Time (ET) ticket, which finds the important weight connectivity from a smaller number of timesteps.
Our experimental results show that the proposed ET ticket reduces search time by up to 38% compared to the IMP and EB methods.
- Score: 9.494176507095176
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs) have recently emerged as a new generation of
low-power deep neural networks where binary spikes convey information across
multiple timesteps. Pruning is highly important for SNNs as they are deployed
on resource-constrained mobile/edge devices. Previous SNN pruning works focus
on shallow SNNs (2~6 layers); however, state-of-the-art SNN works propose
deeper SNNs (>16 layers), which are difficult to handle with existing pruning
methods. To scale pruning up to deep SNNs, we investigate the Lottery Ticket
Hypothesis (LTH), which states that dense
networks contain smaller subnetworks (i.e., winning tickets) that achieve
comparable performance to the dense networks. Our studies on LTH reveal that
the winning tickets consistently exist in deep SNNs across various datasets and
architectures, providing up to 97% sparsity without significant performance
degradation. However, the iterative search process of LTH incurs a huge
training cost when combined with the multiple timesteps of SNNs. To alleviate
this heavy search cost, we propose the Early-Time (ET) ticket, which finds the
important weight connectivity from a smaller number of timesteps.
The proposed ET ticket can be seamlessly combined with common pruning
techniques for finding winning tickets, such as Iterative Magnitude Pruning
(IMP) and Early-Bird (EB) tickets. Our experimental results show that the
proposed ET ticket reduces search time by up to 38% compared to the IMP and EB
methods.
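As a concrete illustration of how an ET-style ticket search could be wired together, the sketch below trains a toy PyTorch SNN for a few iterations at a reduced timestep count, derives a global magnitude-pruning mask (a one-shot simplification of the iterative IMP loop), and applies the mask before retraining at the full timestep count. Everything here (SimpleSNN, magnitude_mask, T_SEARCH, T_FULL, the surrogate-gradient window, and all hyperparameters) is an illustrative assumption, not the paper's released code.

```python
# A minimal sketch of an Early-Time (ET) style ticket search, assuming a toy
# two-layer LIF SNN in PyTorch. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient
    in the backward pass (the spike itself is non-differentiable)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()  # gradient window near threshold

class SimpleSNN(nn.Module):
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x, timesteps):
        v = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        out = 0.0
        for _ in range(timesteps):       # binary spikes unroll over T timesteps
            v = 0.5 * v + self.fc1(x)    # leaky integration of input current
            s = SpikeFn.apply(v - 1.0)   # fire when membrane potential crosses 1.0
            v = v * (1.0 - s)            # hard reset after a spike
            out = out + self.fc2(s)      # accumulate the readout over time
        return out / timesteps

def magnitude_mask(model, sparsity):
    """Global one-shot magnitude pruning: zero out the smallest-|w| fraction."""
    all_w = torch.cat([p.detach().abs().flatten()
                       for n, p in model.named_parameters() if "weight" in n])
    threshold = all_w.kthvalue(int(sparsity * all_w.numel())).values
    return {n: (p.detach().abs() > threshold).float()
            for n, p in model.named_parameters() if "weight" in n}

T_SEARCH, T_FULL = 2, 8                  # ET idea: search cheaply, deploy fully
model = SimpleSNN()
x = torch.rand(32, 784)                  # dummy batch standing in for real data
y = torch.randint(0, 10, (32,))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(10):                      # short search phase at T_SEARCH timesteps
    loss = nn.functional.cross_entropy(model(x, T_SEARCH), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

mask = magnitude_mask(model, sparsity=0.9)
with torch.no_grad():                    # apply the winning-ticket mask; the
    for n, p in model.named_parameters():  # ticket is then trained at T_FULL
        if n in mask:
            p.mul_(mask[n])
```

A faithful IMP implementation would rewind the surviving weights to their initialization and repeat the train-prune cycle several times, re-applying the mask after every optimizer step; the sketch compresses this to a single round so the ET idea (search the mask at T_SEARCH, train the ticket at T_FULL) stays visible.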
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs across a wide range of operation counts (OPs), from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Pursing the Sparse Limitation of Spiking Deep Learning Structures [42.334835610250714]
Spiking Neural Networks (SNNs) are garnering increased attention for their superior computation and energy efficiency.
We introduce an innovative algorithm capable of simultaneously identifying both weight and patch-level winning tickets.
We demonstrate that our spiking lottery ticket achieves comparable or superior performance even when the model structure is extremely sparse.
arXiv Detail & Related papers (2023-11-18T17:00:40Z)
- Rethinking Residual Connection in Training Large-Scale Spiking Neural Networks [10.286425749417216]
The Spiking Neural Network (SNN) is among the best-known brain-inspired models.
Its non-differentiable spiking mechanism makes large-scale SNNs hard to train.
arXiv Detail & Related papers (2023-11-09T06:48:29Z)
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z)
- Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient [11.0542573074431]
Spiking Neural Networks (SNNs) are recognized as a candidate for the next generation of neural networks due to their bio-plausibility and energy efficiency.
Recently, researchers have demonstrated that SNNs are able to achieve nearly state-of-the-art performance in image recognition tasks using surrogate gradient training.
arXiv Detail & Related papers (2023-04-25T19:08:29Z)
- SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning [35.206651222618675]
We propose a two-in-one training scheme for efficient deep neural networks (DNNs) and their lottery subnetworks (i.e., lottery tickets).
We develop a progressive and unified SuperTickets identification strategy, achieving better accuracy and efficiency trade-offs than conventional sparse training.
arXiv Detail & Related papers (2022-07-08T03:44:34Z)
- SNN2ANN: A Fast and Memory-Efficient Training Framework for Spiking Neural Networks [117.56823277328803]
Spiking neural networks are efficient computation models for low-power environments.
We propose an SNN-to-ANN (SNN2ANN) framework to train SNNs in a fast and memory-efficient way.
Experimental results show that our SNN2ANN-based models perform well on the benchmark datasets.
arXiv Detail & Related papers (2022-06-19T16:52:56Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
- SiamSNN: Siamese Spiking Neural Networks for Energy-Efficient Object Tracking [20.595208488431766]
SiamSNN is the first deep SNN tracker that achieves short latency and low precision loss on the visual object tracking benchmarks OTB2013, VOT2016, and GOT-10k.
SiamSNN notably achieves low energy consumption and real-time performance on the neuromorphic chip TrueNorth.
arXiv Detail & Related papers (2020-03-17T08:49:51Z)