SEENN: Towards Temporal Spiking Early-Exit Neural Networks
- URL: http://arxiv.org/abs/2304.01230v2
- Date: Sun, 1 Oct 2023 21:35:07 GMT
- Title: SEENN: Towards Temporal Spiking Early-Exit Neural Networks
- Authors: Yuhang Li, Tamar Geller, Youngeun Kim, Priyadarshini Panda
- Abstract summary: Spiking Neural Networks (SNNs) have recently become more popular as a biologically plausible substitute for traditional Artificial Neural Networks (ANNs).
We study a fine-grained adjustment of the number of timesteps in SNNs.
By dynamically adjusting the number of timesteps, our SEENN achieves a remarkable reduction in the average number of timesteps during inference.
- Score: 26.405775809170308
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Spiking Neural Networks (SNNs) have recently become more popular as a
biologically plausible substitute for traditional Artificial Neural Networks
(ANNs). SNNs are cost-efficient and deployment-friendly because they process
input in both the spatial and temporal domains using binary spikes. However, we
observe that the information capacity in SNNs is affected by the number of
timesteps, leading to an accuracy-efficiency tradeoff. In this work, we study a
fine-grained adjustment of the number of timesteps in SNNs. Specifically, we
treat the number of timesteps as a variable conditioned on different input
samples to reduce redundant timesteps for certain data. We call our method
Spiking Early-Exit Neural Networks (SEENNs). To determine the appropriate
number of timesteps, we propose SEENN-I which uses a confidence score
thresholding to filter out the uncertain predictions, and SEENN-II which
determines the number of timesteps by reinforcement learning. Moreover, we
demonstrate that SEENN is compatible with both the directly trained SNN and the
ANN-SNN conversion. By dynamically adjusting the number of timesteps, our SEENN
achieves a remarkable reduction in the average number of timesteps during
inference. For example, our SEENN-II ResNet-19 can achieve 96.1% accuracy with
an average of 1.08 timesteps on the CIFAR-10 test dataset. Code is shared at
https://github.com/Intelligent-Computing-Lab-Yale/SEENN.
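To make the early-exit mechanism concrete, below is a minimal sketch of the SEENN-I idea: accumulate the SNN's per-timestep outputs and stop as soon as the prediction confidence clears a threshold. The `snn_step` interface, the 0.9 threshold, and the 4-timestep budget are illustrative assumptions, not the authors' implementation (see the repository above for that).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def seenn_i_infer(snn_step, x, max_timesteps=4, conf_threshold=0.9):
    """Confidence-thresholded early exit over SNN timesteps (SEENN-I style).

    snn_step(x, t) -> logits produced at timestep t (hypothetical interface).
    Single-sample inference is assumed. Returns (predicted class, timesteps used).
    """
    accumulated = None
    for t in range(1, max_timesteps + 1):
        logits = snn_step(x, t)                       # run one more timestep
        accumulated = logits if accumulated is None else accumulated + logits
        probs = F.softmax(accumulated / t, dim=-1)    # average logits so far
        confidence, prediction = probs.max(dim=-1)
        if confidence.item() >= conf_threshold:       # certain enough: exit early
            return prediction.item(), t
    return prediction.item(), max_timesteps           # fall back to full budget
```

Lowering `conf_threshold` trades accuracy for fewer average timesteps; SEENN-II replaces this fixed threshold with a per-input timestep count chosen by a reinforcement-learning policy.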
Related papers
- Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency on neuromorphic datasets.
We propose a Hybrid Step-wise Distillation (HSD) method tailored for neuromorphic datasets.
arXiv Detail & Related papers (2024-09-19T06:52:34Z)
- Optimal ANN-SNN Conversion with Group Neurons [39.14228133571838]
Spiking Neural Networks (SNNs) have emerged as a promising third generation of neural networks.
The lack of effective learning algorithms remains a challenge for SNNs.
We introduce a novel type of neuron called Group Neurons (GNs).
arXiv Detail & Related papers (2024-02-29T11:41:12Z)
- Bridging the Gap between ANNs and SNNs by Calibrating Offset Spikes [19.85338979292052]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive characteristics of low power consumption and temporal information processing.
ANN-SNN conversion, as the most commonly used training method for applying SNNs, can ensure that converted SNNs achieve comparable performance to ANNs on large-scale datasets.
In this paper, instead of evaluating different conversion errors and then eliminating these errors, we define an offset spike to measure the degree of deviation between actual and desired SNN firing rates.
arXiv Detail & Related papers (2023-02-21T14:10:56Z)
- A noise based novel strategy for faster SNN training [0.0]
Spiking neural networks (SNNs) are receiving increasing attention due to their low power consumption and strong bio-plausibility.
Two main methods, artificial neural network (ANN)-to-SNN conversion and spike-based backpropagation (BP), both have their advantages and limitations.
We propose a novel SNN training approach that combines the benefits of the two methods.
arXiv Detail & Related papers (2022-11-10T09:59:04Z)
- Low Latency Conversion of Artificial Neural Network Models to Rate-encoded Spiking Neural Networks [11.300257721586432]
Spiking neural networks (SNNs) are well suited for resource-constrained applications.
In a typical rate-encoded SNN, a series of binary spikes within a globally fixed time window is used to fire the neurons.
The aim of this paper is to reduce this time window, and hence the inference latency, while maintaining accuracy when converting ANNs to their equivalent SNNs (a generic rate-encoding sketch follows this entry).
arXiv Detail & Related papers (2022-10-27T08:13:20Z)
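The fixed time window mentioned above is easiest to see in standard Poisson-style rate coding, where each analog input drives a binary spike train over T timesteps. The sketch below is a generic illustration of that encoding under the assumption of inputs normalised to [0, 1]; it is not the method of the paper.

```python
import torch

def rate_encode(x, timesteps=8):
    """Generic Poisson-style rate encoding: each input value in [0, 1] becomes
    a binary spike train of length `timesteps` whose firing rate approximates
    the value. Returns a tensor of shape (timesteps, *x.shape)."""
    x = x.clamp(0.0, 1.0)
    # Draw an independent Bernoulli spike for every timestep and every input.
    return (torch.rand(timesteps, *x.shape, device=x.device) < x).float()
```

The longer the window, the finer the rate resolution but the higher the latency, which is exactly the tradeoff this line of work targets.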
- Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
SOTA training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
arXiv Detail & Related papers (2021-12-22T18:47:45Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
- FATNN: Fast and Accurate Ternary Neural Networks [89.07796377047619]
Ternary Neural Networks (TNNs) have received much attention due to being potentially orders of magnitude faster in inference, as well as more power efficient, than full-precision counterparts.
In this work, we show that, under some mild constraints, the computational complexity of the ternary inner product can be reduced by a factor of 2.
We elaborately design an implementation-dependent ternary quantization algorithm to mitigate the performance gap.
arXiv Detail & Related papers (2020-08-12T04:26:18Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems (a generic TTFS-encoding sketch follows this entry).
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
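For context on the addition-only computation mentioned above, here is a generic sketch of time-to-first-spike (TTFS) coding and an accumulate-only layer. The normalisation to [0, 1], the thresholds, and the helper names are illustrative assumptions, not the paper's scheme.

```python
import torch

def ttfs_encode(x, t_max=16):
    """Generic time-to-first-spike coding: larger values (assumed in [0, 1])
    fire earlier. Returns an integer spike time per input."""
    x = x.clamp(0.0, 1.0)
    return torch.round((1.0 - x) * (t_max - 1)).long()

def ttfs_layer(spike_times, weights, t_max=16, threshold=1.0):
    """Accumulate-only layer: at each timestep, add the weight rows of the
    inputs that spike at that time (spikes are binary, so no multiplications),
    and record each output neuron's first threshold crossing."""
    n_out = weights.shape[1]
    potential = torch.zeros(n_out)
    out_times = torch.full((n_out,), t_max)        # t_max means "never fired"
    for t in range(t_max):
        active = spike_times == t                  # inputs spiking now
        if active.any():
            potential += weights[active].sum(dim=0)
        newly_fired = (potential >= threshold) & (out_times == t_max)
        out_times[newly_fired] = t
    return out_times
```

Because every input spikes at most once, the layer performs only additions in spike-time order, which is the source of the energy advantage such TTFS systems build on.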