Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient
In-Memory Computing
- URL: http://arxiv.org/abs/2305.17346v1
- Date: Sat, 27 May 2023 03:01:27 GMT
- Title: Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient
In-Memory Computing
- Authors: Yuhang Li, Abhishek Moitra, Tamar Geller, Priyadarshini Panda
- Abstract summary: Spiking Neural Networks (SNNs) have attracted widespread research interest because of their capability to process sparse and binary spike information.
We show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware.
We propose input-aware Dynamic Timestep SNN (DT-SNN) to maximize the efficiency of SNNs.
- Score: 7.738130109655604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs) have recently attracted widespread research
interest as an efficient alternative to traditional Artificial Neural Networks
(ANNs) because of their capability to process sparse and binary spike
information and avoid expensive multiplication operations. Although the
efficiency of SNNs can be realized on the In-Memory Computing (IMC)
architecture, we show that the energy cost and latency of SNNs scale linearly
with the number of timesteps used on IMC hardware. Therefore, in order to
maximize the efficiency of SNNs, we propose input-aware Dynamic Timestep SNN
(DT-SNN), a novel algorithmic solution to dynamically determine the number of
timesteps during inference on an input-dependent basis. By calculating the
entropy of the accumulated output after each timestep, we can compare it to a
predefined threshold and decide if the information processed at the current
timestep is sufficient for a confident prediction. We deploy DT-SNN on an IMC
architecture and show that it incurs negligible computational overhead. We
demonstrate that our method uses only 1.46 timesteps on average to achieve the
accuracy of a 4-timestep static SNN while reducing the energy-delay product by
80%.
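The early-exit rule described above (accumulate the network output over timesteps, compute the entropy of the resulting prediction, and stop once it falls below a predefined threshold) can be sketched in a few lines. The following is a minimal, assumed PyTorch-style illustration; the `snn_step` interface, the threshold value, and all names are hypothetical and do not come from the authors' released code.

```python
# Minimal sketch of the entropy-based early-exit rule from the DT-SNN abstract.
# The snn_step interface, threshold value, and names are illustrative assumptions.
import torch
import torch.nn.functional as F

def dt_snn_inference(snn_step, x, max_timesteps=4, entropy_threshold=0.05):
    """Run an SNN one timestep at a time on a single input `x` and stop as
    soon as the entropy of the accumulated output drops below a threshold.

    snn_step(x, t) is assumed to return the per-class output for timestep t
    (e.g. spike counts or membrane potential), shaped [num_classes].
    """
    accumulated = None
    for t in range(max_timesteps):
        out_t = snn_step(x, t)                       # output of timestep t
        accumulated = out_t if accumulated is None else accumulated + out_t

        # Low entropy of the softmax over the accumulated output means the
        # prediction is already confident, so further timesteps are skipped.
        probs = F.softmax(accumulated, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
        if entropy.item() < entropy_threshold:
            break

    prediction = accumulated.argmax(dim=-1)
    return prediction, t + 1                         # timesteps actually used
```

Because the exit point is input-dependent, easy inputs can terminate after one or two timesteps while harder ones run the full budget, which is how an average of 1.46 timesteps can match the accuracy of a static 4-timestep SNN.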
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency on neuromorphic datasets.
We propose the Hybrid Step-wise Distillation (HSD) method, tailored for neuromorphic datasets.
arXiv Detail & Related papers (2024-09-19T06:52:34Z)
- Are SNNs Truly Energy-efficient? $-$ A Hardware Perspective [7.539212567508529]
Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities.
This work studies two hardware benchmarking platforms for large-scale SNN inference, namely SATA and SpikeSim.
arXiv Detail & Related papers (2023-09-06T22:23:22Z)
- SEENN: Towards Temporal Spiking Early-Exit Neural Networks [26.405775809170308]
Spiking Neural Networks (SNNs) have recently become more popular as a biologically plausible substitute for traditional Artificial Neural Networks (ANNs).
We study a fine-grained adjustment of the number of timesteps in SNNs.
By dynamically adjusting the number of timesteps, our SEENN achieves a remarkable reduction in the average number of timesteps during inference.
arXiv Detail & Related papers (2023-04-02T15:57:09Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- SpikeSim: An end-to-end Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks [4.0300632886917]
SpikeSim is a tool that can perform realistic performance, energy, latency and area evaluation of IMC-mapped SNNs.
We propose SNN topological modifications that yield a 1.24x reduction in the neuronal module's area and a 10x reduction in the overall energy-delay product.
arXiv Detail & Related papers (2022-10-24T01:07:17Z)
- Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars [4.184276171116354]
Spiking Neural Networks (SNNs) have emerged as a low-power alternative to Artificial Neural Networks (ANNs).
We study the effect of crossbar non-idealities and intrinsic stochasticity on the performance of SNNs.
arXiv Detail & Related papers (2022-06-20T07:07:41Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
SOTA training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
arXiv Detail & Related papers (2021-12-22T18:47:45Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS)-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.