SATA: Sparsity-Aware Training Accelerator for Spiking Neural Networks
- URL: http://arxiv.org/abs/2204.05422v3
- Date: Mon, 19 Dec 2022 18:07:01 GMT
- Title: SATA: Sparsity-Aware Training Accelerator for Spiking Neural Networks
- Authors: Ruokai Yin, Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim,
Priyadarshini Panda
- Abstract summary: Spiking Neural Networks (SNNs) have gained huge attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs).
We introduce SATA (Sparsity-Aware Training Accelerator), a BPTT-based training accelerator for SNNs.
By exploiting sparsity, SATA increases its computation energy efficiency by $5.58\times$ compared to the same design without sparsity support.
- Score: 4.44525458129903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) have gained huge attention as a potential
energy-efficient alternative to conventional Artificial Neural Networks (ANNs)
due to their inherent high-sparsity activation. Recently, SNNs with
backpropagation through time (BPTT) have achieved higher accuracy on
image recognition tasks than other SNN training algorithms. Despite this success
from the algorithm perspective, prior works neglect the evaluation of the
hardware energy overheads of BPTT due to the lack of a hardware evaluation
platform for this SNN training algorithm. Moreover, although SNNs have long
been seen as an energy-efficient counterpart of ANNs, a quantitative comparison
between the training cost of SNNs and ANNs is missing. To address the
aforementioned issues, in this work, we introduce SATA (Sparsity-Aware Training
Accelerator), a BPTT-based training accelerator for SNNs. The proposed SATA
provides a simple and re-configurable systolic-based accelerator architecture,
which makes it easy to analyze the training energy for BPTT-based SNN training
algorithms. By exploiting sparsity, SATA increases its computation energy
efficiency by $5.58\times$ compared to the same design without sparsity support. Based
on SATA, we show quantitative analyses of the energy efficiency of SNN training
and compare the training cost of SNNs and ANNs. The results show that, on an
Eyeriss-like systolic architecture, SNNs consume $1.27\times$ more total training
energy than ANNs even when sparsity is exploited. We find that this high training
energy cost stems from time-repetitive convolution operations and data movements
during backpropagation. Moreover, to guide future SNN training algorithm
design, we provide several observations on energy efficiency for different
SNN-specific training parameters and propose an energy estimation framework for
SNN training. Code for our framework is made publicly available.
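To make the sparsity argument above concrete, here is a minimal Python sketch of how spike sparsity can discount the compute-energy estimate of a single convolution layer trained over several time steps. It uses placeholder LIF dynamics, layer shape, and per-MAC energy, and it is not the authors' released SATA code; it covers only the forward compute term, whereas SATA also models data movement and the backward pass.

# Illustrative sketch (assumptions, not the released SATA framework): estimate how
# spike sparsity scales the compute energy of one SNN convolution layer over T
# BPTT time steps. LIF parameters, layer shape, and the per-MAC energy value are
# placeholders chosen only to keep the example self-contained.
import numpy as np

rng = np.random.default_rng(0)

T = 4                                        # assumed number of time steps
E_MAC_PJ = 4.6                               # assumed energy per MAC, in picojoules
C_IN, C_OUT, K, H, W = 64, 128, 3, 32, 32    # assumed conv layer dimensions

def lif_spikes(currents, threshold=1.0, leak=0.9):
    """Run a leaky integrate-and-fire neuron over T steps; return binary spike maps."""
    mem = np.zeros_like(currents[0])
    spikes = []
    for x_t in currents:                     # the same layer is revisited every time step,
        mem = leak * mem + x_t               # which is why SNN training repeats work T times
        s_t = (mem >= threshold).astype(np.float32)
        mem = mem * (1.0 - s_t)              # hard reset after firing
        spikes.append(s_t)
    return np.stack(spikes)

# Placeholder input currents for T time steps.
inputs = rng.normal(0.3, 0.5, size=(T, C_IN, H, W)).astype(np.float32)
spk = lif_spikes(inputs)
firing_rate = spk.mean()                     # fraction of nonzero (spiking) activations

# Dense cost: every input position contributes a MAC at every time step.
macs_dense = T * C_OUT * C_IN * K * K * H * W
# Sparsity-aware cost: operations gated by zero input spikes are skipped.
macs_sparse = macs_dense * firing_rate

print(f"spike sparsity           : {1.0 - firing_rate:.2%}")
print(f"dense compute energy     : {macs_dense * E_MAC_PJ / 1e6:.1f} uJ")
print(f"sparsity-aware energy    : {macs_sparse * E_MAC_PJ / 1e6:.1f} uJ")
print(f"compute energy reduction : {macs_dense / max(macs_sparse, 1.0):.2f}x")

Lower firing rates shrink the sparsity-aware term directly, which is the kind of gap the abstract's compute-efficiency figure describes; a full estimate would also add the memory-access and backward-pass costs that dominate SNN training energy.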
Related papers
- Training-free Conversion of Pretrained ANNs to SNNs for Low-Power and High-Performance Applications [23.502136316777058]
Spiking Neural Networks (SNNs) have emerged as a promising substitute for Artificial Neural Networks (ANNs).
Existing supervised learning algorithms for SNNs require significantly more memory and time than their ANN counterparts.
Our approach directly converts pre-trained ANN models into high-performance SNNs without additional training.
arXiv Detail & Related papers (2024-09-05T09:14:44Z)
- When Bio-Inspired Computing meets Deep Learning: Low-Latency, Accurate, & Energy-Efficient Spiking Neural Networks from Artificial Neural Networks [22.721987637571306]
Spiking Neural Networks (SNNs) are demonstrating accuracy comparable to convolutional neural networks (CNNs).
ANN-to-SNN conversion has recently gained significant traction in developing deep SNNs with close to state-of-the-art (SOTA) test accuracy on complex image recognition tasks.
We propose a novel ANN-to-SNN conversion framework that requires exponentially fewer time steps than SOTA conversion approaches.
arXiv Detail & Related papers (2023-12-12T00:10:45Z)
- Is Conventional SNN Really Efficient? A Perspective from Network Quantization [7.04833025737147]
Spiking Neural Networks (SNNs) have been widely praised for their high energy efficiency and immense potential.
However, comprehensive research that critically contrasts and correlates SNNs with quantized Artificial Neural Networks (ANNs) remains scant.
This paper introduces a unified perspective, illustrating that the time steps in SNNs and quantized bit-widths of activation values present analogous representations.
arXiv Detail & Related papers (2023-11-17T09:48:22Z)
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z)
- Are SNNs Truly Energy-efficient? $-$ A Hardware Perspective [7.539212567508529]
Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities.
This work studies two hardware benchmarking platforms for large-scale SNN inference, namely SATA and SpikeSim.
arXiv Detail & Related papers (2023-09-06T22:23:22Z)
- LaSNN: Layer-wise ANN-to-SNN Distillation for Effective and Efficient Training in Deep Spiking Neural Networks [7.0691139514420005]
Spiking Neural Networks (SNNs) are biologically realistic and practically promising for low-power applications because of their event-driven mechanism.
A conversion scheme is proposed to obtain competitive accuracy by mapping trained ANNs' parameters to SNNs with the same structures.
A novel SNN training framework is proposed, namely layer-wise ANN-to-SNN knowledge distillation (LaSNN).
arXiv Detail & Related papers (2023-04-17T03:49:35Z)
- Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
- SNN2ANN: A Fast and Memory-Efficient Training Framework for Spiking Neural Networks [117.56823277328803]
Spiking neural networks are efficient computation models for low-power environments.
We propose a SNN-to-ANN (SNN2ANN) framework to train the SNN in a fast and memory-efficient way.
Experiment results show that our SNN2ANN-based models perform well on the benchmark datasets.
arXiv Detail & Related papers (2022-06-19T16:52:56Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.