A Time-to-first-spike Coding and Conversion Aware Training for
Energy-Efficient Deep Spiking Neural Network Processor Design
- URL: http://arxiv.org/abs/2208.04494v1
- Date: Tue, 9 Aug 2022 01:46:46 GMT
- Title: A Time-to-first-spike Coding and Conversion Aware Training for
Energy-Efficient Deep Spiking Neural Network Processor Design
- Authors: Dongwoo Lew, Kyungchul Lee, and Jongsun Park
- Abstract summary: We propose a conversion aware training (CAT) to reduce ANN-to-SNN conversion loss without hardware implementation overhead.
We also present a time-to-first-spike coding that allows lightweight logarithmic computation by utilizing spike time information.
- The processor achieves top-1 accuracies of 91.7%, 67.9%, and 57.4% with inference energies of 486.7uJ, 503.6uJ, and 1426uJ on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively.
- Score: 2.850312625505125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present an energy-efficient SNN architecture, which can
seamlessly run deep spiking neural networks (SNNs) with improved accuracy.
First, we propose a conversion aware training (CAT) to reduce ANN-to-SNN
conversion loss without hardware implementation overhead. In the proposed CAT,
the activation function developed for simulating the SNN during ANN training is
efficiently exploited to reduce the data representation error after conversion.
Based on the CAT technique, we also present a time-to-first-spike coding that
allows lightweight logarithmic computation by utilizing spike time information.
The SNN processor design that supports the proposed techniques has been
implemented using a 28nm CMOS process. The processor achieves top-1
accuracies of 91.7%, 67.9% and 57.4% with inference energy of 486.7uJ, 503.6uJ,
and 1426uJ to process CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively,
when running VGG-16 with 5bit logarithmic weights.
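The abstract leaves the CAT activation unspecified, but the underlying idea (train the ANN through an activation that already exhibits the converted SNN's representation error) can be illustrated. The following is a minimal PyTorch sketch under our own assumptions: activations are clipped to (0, 1] and snapped to the power-of-two levels a logarithmic TTFS code can represent. `CATActivation` and `t_max` are illustrative names, not the paper's.

```python
import torch

class CATActivation(torch.nn.Module):
    """Hypothetical conversion-aware activation (illustrative, not the
    paper's exact formulation). Clips activations to the SNN-representable
    range and snaps them to power-of-two levels, so the ANN is trained
    around the representation error instead of meeting it only after
    conversion."""

    def __init__(self, t_max: int = 8):
        super().__init__()
        self.t_max = t_max  # number of available spike-time slots

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.clamp(x, 0.0, 1.0)                    # representable range
        # spike time t = ceil(-log2(x)), i.e. round x down to 2**-t
        safe = x.clamp_min(2.0 ** -self.t_max)
        t = torch.clamp(torch.ceil(-torch.log2(safe)), 0.0, float(self.t_max))
        q = torch.where(x > 0, 2.0 ** -t, torch.zeros_like(x))
        # straight-through estimator: forward uses q, gradient acts as identity
        return x + (q - x).detach()
```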
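Similarly, a toy sketch of why TTFS coding enables lightweight logarithmic computation: an activation a in (0, 1] is encoded as a first-spike time t with a ≈ 2^-t, so each synaptic contribution w·a reduces to shifting the weight by the spike time instead of a full multiply. This is only a software illustration of the arithmetic, not the paper's hardware pipeline.

```python
import numpy as np

def ttfs_encode(a, t_max=8):
    """Encode activations in (0, 1] as first-spike times t with a ~= 2**-t.
    Values too small to represent get t_max, treated as 'no spike'."""
    t = np.full(a.shape, t_max, dtype=np.int64)
    pos = a > 0
    t[pos] = np.clip(np.ceil(-np.log2(a[pos])), 0, t_max).astype(np.int64)
    return t

def ttfs_dot(weights, spike_times, t_max=8):
    """Accumulate w * 2**-t using shifts in place of multiplications."""
    acc = 0.0
    for w, t in zip(weights, spike_times):
        if t < t_max:                     # a spike arrived at time t
            acc += w / (1 << int(t))      # w * 2**-t: a shift, not a multiply
    return acc

w = np.array([0.5, -0.25, 0.125])
a = np.array([1.0, 0.5, 0.25])            # exactly representable levels
print(ttfs_dot(w, ttfs_encode(a)))        # 0.5 - 0.125 + 0.03125 = 0.40625
```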
Related papers
- AT-SNN: Adaptive Tokens for Vision Transformer on Spiking Neural Network [4.525951256256855]
AT-SNN is designed to dynamically adjust the number of tokens processed during inference in SNN-based ViTs with direct training.
We show the effectiveness of AT-SNN in achieving high energy efficiency and accuracy compared to state-of-the-art approaches on the image classification tasks.
arXiv Detail & Related papers (2024-08-22T11:06:18Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Optimized Potential Initialization for Low-latency Spiking Neural Networks [21.688402090967497]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive properties of low power consumption, biological plausibility, and adversarial robustness.
The most effective way to train deep SNNs is through ANN-to-SNN conversion, which has yielded the best performance on deep network structures and large-scale datasets.
In this paper, we aim to achieve high-performance converted SNNs with extremely low latency (fewer than 32 time-steps).
arXiv Detail & Related papers (2022-02-03T07:15:43Z)
- Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
SOTA training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
arXiv Detail & Related papers (2021-12-22T18:47:45Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks (a toy sketch of this decomposition appears after this list).
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training [68.63354877166756]
ActNN is a memory-efficient training framework that stores randomly quantized activations for backpropagation (a conceptual sketch of such 2-bit stochastic rounding appears after this list).
ActNN reduces the memory footprint of the activation by 12x, and it enables training with a 6.6x to 14x larger batch size.
arXiv Detail & Related papers (2021-04-29T05:50:54Z)
- A Little Energy Goes a Long Way: Energy-Efficient, Accurate Conversion from Convolutional Neural Networks to Spiking Neural Networks [22.60412330785997]
Spiking neural networks (SNNs) offer an inherent ability to process spatial-temporal data, or in other words, real-world sensory data.
A major thread of research on SNNs is on converting a pre-trained convolutional neural network (CNN) to an SNN of the same structure.
We propose a novel CNN-to-SNN conversion method that is able to use a reasonably short spike train to achieve near-zero accuracy loss.
arXiv Detail & Related papers (2021-03-01T12:15:29Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
- Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation [10.972663738092063]
Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes).
We present a computationally-efficient training technique for deep SNNs.
We achieve top-1 accuracy of 65.19% for ImageNet dataset on SNN with 250 time steps, which is 10X faster compared to converted SNNs with similar accuracy.
arXiv Detail & Related papers (2020-05-04T19:30:43Z)
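Below is the toy sketch promised in the Quantized Neural Networks via {-1, +1} Encoding Decomposition entry above. The standard form of such a decomposition writes every odd quantization level in [-(2^M - 1), 2^M - 1] as sum_i 2^i * b_i with b_i in {-1, +1}, so one quantized matrix multiply splits into M binary-weight multiplies. Whether this matches the paper's exact scheme cannot be confirmed from the abstract alone; all names here are ours.

```python
import numpy as np

def decompose_pm1(w_int, bits):
    """Split odd integers in [-(2**bits - 1), 2**bits - 1] into
    sum_i 2**i * B_i with every B_i in {-1, +1}."""
    u = (w_int + (2 ** bits - 1)) // 2          # shift to an unsigned code
    branches = []
    for i in range(bits):
        c_i = (u >> i) & 1                      # i-th bit of the code
        branches.append(2 * c_i - 1)            # map {0, 1} -> {-1, +1}
    return branches

# toy 2-bit example: quantize weights to the odd levels {-3, -1, 1, 3} / 3
rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=(4, 4))
levels = 2 ** 2 - 1                             # 3
w_int = 2 * np.round((w * levels - 1) / 2).astype(np.int64) + 1  # nearest odd
w_int = np.clip(w_int, -levels, levels)

branches = decompose_pm1(w_int, bits=2)
recon = sum((1 << i) * b for i, b in enumerate(branches))
assert np.array_equal(recon, w_int)             # exact reconstruction

x = rng.uniform(0, 1, size=4)
y = sum((1 << i) * (b @ x) for i, b in enumerate(branches)) / levels
assert np.allclose(y, (w_int / levels) @ x)     # matmul via binary branches
```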
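And the sketch promised in the ActNN entry: activations saved for the backward pass are stochastically rounded to 2-bit codes, which keeps the dequantized values unbiased in expectation. The real ActNN uses per-group scaling and fused GPU kernels; this numpy version only demonstrates the rounding principle, and the function names are ours.

```python
import numpy as np

def quantize_2bit(x, rng):
    """Stochastically round activations to 2-bit codes (conceptual sketch,
    not ActNN's implementation). Per-tensor min/max scaling; the rounding
    is unbiased in expectation."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / 3.0 if hi > lo else 1.0   # 4 levels -> 3 steps
    v = (x - lo) / scale                           # map into [0, 3]
    q = np.floor(v + rng.uniform(size=x.shape))    # stochastic rounding
    return q.astype(np.uint8), lo, scale

def dequantize_2bit(q, lo, scale):
    return q.astype(np.float64) * scale + lo

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
q, lo, scale = quantize_2bit(x, rng)
x_hat = dequantize_2bit(q, lo, scale)
print(abs((x_hat - x).mean()))  # mean error near zero: unbiased estimator
```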