H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking
Neural Networks
- URL: http://arxiv.org/abs/2107.11746v1
- Date: Sun, 25 Jul 2021 07:37:17 GMT
- Title: H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking
Neural Networks
- Authors: Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng,
Guoqi Li, Peng Li, Yuan Xie
- Abstract summary: We propose H2Learn, a novel architecture that can achieve high efficiency for BPTT-based SNN learning.
Compared with the modern NVIDIA V100 GPU, H2Learn achieves 7.38x area saving, 5.74-10.20x speedup, and 5.25-7.12x energy saving on several benchmark datasets.
- Score: 25.768116231283045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although spiking neural networks (SNNs) benefit from bio-plausible
neural modeling, their low accuracy under common local synaptic plasticity
learning rules limits their application in many practical tasks. Recently, an
emerging SNN supervised learning algorithm inspired by backpropagation through
time (BPTT) from the domain of artificial neural networks (ANNs) has
successfully boosted the accuracy of SNNs and helped improve their
practicability. However, current general-purpose processors suffer from low
efficiency when performing BPTT for SNNs because their optimizations are
tailored to ANNs. On the other hand, current neuromorphic chips cannot support
BPTT because they mainly adopt local synaptic plasticity rules for simplicity
of implementation.
In this work, we propose H2Learn, a novel architecture that achieves high
efficiency for BPTT-based SNN learning while preserving the high accuracy of
SNNs. We begin by characterizing the behaviors of BPTT-based SNN learning.
First, benefiting from the binary spike-based computation in the forward pass
and the weight update, we design lookup table (LUT) based processing elements
in the Forward Engine and the Weight Update Engine to make accumulations
implicit and to fuse the computations of multiple input points. Second,
benefiting from the rich sparsity in the backward pass, we design a
dual-sparsity-aware Backward Engine that exploits both input and output
sparsity. Finally, we apply pipeline optimization across the engines to build
an end-to-end solution for BPTT-based SNN learning. Compared with the modern
NVIDIA V100 GPU, H2Learn achieves 7.38x area saving, 5.74-10.20x speedup, and
5.25-7.12x energy saving on several benchmark datasets.
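
For context, the BPTT-style SNN training referred to above typically unrolls the spiking neuron dynamics over discrete time steps and replaces the non-differentiable spike function with a surrogate gradient, which is what makes the workload heavy on general-purpose processors. The PyTorch sketch below illustrates only that general pattern; the LIF constants, the rectangular surrogate window, and the hard reset are illustrative assumptions, not the specific algorithm or hyperparameters evaluated in the paper.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike forward; rectangular surrogate gradient backward (an assumed, common choice)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                      # binary spike where the membrane exceeds threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()   # pseudo-derivative of the step function

def lif_unroll(x_seq, w, v_th=1.0, decay=0.5):
    """Unroll a leaky integrate-and-fire layer over T steps; autograd then performs BPTT through it."""
    T, batch, _ = x_seq.shape
    v = torch.zeros(batch, w.shape[1])
    out = []
    for t in range(T):
        v = decay * v + x_seq[t] @ w                # leaky integration of the input current
        s = SpikeFn.apply(v - v_th)                 # fire where the membrane crosses the threshold
        v = v * (1.0 - s)                           # hard reset after a spike
        out.append(s)
    return torch.stack(out)                         # (T, batch, n_out) binary spike train

# Gradients flow back through every time step, which is the costly part on general-purpose hardware.
w = torch.randn(10, 4, requires_grad=True)
x = (torch.rand(8, 2, 10) < 0.3).float()            # T=8, batch=2, sparse binary input spikes
lif_unroll(x, w).sum().backward()                   # backpropagation through time over the unrolled steps
```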
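
The Forward Engine exploits the fact that, with binary spike inputs, a weighted sum degenerates into a selective sum of weights, so partial sums over small groups of inputs can be precomputed into a lookup table and the spike pattern simply indexes it, making accumulations implicit and fusing several inputs per lookup. The NumPy sketch below illustrates this principle only; the group size of 4, the function names, and the flat weight layout are assumptions for illustration and do not describe H2Learn's actual hardware organization.

```python
import numpy as np

GROUP = 4  # hypothetical number of spike inputs fused per LUT lookup

def build_lut(weights):
    """Precompute partial sums for every 2**GROUP spike pattern of each weight group.

    weights: (n_in,) synaptic weights feeding one output neuron; n_in is assumed
    to be a multiple of GROUP in this simplified sketch.
    """
    groups = weights.reshape(-1, GROUP)                        # (n_groups, GROUP)
    patterns = np.array([[(p >> b) & 1 for b in range(GROUP)]
                         for p in range(2 ** GROUP)])          # (16, GROUP) bit patterns, LSB first
    return groups @ patterns.T                                 # lut[g, p] = sum of weights selected by pattern p

def lut_forward(lut, spikes):
    """Compute the weighted sum of binary spikes using table lookups only (no multiplies)."""
    spike_groups = spikes.reshape(-1, GROUP).astype(np.int64)  # (n_groups, GROUP)
    idx = spike_groups @ (1 << np.arange(GROUP))               # encode each spike group as a LUT index
    return lut[np.arange(len(idx)), idx].sum()                 # accumulate one partial sum per group

# The lookup result matches the ordinary dot product on 0/1 inputs.
rng = np.random.default_rng(0)
w = rng.normal(size=16).astype(np.float32)
s = (rng.random(16) < 0.2).astype(np.float32)                  # sparse binary spike vector
assert np.isclose(lut_forward(build_lut(w), s), w @ s)
```

According to the abstract, the same binary-spike property is reused in the Weight Update Engine, whereas the backward pass instead relies on a dual-sparsity-aware engine that skips work for zero inputs and zero outputs.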
Related papers
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Highly Efficient SNNs for High-speed Object Detection [7.3074002563489024]
Experimental results show that our efficient SNN achieves a 118x speedup on GPU with only 1.5 MB of parameters for object detection tasks.
We further verify our SNN on an FPGA platform, where the proposed model achieves 800+ FPS object detection with extremely low latency.
arXiv Detail & Related papers (2023-09-27T10:31:12Z)
- High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
It is harder to train biologically inspired spiking neural networks (SNNs) than artificial neural networks (ANNs).
We show that training deep SNN models can achieve exactly the same performance as ANNs.
Our SNN accomplishes high-performance classification with fewer than 0.3 spikes per neuron, lending itself to an energy-efficient implementation.
arXiv Detail & Related papers (2023-06-14T21:01:35Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
BNNs neglect the intrinsic bilinear relationship of real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- SpinAPS: A High-Performance Spintronic Accelerator for Probabilistic Spiking Neural Networks [31.3159725020842]
"SpinAPS" for Spintronic Accelerator for Probabilistic SNNs implements a principled direct learning rule for first-to-spike decoding.
The proposed solution is shown to achieve comparable performance with an equivalent ANN on handwritten digit and human activity recognition benchmarks.
arXiv Detail & Related papers (2020-08-05T15:37:47Z)
- FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
arXiv Detail & Related papers (2020-07-17T09:40:26Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding [26.654533157221973]
This paper introduces the concept of time-to-first-spike coding into deep SNNs and uses a kernel-based dynamic threshold and dendrite to overcome its drawback.
According to our results, the proposed methods reduce inference latency and the number of spikes to 22% and less than 1%, respectively, of those of burst coding.
arXiv Detail & Related papers (2020-03-26T04:39:12Z)