SpinAPS: A High-Performance Spintronic Accelerator for Probabilistic
Spiking Neural Networks
- URL: http://arxiv.org/abs/2008.02189v1
- Date: Wed, 5 Aug 2020 15:37:47 GMT
- Title: SpinAPS: A High-Performance Spintronic Accelerator for Probabilistic
Spiking Neural Networks
- Authors: Anakha V Babu, Osvaldo Simeone, Bipin Rajendran
- Abstract summary: "SpinAPS" for Spintronic Accelerator for Probabilistic SNNs implements a principled direct learning rule for first-to-spike decoding.
The proposed solution is shown to achieve performance comparable to an equivalent ANN on handwritten digit and human activity recognition benchmarks.
- Score: 31.3159725020842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We discuss a high-performance and high-throughput hardware accelerator for
probabilistic Spiking Neural Networks (SNNs) based on Generalized Linear Model
(GLM) neurons, which uses binary STT-RAM devices as synapses and digital CMOS
logic for neurons. The inference accelerator, termed "SpinAPS" for Spintronic
Accelerator for Probabilistic SNNs, implements a principled direct learning
rule for first-to-spike decoding without the need for conversion from
pre-trained ANNs. The proposed solution is shown to achieve performance
comparable to an equivalent ANN on handwritten digit and human activity
recognition benchmarks. The inference engine, SpinAPS, is shown through
software emulation tools to achieve a 4x performance improvement in terms of
GSOPS/W/mm2 when compared to an equivalent SRAM-based design. The architecture
leverages probabilistic spiking neural networks that employ a first-to-spike
decoding rule to make inference decisions at low latencies, achieving 75% of
the test performance in as few as 4 algorithmic time steps on the handwritten
digit benchmark. The accelerator also exhibits competitive performance with
other memristor-based DNN/SNN accelerators and state-of-the-art GPUs.
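To make the decoding scheme concrete, below is a minimal Python sketch of first-to-spike inference with probabilistic GLM output neurons. It is an illustration under simplifying assumptions, not the paper's implementation: the GLM temporal filters over past spikes are collapsed into an instantaneous sigmoid of the membrane potential, and all names, shapes, and constants are hypothetical.
```python
# Hedged sketch of first-to-spike decoding with probabilistic (GLM-style)
# output neurons; shapes and constants are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def first_to_spike_infer(x_spikes, W, b, T=4):
    """Classify by the first output neuron to emit a spike.

    x_spikes : (T, n_in) binary input spike raster
    W        : (n_out, n_in) synaptic weight matrix
    b        : (n_out,) biases
    """
    for t in range(T):
        # Simplified GLM neuron: spike probability is a sigmoid of the
        # instantaneous membrane potential (temporal filters omitted).
        p_spike = sigmoid(W @ x_spikes[t] + b)
        spikes = rng.random(p_spike.shape) < p_spike  # Bernoulli sampling
        if spikes.any():
            # Decide as soon as any output neuron fires -> low latency
            return int(np.argmax(np.where(spikes, p_spike, -np.inf))), t + 1
    return int(np.argmax(p_spike)), T  # no spike: fall back to most probable

# Toy usage: 10 classes, 784 inputs (e.g., rate-encoded 28x28 pixels)
T, n_in, n_out = 4, 784, 10
x = (rng.random((T, n_in)) < 0.1).astype(np.float64)
W = 0.05 * rng.standard_normal((n_out, n_in))
label, steps = first_to_spike_infer(x, W, np.zeros(n_out), T)
print(f"predicted class {label} after {steps} time step(s)")
```
In SpinAPS, the weight matrix would be held in binary STT-RAM and the sigmoid and Bernoulli sampling realized in digital CMOS; stopping at the first output spike is what enables the low-latency decisions (e.g., 75% of test accuracy within 4 time steps) reported in the abstract.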
Related papers
- Hardware-Software Co-optimised Fast and Accurate Deep Reconfigurable Spiking Inference Accelerator Architecture Design Methodology [2.968768532937366]
Spiking Neural Networks (SNNs) have emerged as a promising approach to improve the energy efficiency of machine learning models.
We develop a hardware-software co-optimisation strategy to port software-trained deep neural networks (DNNs) to reduced-precision spiking models.
arXiv Detail & Related papers (2024-10-07T05:04:13Z)
- Spiker+: a framework for the generation of efficient Spiking Neural Networks FPGA accelerators for inference at the edge [49.42371633618761]
Spiker+ is a framework for generating efficient, low-power, and low-area customized Spiking Neural Networks (SNN) accelerators on FPGA for inference at the edge.
Spiker+ is tested on two benchmark datasets, MNIST and the Spiking Heidelberg Digits (SHD).
arXiv Detail & Related papers (2024-01-02T10:42:42Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Automotive Object Detection via Learning Sparse Events by Spiking Neurons [20.930277906912394]
Spiking Neural Networks (SNNs) provide a temporal representation that is inherently aligned with event-based data.
We present a specialized spiking feature pyramid network (SpikeFPN) optimized for automotive event-based object detection.
arXiv Detail & Related papers (2023-07-24T15:47:21Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge [87.41163540910854]
Deep neural network (DNN) latency characterization is a time-consuming process.
We propose MAPLE-X which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency.
arXiv Detail & Related papers (2022-05-25T11:08:20Z)
- DNN Training Acceleration via Exploring GPGPU Friendly Sparsity [16.406482603838157]
We propose Approximate Random Dropout, which replaces the conventional random dropout of neurons and synapses with regular, online-generated row-based or tile-based dropout patterns.
We then develop an SGD-based Search Algorithm that produces the distribution of row-based or tile-based dropout patterns to compensate for the potential accuracy loss.
We also propose a sensitivity-aware dropout method that dynamically drops input feature maps based on their sensitivity, achieving greater forward and backward training acceleration.
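As an illustration of the pattern-based idea, here is a minimal PyTorch sketch of a tile-based dropout mask. It is a hypothetical reconstruction from the abstract, not the authors' code; the tile size, keep probability, and function name are arbitrary.
```python
# Hypothetical sketch of tile-based dropout (reconstructed from the
# abstract, not the authors' implementation).
import torch

def tile_dropout_mask(shape, tile=4, p=0.5):
    """Drop contiguous tiles of columns instead of individual elements.

    Keeping or dropping whole tiles yields a regular sparsity pattern
    that dense GPU kernels can exploit, unlike element-wise dropout.
    """
    rows, cols = shape
    n_tiles = (cols + tile - 1) // tile
    keep = (torch.rand(rows, n_tiles) >= p).float()      # one draw per tile
    mask = keep.repeat_interleave(tile, dim=1)[:, :cols]  # expand to columns
    return mask / (1.0 - p)                               # inverted dropout

x = torch.randn(8, 16)
y = x * tile_dropout_mask(x.shape, tile=4, p=0.5)
```
A row-based variant would simply draw one Bernoulli sample per row; the SGD-based search described above would then tune the distribution over such patterns to limit accuracy loss.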
arXiv Detail & Related papers (2022-03-11T01:32:03Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces classification time by three orders of magnitude compared to its full-precision software counterpart, with a small 4.5% impact on accuracy.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks [25.768116231283045]
We propose H2Learn, a novel architecture that can achieve high efficiency for BPTT-based SNN learning.
Compared with the modern NVIDIA V100 GPU, H2Learn achieves 7.38x area saving, 5.74-10.20x speedup, and 5.25-7.12x energy saving on several benchmark datasets.
arXiv Detail & Related papers (2021-07-25T07:37:17Z)
- Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch [75.69506249886622]
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate models in resource-constrained environments.
In this paper, we are the first to study training from scratch an N:M fine-grained structured sparse network.
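For concreteness, a minimal PyTorch sketch of projecting a dense weight matrix onto the common 2:4 pattern follows. The helper name is illustrative, and this magnitude-based projection is a standard baseline, not necessarily the from-scratch training method of the paper.
```python
# Illustrative magnitude-based projection onto an N:M sparse pattern
# (standard baseline; not necessarily the paper's from-scratch method).
import torch

def prune_n_m(weight, n=2, m=4):
    """Keep the n largest-magnitude weights in every group of m.

    Assumes the input dimension is divisible by m; every contiguous
    group of m weights then retains exactly n nonzeros.
    """
    out_f, in_f = weight.shape
    groups = weight.reshape(out_f, in_f // m, m)
    # Zero out the (m - n) smallest-magnitude entries in each group
    _, drop = groups.abs().topk(m - n, dim=-1, largest=False)
    mask = torch.ones_like(groups).scatter_(-1, drop, 0.0)
    return (groups * mask).reshape(out_f, in_f)

w_sparse = prune_n_m(torch.randn(64, 128), n=2, m=4)
assert (w_sparse.reshape(64, -1, 4) != 0).sum(-1).max() <= 2
```
The 2:4 case is the pattern that recent NVIDIA sparse tensor cores accelerate, which is what makes N:M structure attractive compared to unstructured sparsity.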
arXiv Detail & Related papers (2021-02-08T05:55:47Z)
- Compiling Spiking Neural Networks to Neuromorphic Hardware [4.273223677453178]
Spiking Neural Networks (SNNs) can lower the energy consumption of machine learning applications executed on neuromorphic hardware.
We propose an approach to analyze and compile SNNs on resource-constrained neuromorphic hardware.
arXiv Detail & Related papers (2020-04-07T21:13:27Z)