A Resource-efficient Spiking Neural Network Accelerator Supporting
Emerging Neural Encoding
- URL: http://arxiv.org/abs/2206.02495v1
- Date: Mon, 6 Jun 2022 10:56:25 GMT
- Title: A Resource-efficient Spiking Neural Network Accelerator Supporting
Emerging Neural Encoding
- Authors: Daniel Gerlinghoff, Zhehui Wang, Xiaozhe Gu, Rick Siow Mong Goh, Tao
Luo
- Abstract summary: Spiking neural networks (SNNs) recently gained momentum due to their low-power, multiplication-free computing.
SNNs require very long spike trains (up to 1,000 time steps) to reach an accuracy similar to that of their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNNs with emerging neural encoding.
- Score: 6.047137174639418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs) recently gained momentum due to their
low-power, multiplication-free computing and their closer resemblance to the
biological processes in the human nervous system. However, SNNs require very
long spike trains (up to 1,000 time steps) to reach an accuracy similar to that
of their artificial neural network (ANN) counterparts for large models, which
offsets their efficiency and inhibits their application to low-power systems in
real-world use cases. To alleviate this problem, emerging neural encoding
schemes have been proposed to shorten the spike train while maintaining high
accuracy. However, current SNN accelerators cannot efficiently support these
emerging encoding schemes. In this work, we present a novel hardware
architecture that efficiently supports SNNs with emerging neural encoding. Our
implementation features energy- and area-efficient processing units with
increased parallelism and reduced memory accesses. We verified the accelerator
on an FPGA and achieved 25% and 90% improvements over previous work in power
consumption and latency, respectively. At the same time, the high area
efficiency allows us to scale to large neural network models. To the best of
our knowledge, this is the first work to deploy the large neural network model
VGG on physical FPGA-based neuromorphic hardware.
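The multiplication-free computing the abstract refers to can be illustrated with a minimal integrate-and-fire layer update (a simplified sketch for intuition only, not the paper's accelerator architecture; the soft-reset and threshold choices here are assumptions): because input spikes are binary, each active synapse contributes its weight by pure addition.

```python
import numpy as np

def if_layer_step(v, weights, in_spikes, threshold=1.0):
    """One time step of an integrate-and-fire layer.

    Because input spikes are binary, the membrane update needs no
    multiplications: each active input simply adds its weight row.
    v: membrane potentials, shape (n_out,)
    weights: shape (n_in, n_out)
    in_spikes: boolean array, shape (n_in,)
    """
    # Accumulate weight rows of active inputs only (addition, no multiply)
    v = v + weights[in_spikes].sum(axis=0)
    out_spikes = v >= threshold                  # fire where threshold crossed
    v = np.where(out_spikes, v - threshold, v)   # soft reset by subtraction
    return v, out_spikes

# Example: 3 inputs, 2 output neurons; inputs 0 and 2 spike this step
w = np.array([[0.6, 0.2],
              [0.5, 0.1],
              [0.3, 0.9]])
v = np.zeros(2)
v, s = if_layer_step(v, w, np.array([True, False, True]))
# Neuron 1 accumulates 0.2 + 0.9 = 1.1 >= 1.0 and fires; neuron 0 does not
```

This accumulate-and-compare structure is what lets SNN hardware replace multiply-and-accumulate units with simple adders.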
Related papers
- Decoding finger velocity from cortical spike trains with recurrent spiking neural networks [6.404492073110551]
Invasive brain-machine interfaces (BMIs) can significantly improve the quality of life of motor-impaired patients.
BMIs must meet strict latency and energy constraints while providing reliable decoding performance.
We trained RSNNs to decode finger velocity from cortical spike trains of two macaque monkeys.
arXiv Detail & Related papers (2024-09-03T10:15:33Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Low Precision Quantization-aware Training in Spiking Neural Networks with Differentiable Quantization Function [0.5046831208137847]
This work aims to bridge the gap between recent progress in quantized neural networks and spiking neural networks.
It presents an extensive study on the performance of the quantization function, represented as a linear combination of sigmoid functions.
The presented quantization function demonstrates the state-of-the-art performance on four popular benchmarks.
arXiv Detail & Related papers (2023-05-30T09:42:05Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Energy-Efficient Deployment of Machine Learning Workloads on Neuromorphic Hardware [0.11744028458220425]
Several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs).
Spiking neural networks (SNNs), which operate on discrete time-series data, have been shown to achieve substantial power reductions when deployed on specialized neuromorphic event-based/asynchronous hardware.
In this work, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware.
arXiv Detail & Related papers (2022-10-10T20:27:19Z)
- Accelerating spiking neural network training [1.6114012813668934]
Spiking neural networks (SNN) are a type of artificial network inspired by the use of action potentials in the brain.
We propose a new technique for directly training single-spike-per-neuron SNNs which eliminates all sequential computation and relies exclusively on vectorised operations.
Our proposed solution manages to solve certain tasks with over a 95.68% reduction in spike counts relative to a conventionally trained SNN, which could significantly reduce energy requirements when deployed on neuromorphic computers.
arXiv Detail & Related papers (2022-05-30T17:48:14Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around the 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its software full-precision counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- E3NE: An End-to-End Framework for Accelerating Spiking Neural Networks with Emerging Neural Encoding on FPGAs [6.047137174639418]
End-to-end framework E3NE automates the generation of efficient SNN inference logic for FPGAs.
E3NE uses less than 50% of hardware resources and 20% less power, while reducing the latency by an order of magnitude.
arXiv Detail & Related papers (2021-11-19T04:01:19Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
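The differentiable quantization function described in the quantization-aware training entry above can be sketched as a linear combination of sigmoids (an illustrative reconstruction based only on the summary; the step positions, temperature, and output range used here are assumptions, not the paper's exact formulation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_quantize(x, n_levels=4, temperature=10.0):
    """Differentiable staircase built from a sum of sigmoids.

    Each sigmoid contributes one step of height 1/(n_levels - 1), so the
    output approximates uniform quantization of x in [0, 1] while remaining
    smooth enough for gradient-based training. A higher temperature makes
    the steps sharper (closer to hard quantization).
    """
    # Hypothetical step centers, evenly spaced between quantization levels
    steps = (np.arange(1, n_levels) - 0.5) / (n_levels - 1)
    y = np.zeros_like(x, dtype=float)
    for s in steps:
        y += sigmoid(temperature * (x - s)) / (n_levels - 1)
    return y

x = np.linspace(0.0, 1.0, 5)
q = soft_quantize(x)  # smooth, monotone approximation of levels {0, 1/3, 2/3, 1}
```

Because the staircase is a sum of smooth functions, its gradient is defined everywhere, which is what makes quantization-aware training with standard backpropagation possible.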
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.