Energy-Efficient Deployment of Machine Learning Workloads on
Neuromorphic Hardware
- URL: http://arxiv.org/abs/2210.05006v1
- Date: Mon, 10 Oct 2022 20:27:19 GMT
- Title: Energy-Efficient Deployment of Machine Learning Workloads on
Neuromorphic Hardware
- Authors: Peyton Chandarana, Mohammadreza Mohammadi, James Seekings, Ramtin Zand
- Abstract summary: Several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs).
Spiking neural networks (SNNs), which operate on discrete time-series data, have been shown to achieve substantial power reductions when deployed on specialized neuromorphic event-based/asynchronous hardware.
In this work, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware.
- Score: 0.11744028458220425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the technology industry is moving towards implementing tasks such as
natural language processing, path planning, image classification, and more on
smaller edge computing devices, the demand for more efficient implementations
of algorithms and hardware accelerators has become a significant area of
research. In recent years, several edge deep learning hardware accelerators
have been released that specifically focus on reducing the power and area
consumed by deep neural networks (DNNs). On the other hand, spiking neural
networks (SNNs), which operate on discrete time-series data, have been shown to
achieve substantial power reductions over even the aforementioned edge DNN
accelerators when deployed on specialized neuromorphic event-based/asynchronous
hardware. While neuromorphic hardware has demonstrated great potential for
accelerating deep learning tasks at the edge, the current space of algorithms
and hardware is limited and still in rather early development. Thus, many
hybrid approaches have been proposed which aim to convert pre-trained DNNs into
SNNs. In this work, we provide a general guide to converting pre-trained DNNs
into SNNs while also presenting techniques to improve the deployment of
converted SNNs on neuromorphic hardware with respect to latency, power, and
energy. Our experimental results show that when compared against the Intel
Neural Compute Stick 2, Intel's neuromorphic processor, Loihi, consumes up to
27x less power and 5x less energy in the tested image classification tasks by
using our SNN improvement techniques.
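For context on what such a conversion involves: the most common hybrid approach is rate-based conversion, where each ReLU activation is replaced by an integrate-and-fire (IF) neuron and thresholds are calibrated so that spike rates approximate the original activations. The following PyTorch sketch shows that generic recipe (simple threshold balancing on a calibration batch); it is illustrative only, not the authors' exact pipeline.

```python
import torch
import torch.nn as nn

class IFNeuron(nn.Module):
    """Integrate-and-fire neuron: accumulates input current and emits a
    binary spike wherever the membrane potential crosses its threshold."""
    def __init__(self, threshold=1.0):
        super().__init__()
        self.threshold = threshold
        self.v = None  # membrane potential, created lazily to match input shape

    def forward(self, x):
        if self.v is None or self.v.shape != x.shape:
            self.v = torch.zeros_like(x)
        self.v = self.v + x
        spikes = (self.v >= self.threshold).float()
        self.v = self.v - spikes * self.threshold  # "soft" reset keeps the residue
        return spikes

def convert(dnn: nn.Sequential, calib: torch.Tensor) -> nn.Sequential:
    """Swap each ReLU for an IF neuron whose threshold equals the largest
    activation observed on a calibration batch (simple threshold balancing)."""
    thresholds, x = [], calib
    with torch.no_grad():
        for layer in dnn:
            x = layer(x)
            if isinstance(layer, nn.ReLU):
                thresholds.append(x.max().item())
    it = iter(thresholds)
    return nn.Sequential(*[IFNeuron(next(it)) if isinstance(l, nn.ReLU) else l
                           for l in dnn])

# Rate-coded inference: hold the input constant for T timesteps and let
# accumulated spike counts approximate the original ReLU activations.
dnn = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
snn = convert(dnn, calib=torch.rand(64, 784))
T, x = 100, torch.rand(1, 784)
counts = sum(snn(x) for _ in range(T))
print(counts.argmax(dim=1))  # predicted class from the rate readout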
Related papers
- Towards Scalable GPU-Accelerated SNN Training via Temporal Fusion [8.995682796140429]
Spiking Neural Networks (SNNs) have emerged as a transformative development in artificial intelligence.
SNNs show promising efficiency on specialized sparse-computational hardware, but their practical training often relies on conventional GPUs.
We present a novel temporal fusion method, specifically designed to expedite the propagation dynamics of SNNs on GPU platforms.
arXiv Detail & Related papers (2024-08-01T04:41:56Z)
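The summary above does not describe the fusion itself. One standard way to speed up SNN propagation on GPUs, shown here purely as an illustration of the general idea rather than as this paper's method, is to fold the time dimension into the batch dimension for stateless layers, so the GPU runs one large kernel instead of T small sequential ones:

```python
import torch
import torch.nn as nn

T, B, N_in, N_out = 16, 32, 256, 128
x = torch.rand(T, B, N_in)   # spike trains: T timesteps of a batch of B inputs
fc = nn.Linear(N_in, N_out)  # a stateless layer (no temporal dependence)

# Naive: T separate small matmuls, launched sequentially.
slow = torch.stack([fc(x[t]) for t in range(T)])

# Fused: fold time into batch, run one big matmul, then unfold.
fast = fc(x.reshape(T * B, N_in)).reshape(T, B, N_out)

assert torch.allclose(slow, fast, atol=1e-5)
```

Only stateful components (the spiking neurons themselves) then need any per-timestep treatment.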
- Detection of Fast-Moving Objects with Neuromorphic Hardware [12.323012135924374]
Neuromorphic Computing (NC) and SNNs in particular are often viewed as the next generation of Neural Networks (NNs).
arXiv Detail & Related papers (2024-03-15T20:53:10Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
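For orientation, SpikingJelly's documented usage pattern interleaves ordinary PyTorch layers with stateful spiking neuron modules and resets membrane state between inputs. A minimal sketch (module paths follow recent activation_based releases and may differ in older versions):

```python
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, functional

# A tiny rate-coded classifier: standard PyTorch layers interleaved with
# a spiking neuron module that carries membrane state across timesteps.
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 10, bias=False),
    neuron.LIFNode(tau=2.0),
)

x = torch.rand(8, 1, 28, 28)               # a batch of image-like inputs
T = 20                                     # simulation timesteps
rates = sum(net(x) for _ in range(T)) / T  # per-class firing rates
functional.reset_net(net)                  # clear state before the next batch
```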
- Are SNNs Truly Energy-efficient? - A Hardware Perspective [7.539212567508529]
Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities.
This work studies two hardware benchmarking platforms for large-scale SNN inference, namely SATA and SpikeSim.
arXiv Detail & Related papers (2023-09-06T22:23:22Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
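snnTorch's core API, for reference, looks like the sketch below; the IPU-optimized release described above additionally relies on Graphcore's tooling, which this generic CPU/GPU example does not show:

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)     # leaky integrate-and-fire; beta = decay per step
mem = lif.init_leaky()        # zero-initialized membrane potential

spikes = []
for t in range(25):           # iterate over timesteps
    cur = torch.rand(4, 10)   # stand-in for the synaptic input current
    spk, mem = lif(cur, mem)  # spike wherever the membrane crosses threshold
    spikes.append(spk)
spikes = torch.stack(spikes)  # shape: (timesteps, batch, neurons)
```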
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing.
SNNs require very long spike trains (up to 1,000 time steps) to reach an accuracy similar to their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
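The need for long spike trains follows from the sampling noise of rate coding: a firing rate estimated from T binary samples has a standard error that shrinks only as 1/sqrt(T). A small Poisson-encoding sketch (illustrative; not necessarily the encoding this paper targets) makes that concrete:

```python
import torch

def poisson_encode(x: torch.Tensor, T: int) -> torch.Tensor:
    """Each timestep, an input of intensity p in [0, 1] spikes with probability p."""
    return (torch.rand(T, *x.shape) < x).float()   # (T, ...) binary spike train

x = torch.tensor([0.1, 0.5, 0.9])
for T in (10, 100, 1000):
    est = poisson_encode(x, T).mean(dim=0)  # intensity recovered from spike counts
    print(T, est.tolist())                  # error shrinks roughly as 1/sqrt(T)
```

Halving the readout error costs four times as many timesteps, which is why alternative encodings are attractive for hardware.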
- Accelerating spiking neural network training [1.6114012813668934]
Spiking neural networks (SNNs) are a type of artificial network inspired by the use of action potentials in the brain.
We propose a new technique for directly training single-spike-per-neuron SNNs which eliminates all sequential computation and relies exclusively on vectorised operations.
Our proposed solution manages to solve certain tasks with over a 95.68% reduction in spike counts relative to a conventionally trained SNN, which could significantly reduce energy requirements when deployed on neuromorphic computers.
arXiv Detail & Related papers (2022-05-30T17:48:14Z)
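To see how sequential computation can be eliminated for single-spike neurons: an integrate-and-fire neuron driven by a constant input current has a closed-form first-spike time, so an entire layer can be evaluated in one vectorised expression instead of a timestep loop. The sketch below illustrates the general idea only, not this paper's actual formulation:

```python
import torch

theta = 1.0                                        # firing threshold
I = torch.randint(1, 100, (1000,)).float() / 64.0  # constant currents, one per neuron

# Naive sequential simulation: march forward until each neuron fires once.
t_loop = torch.zeros(1000)
for t in range(1, 65):                             # 64 steps suffice for these currents
    fired = (I * t >= theta) & (t_loop == 0)
    t_loop[fired] = float(t)

# Vectorised closed form: the first crossing is at ceil(theta / I), no loop at all.
t_closed = torch.ceil(theta / I)

assert torch.equal(t_loop, t_closed)
```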
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA and uses around 40% of the available hardware resources in total.
It reduces classification time by three orders of magnitude, with only a 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS)-encoded neuromorphic systems (TTFS coding is sketched below).
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
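Time-to-first-spike (TTFS) coding, which the entry above builds on, carries each value in the latency of a single spike rather than in a firing rate, so every neuron fires at most once and additions dominate the arithmetic. A minimal encoder/decoder sketch under that assumption:

```python
import torch

def ttfs_encode(x: torch.Tensor, T: int) -> torch.Tensor:
    """Map intensities in [0, 1] to one spike each; larger values fire earlier."""
    t = ((1.0 - x) * (T - 1)).round().long()  # bright input -> early spike time
    spikes = torch.zeros(T, *x.shape)
    spikes.scatter_(0, t.unsqueeze(0), 1.0)   # exactly one spike per neuron
    return spikes

def ttfs_decode(spikes: torch.Tensor) -> torch.Tensor:
    T = spikes.shape[0]
    t = spikes.argmax(dim=0).float()          # time of the first (only) spike
    return 1.0 - t / (T - 1)

x = torch.tensor([0.0, 0.25, 0.5, 1.0])
assert torch.allclose(ttfs_decode(ttfs_encode(x, T=101)), x)
```

One spike per neuron is what makes TTFS systems so sparse, and also what makes them fragile: a single mistimed spike loses the whole value, which is one of the limitations such systems must contend with.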