Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware
- URL: http://arxiv.org/abs/2004.01656v3
- Date: Mon, 26 Oct 2020 21:20:31 GMT
- Title: Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware
- Authors: Christoph Ostrau, Jonas Homburg, Christian Klarhorst, Michael Thies, Ulrich Rückert
- Abstract summary: We use the methodology of converting pre-trained non-spiking neural networks to spiking neural networks to evaluate the performance loss and measure the energy-per-inference.
We demonstrate that the conversion loss is usually below one percent for digital implementations, and moderately higher for analog systems with the benefit of much lower energy-per-inference costs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With more and more event-based neuromorphic hardware systems being developed
at universities and in industry, there is a growing need for assessing their
performance with domain-specific measures. In this work, we use the methodology
of converting pre-trained non-spiking neural networks to spiking neural networks to evaluate
the performance loss and measure the energy-per-inference for three
neuromorphic hardware systems (BrainScaleS, Spikey, SpiNNaker) and common
simulation frameworks for CPU (NEST) and CPU/GPU (GeNN). For analog hardware we
further apply a re-training technique known as hardware-in-the-loop training to
cope with device mismatch. This analysis is performed for five different
networks, including three networks that have been found by an automated
optimization with a neural architecture search framework. We demonstrate that
the conversion loss is usually below one percent for digital implementations,
and moderately higher for analog systems with the benefit of much lower
energy-per-inference costs.
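As a rough illustration of the conversion methodology, the sketch below turns a (placeholder) pretrained ReLU network into a spiking network via data-based weight normalization and runs integrate-and-fire neurons with reset-by-subtraction, so that spike counts approximate the ReLU activations. This is a minimal sketch, not the authors' toolchain; the layer sizes, weights, and calibration set are illustrative assumptions.

```python
# Minimal ANN-to-SNN conversion sketch (illustrative; not the paper's toolchain).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained 784-100-10 ReLU classifier.
W1, b1 = 0.1 * rng.standard_normal((100, 784)), np.zeros(100)
W2, b2 = 0.1 * rng.standard_normal((10, 100)), np.zeros(10)

def ann_forward(x):
    h = np.maximum(0.0, W1 @ x + b1)            # hidden ReLU activations
    return h, np.maximum(0.0, W2 @ h + b2)      # output activations

# Data-based weight normalization: rescale each layer so its maximum
# activation over a calibration set equals the firing threshold (1.0).
calib = rng.random((64, 784))                   # placeholder calibration inputs
lam1 = max(ann_forward(x)[0].max() for x in calib)
lam2 = max(ann_forward(x)[1].max() for x in calib)
W1n, b1n = W1 / lam1, b1 / lam1
W2n, b2n = W2 * lam1 / lam2, b2 / lam2

def snn_infer(x, T=200, thresh=1.0):
    """Integrate-and-fire inference; the class with the most spikes wins."""
    v1, v2 = np.zeros(100), np.zeros(10)
    counts = np.zeros(10)
    for _ in range(T):
        v1 += W1n @ x + b1n                     # constant-current input encoding
        s1 = (v1 >= thresh).astype(float)
        v1 -= s1 * thresh                       # reset by subtraction keeps rates linear
        v2 += W2n @ s1 + b2n
        s2 = (v2 >= thresh).astype(float)
        v2 -= s2 * thresh
        counts += s2
    return int(counts.argmax())

print(snn_infer(rng.random(784)))
```

For the analog systems, the hardware-in-the-loop step described in the abstract would then re-train such normalized weights against activity recorded from the device itself, compensating for device mismatch.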
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the constraints of IoVT systems by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Quantization-aware Neural Architectural Search for Intrusion Detection [5.010685611319813]
We present a design methodology that automatically trains and evolves quantized neural network (NN) models that are a thousand times smaller than state-of-the-art NNs.
When deployed to an FPGA, the network uses between 2.3x and 8.5x fewer LUTs than prior work, with comparable performance.
arXiv Detail & Related papers (2023-11-07T18:35:29Z)
- Synaptic metaplasticity with multi-level memristive devices [1.5598974049838272]
We propose a memristor-based hardware solution for implementing metaplasticity during both inference and training.
We show that a two-layer perceptron achieves 97% and 86% accuracy on consecutive training of MNIST and Fashion-MNIST.
Our architecture is compatible with the limited endurance of memristors and achieves a 15x reduction in memory.
arXiv Detail & Related papers (2023-06-21T09:40:25Z)
- Evolving Connectivity for Recurrent Spiking Neural Networks [8.80300633999542]
Recurrent spiking neural networks (RSNNs) hold great potential for advancing artificial general intelligence.
We propose the evolving connectivity (EC) framework, an inference-only method for training RSNNs.
arXiv Detail & Related papers (2023-05-28T07:08:25Z)
- Gradient-descent hardware-aware training and deployment for mixed-signal neuromorphic processors [2.812395851874055]
Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads.
We demonstrate a novel methodology for offline training and deployment of spiking neural networks (SNNs) to the mixed-signal neuromorphic processor DYNAP-SE2.
arXiv Detail & Related papers (2023-03-14T08:56:54Z)
- ETLP: Event-based Three-factor Local Plasticity for online learning with neuromorphic hardware [105.54048699217668]
We show competitive accuracy with a clear advantage in computational complexity for Event-based Three-factor Local Plasticity (ETLP); a toy three-factor update is sketched after this entry.
We also show that, when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn spatio-temporal patterns with a rich temporal structure.
arXiv Detail & Related papers (2023-01-19T19:45:42Z)
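To unpack what "three-factor local plasticity" means in the entry above: each weight update combines a presynaptic trace, a local postsynaptic term, and a third top-down learning signal. The toy sketch below only shows the shape of such a rule; the actual ETLP rule, its event-driven scheduling, and its learning signal are defined in the cited paper, and every constant here is an illustrative assumption.

```python
# Toy three-factor local plasticity update (illustrative; not the exact ETLP rule).
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 50, 10
W = 0.05 * rng.standard_normal((n_post, n_pre))
pre_trace = np.zeros(n_pre)                     # factor 1: filtered presynaptic spikes
v = np.zeros(n_post)
tau_pre, thresh, lr = 20.0, 1.0, 1e-3

def post_term(v):
    # factor 2: local postsynaptic term, here a boxcar pseudo-derivative
    return (np.abs(v - thresh) < 0.5).astype(float)

for t in range(200):
    pre_spikes = (rng.random(n_pre) < 0.05).astype(float)
    pre_trace += -pre_trace / tau_pre + pre_spikes
    v += W @ pre_spikes                               # integrate input current
    g = post_term(v)
    v = np.where(v >= thresh, 0.0, v)                 # fire and reset
    learn_signal = 0.1 * rng.standard_normal(n_post)  # factor 3: placeholder error signal
    W += lr * np.outer(learn_signal * g, pre_trace)   # three-factor product
```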
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch; a basic usage sketch follows this entry.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
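snnTorch itself is openly available; a minimal CPU usage sketch in the package's canonical style is shown below. The IPU-specific optimizations of the release mentioned above are described in the cited paper and not reproduced here; the layer sizes and input are placeholders.

```python
# Minimal snnTorch usage sketch (CPU; the cited paper's IPU build is not shown).
import torch
import snntorch as snn

fc = torch.nn.Linear(784, 10)       # placeholder readout layer
lif = snn.Leaky(beta=0.9)           # leaky integrate-and-fire neurons
mem = lif.init_leaky()              # zero-initialized membrane state

x = torch.rand(25, 784)             # 25 time steps of placeholder input
spk_rec = []
for step in range(25):
    cur = fc(x[step])               # input current for this time step
    spk, mem = lif(cur, mem)        # one membrane update; emits spikes at threshold
    spk_rec.append(spk)

print(torch.stack(spk_rec).sum(0))  # spike count per output neuron
```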
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents a hardware accelerator for an SNN with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources.
Compared to its full-precision software counterpart, it reduces classification time by three orders of magnitude with a small 4.5% impact on accuracy.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task; a toy sketch of the idea follows this entry.
arXiv Detail & Related papers (2021-02-22T19:00:05Z)
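The idea behind quantization-aware pruning can be sketched in a few lines: a magnitude-based mask fixes the sparsity pattern while a straight-through fake-quantizer constrains the surviving weights during training. The sketch below is a hedged illustration on a single weight matrix; the cited paper's bit widths, schedules, and task are not reproduced.

```python
# Toy quantization-aware pruning (QAP) sketch on one weight matrix.
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((64, 64))
x = rng.standard_normal(64)

def prune_mask(W, sparsity=0.75):
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    k = int(W.size * sparsity)
    cut = np.partition(np.abs(W).ravel(), k)[k]
    return (np.abs(W) >= cut).astype(float)

def fake_quant(W, bits=4):
    """Uniform symmetric quantizer; gradients pass straight through the rounding."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(W / scale) * scale

mask = prune_mask(W)                       # one-shot pruning; mask stays fixed
for step in range(100):
    Wq = fake_quant(W * mask)              # forward pass sees pruned+quantized weights
    y = Wq @ x                             # placeholder forward computation
    grad_y = y - 1.0                       # gradient of placeholder loss 0.5*||y - 1||^2
    grad_W = np.outer(grad_y, x)           # straight-through estimator: d(quant) ~= 1
    W -= 0.01 * grad_W * mask              # update only the surviving weights

print(f"sparsity={1 - mask.mean():.2f}, levels={np.unique(fake_quant(W * mask)).size}")
```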
- Surrogate gradients for analog neuromorphic computing [2.6475944316982942]
We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on vision and speech benchmarks.
Our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic hardware; a minimal surrogate-gradient sketch follows this list.
arXiv Detail & Related papers (2020-06-12T14:45:12Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
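For the surrogate-gradient entry above, the core trick can be shown compactly: the spike nonlinearity is a hard threshold in the forward pass but uses a smooth pseudo-derivative in the backward pass, making end-to-end gradient descent through a spiking network well-defined. The fast-sigmoid surrogate and all constants below are common choices, not necessarily those of the cited paper.

```python
# Minimal surrogate-gradient sketch (PyTorch); constants are illustrative.
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid derivative in the backward."""
    scale = 10.0

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                  # hard threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # smooth pseudo-derivative replaces the Heaviside's zero-a.e. gradient
        return grad_out / (SurrGradSpike.scale * v.abs() + 1.0) ** 2

spike = SurrGradSpike.apply

W = (0.1 * torch.randn(10, 50)).requires_grad_()   # trainable input weights
inp = (torch.rand(100, 50) < 0.1).float()          # placeholder input spike trains
v = torch.zeros(10)
spikes = []
for t in range(100):
    v = 0.9 * v + inp[t] @ W.t()                   # leaky membrane integration
    s = spike(v - 1.0)                             # spike when v crosses threshold 1
    v = v - s                                      # soft reset, itself differentiable
    spikes.append(s)

loss = torch.stack(spikes).sum(0).pow(2).mean()    # placeholder rate-based loss
loss.backward()                                    # well-defined thanks to the surrogate
print(W.grad.abs().mean())
```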
This list is automatically generated from the titles and abstracts of the papers on this site.