Benchmarking energy consumption and latency for neuromorphic computing
in condensed matter and particle physics
- URL: http://arxiv.org/abs/2209.10481v1
- Date: Wed, 21 Sep 2022 16:33:44 GMT
- Title: Benchmarking energy consumption and latency for neuromorphic computing
in condensed matter and particle physics
- Authors: Dominique J. Kösters, Bryan A. Kortman, Irem Boybat, Elena Ferro,
Sagar Dolas, Roberto de Austri, Johan Kwisthout, Hans Hilgenkamp, Theo
Rasing, Heike Riel, Abu Sebastian, Sascha Caron and Johan H. Mentink
- Abstract summary: We present a methodology for measuring the energy cost and compute time for inference tasks with artificial neural networks (ANNs) on conventional hardware.
We estimate the same metrics based on a state-of-the-art analog in-memory computing platform, one of the key paradigms in neuromorphic computing.
We find that AIMC can achieve up to one order of magnitude shorter times than conventional hardware, at an energy cost that is up to three orders of magnitude smaller.
- Score: 0.309894133212992
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The massive use of artificial neural networks (ANNs), increasingly popular in
many areas of scientific computing, rapidly increases the energy consumption of
modern high-performance computing systems. An appealing and possibly more
sustainable alternative is provided by novel neuromorphic paradigms, which
directly implement ANNs in hardware. However, little is known about the actual
benefits of running ANNs on neuromorphic hardware for use cases in scientific
computing. Here we present a methodology for measuring the energy cost and
compute time for inference tasks with ANNs on conventional hardware. In
addition, we have designed an architecture for these tasks and estimate the
same metrics based on a state-of-the-art analog in-memory computing (AIMC)
platform, one of the key paradigms in neuromorphic computing. Both
methodologies are compared for a use case in quantum many-body physics in two
dimensional condensed matter systems and for anomaly detection at 40 MHz rates
at the Large Hadron Collider in particle physics. We find that AIMC can achieve
up to one order of magnitude shorter computation times than conventional
hardware, at an energy cost that is up to three orders of magnitude smaller.
This suggests great potential for faster and more sustainable scientific
computing with neuromorphic hardware.
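The measurement methodology described above can be reduced to a minimal sketch: time a batched inference and convert an externally measured average power draw into an energy cost. Everything below is illustrative, not the paper's actual setup; the function name, the dense-layer stand-in for an ANN, and the constant power figure are all assumptions, and a real measurement would read power from a hardware meter rather than a fixed constant.

```python
import time
import numpy as np

def measure_inference(weights, inputs, avg_power_watts):
    """Time a simple dense-layer 'inference' and estimate its energy cost.

    avg_power_watts is assumed to come from an external power meter;
    here it is treated as a constant supplied by the caller.
    """
    start = time.perf_counter()
    outputs = np.tanh(inputs @ weights)   # stand-in for an ANN forward pass
    latency_s = time.perf_counter() - start
    energy_j = avg_power_watts * latency_s  # E = P * t for constant power
    return outputs, latency_s, energy_j

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 64))    # toy weight matrix
X = rng.standard_normal((1024, 256))  # batch of 1024 inputs
outputs, latency, energy = measure_inference(W, X, avg_power_watts=50.0)
```

Comparing such per-inference energy figures between conventional hardware and an AIMC estimate is what yields the orders-of-magnitude ratios quoted in the abstract.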
Related papers
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z) - Detection of Fast-Moving Objects with Neuromorphic Hardware [12.323012135924374]
Neuromorphic Computing (NC), and Spiking Neural Networks (SNNs) in particular, are often viewed as the next generation of Neural Networks (NNs)
arXiv Detail & Related papers (2024-03-15T20:53:10Z) - Random resistive memory-based deep extreme point learning machine for
unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design, random resistive memory-based deep extreme point learning machine (DEPLM)
Our co-design system achieves large energy efficiency improvements and training cost reductions when compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z) - Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models present unprecedented challenges to energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning.
arXiv Detail & Related papers (2023-11-13T08:59:01Z) - To Spike or Not To Spike: A Digital Hardware Perspective on Deep
Learning Acceleration [4.712922151067433]
As deep learning models scale, they become increasingly competitive in domains spanning from computer vision to natural language processing.
The power efficiency of the biological brain outperforms any large-scale deep learning (DL) model.
Neuromorphic computing tries to mimic the brain operations to improve the efficiency of DL models.
arXiv Detail & Related papers (2023-06-27T19:04:00Z) - Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern
Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy in such tasks, but their implementation on conventional embedded solutions is still computationally very expensive and energy intensive.
We propose a new benchmark for tactile pattern recognition at the edge through letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z) - Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z) - Cryogenic Neuromorphic Hardware [5.399870108760824]
The concept of implementing neuromorphic computing systems at cryogenic temperatures has garnered immense attention.
Here we provide a comprehensive overview of the reported cryogenic neuromorphic hardware.
arXiv Detail & Related papers (2022-03-25T20:44:02Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth generation neuromorphic chip - Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural
Network [1.1965429476528429]
Neuromorphic computing systems overcome the limitations of traditional von Neumann computing architectures.
Recent research has demonstrated that memristors and spintronic devices in various neural network designs boost efficiency and speed.
This paper presents a biologically inspired fully spintronic neuron used in a fully spintronic Hopfield RNN.
arXiv Detail & Related papers (2021-07-05T19:23:33Z) - Neuromorphic Nearest-Neighbor Search Using Intel's Pohoiki Springs [3.571324984762197]
In the brain, billions of interconnected neurons perform rapid computations at extremely low energy levels.
Here, we showcase the Pohoiki Springs neuromorphic system, a mesh of 768 interconnected Loihi chips that collectively implement 100 million spiking neurons in silicon.
We demonstrate a scalable approximate k-nearest neighbor (k-NN) algorithm for searching large databases that exploits neuromorphic principles.
arXiv Detail & Related papers (2020-04-27T10:23:47Z)
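The conventional baseline behind an approximate k-NN search like the one on Pohoiki Springs can be sketched as an exhaustive distance comparison; the neuromorphic variant in the paper instead encodes distances in spike timing, which is not reproduced here. The function name and data below are illustrative assumptions.

```python
import numpy as np

def knn_search(database, query, k):
    """Brute-force k-nearest-neighbor search by Euclidean distance.

    A conventional baseline: the neuromorphic implementation replaces
    this exhaustive distance computation with sparse spike-based encoding.
    """
    dists = np.linalg.norm(database - query, axis=1)  # distance to every entry
    return np.argsort(dists)[:k]                      # indices of the k closest

rng = np.random.default_rng(1)
db = rng.standard_normal((1000, 32))           # toy database of 1000 vectors
q = db[42] + 0.01 * rng.standard_normal(32)    # query close to entry 42
nearest = knn_search(db, q, k=5)
```

The brute-force version scales linearly with database size per query, which is exactly the cost that a massively parallel spiking implementation aims to amortize.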
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.