Surrogate gradients for analog neuromorphic computing
- URL: http://arxiv.org/abs/2006.07239v3
- Date: Thu, 20 May 2021 14:13:26 GMT
- Title: Surrogate gradients for analog neuromorphic computing
- Authors: Benjamin Cramer, Sebastian Billaudelle, Simeon Kanya, Aron Leibfried,
Andreas Gr\"ubl, Vitali Karasenko, Christian Pehle, Korbinian Schreiber,
Yannik Stradmann, Johannes Weis, Johannes Schemmel, Friedemann Zenke
- Abstract summary: We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on vision and speech benchmarks.
Our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic hardware.
- Score: 2.6475944316982942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To rapidly process temporal information at a low metabolic cost, biological
neurons integrate inputs as an analog sum but communicate with spikes, binary
events in time. Analog neuromorphic hardware uses the same principles to
emulate spiking neural networks with exceptional energy-efficiency. However,
instantiating high-performing spiking networks on such hardware remains a
significant challenge due to device mismatch and the lack of efficient training
algorithms. Here, we introduce a general in-the-loop learning framework based
on surrogate gradients that resolves these issues. Using the BrainScaleS-2
neuromorphic system, we show that learning self-corrects for device mismatch,
resulting in competitive spiking network performance on both vision and speech
benchmarks. Our networks display sparse spiking activity with, on average, far
less than one spike per hidden neuron and input, perform inference at rates of
up to 85k frames per second, and consume less than 200 mW. In summary, our work
sets several new benchmarks for low-energy spiking network processing on analog
neuromorphic hardware and paves the way for future on-chip learning algorithms.
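For readers unfamiliar with the technique, the sketch below illustrates the core idea of a surrogate gradient: a hard threshold in the forward pass paired with a smooth pseudo-derivative in the backward pass. This is a minimal PyTorch illustration, not the authors' in-the-loop BrainScaleS-2 implementation, and the neuron parameters are placeholders.

```python
# Minimal surrogate-gradient sketch in PyTorch (illustrative only; the paper's
# in-the-loop setup runs the forward pass on BrainScaleS-2 hardware).
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # spike when the membrane crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # SuperSpike-style surrogate derivative: 1 / (1 + |v|)^2
        return grad_output / (1.0 + v.abs()) ** 2

spike = SurrogateSpike.apply

def lif_step(v, i_syn, beta=0.95, threshold=1.0):
    """One leaky integrate-and-fire time step with placeholder parameters."""
    v = beta * v + i_syn          # leaky integration of synaptic input
    s = spike(v - threshold)      # non-differentiable spike, surrogate gradient
    v = v - s * threshold         # reset by subtraction after a spike
    return v, s
```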
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
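The summary does not specify the payload encoding; the sketch below shows one plausible way to pack a few bits into each spike, purely as an assumed illustration rather than the paper's scheme.

```python
import numpy as np

def encode_multilevel(membrane_surplus, bits=2):
    """Quantize the suprathreshold drive of a neuron into a small integer
    payload (hypothetical encoding; the paper's exact scheme may differ)."""
    levels = 2 ** bits
    # Map a surplus in [0, 1) to one of `levels` graded spike amplitudes.
    return np.clip((membrane_surplus * levels).astype(int), 0, levels - 1)

# A 2-bit spike carries one of four graded values instead of a binary event.
print(encode_multilevel(np.array([0.1, 0.4, 0.9])))  # -> [0 1 3]
```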
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
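As a point of reference for what the analog solver integrates continuously, the sketch below shows one digital Euler step of the probability-flow ODE used in score-based diffusion; the score function and parameters are placeholders, not the paper's circuit.

```python
import numpy as np

def score(x, t):
    # Placeholder for the learned score network s(x, t) ~ grad_x log p_t(x);
    # in the paper this evaluation maps onto resistive in-memory macros.
    return -x  # exact score of a standard Gaussian, for illustration

def probability_flow_step(x, t, dt, beta=1.0):
    """One explicit Euler step of the probability-flow ODE,
    dx/dt = -0.5 * beta(t) * (x + score(x, t)),
    which a time-continuous analog solver would integrate physically."""
    return x + (-0.5 * beta * (x + score(x, t))) * dt
```

Sampling integrates this ODE backward in time (dt < 0) from noise toward data; the analog solver replaces the discrete stepping with continuous physical dynamics.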
arXiv Detail & Related papers (2024-04-08T16:34:35Z) - Low-power event-based face detection with asynchronous neuromorphic
hardware [2.0774873363739985]
We present the first instance of an on-chip spiking neural network for event-based face detection deployed on the SynSense Speck neuromorphic chip.
We show how to reduce precision discrepancies between off-chip clock-driven simulation used for training and on-chip event-driven inference.
We achieve an on-chip face detection mAP[0.5] of 0.6 while consuming only 20 mW.
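One standard way to shrink such train/inference precision gaps is fake quantization with a straight-through estimator during training; the sketch below shows that generic idea and is not claimed to be the Speck deployment pipeline.

```python
import torch

def quantize_ste(w, n_bits=8):
    """Fake-quantize weights in the forward pass while letting gradients pass
    straight through -- a common way to align off-chip float training with
    on-chip low-precision inference (generic sketch, assumed parameters)."""
    scale = w.abs().max().clamp(min=1e-12) / (2 ** (n_bits - 1) - 1)
    w_q = torch.round(w / scale).clamp(
        -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1) * scale
    return w + (w_q - w).detach()  # straight-through estimator
```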
arXiv Detail & Related papers (2023-12-21T19:23:02Z) - ETLP: Event-based Three-factor Local Plasticity for online learning with
neuromorphic hardware [105.54048699217668]
We show competitive accuracy with a clear advantage in computational complexity for Event-based Three-factor Local Plasticity (ETLP).
We also show that when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn temporal patterns with a rich temporal structure.
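The summary names the ingredients of a three-factor rule; the sketch below shows the generic shape of such an update (a presynaptic eligibility trace, a local postsynaptic term, and a third error-like factor). It is an assumed illustration, not the exact ETLP rule.

```python
import numpy as np

def three_factor_update(w, pre_trace, post_term, learning_signal, lr=1e-3):
    """Generic three-factor local plasticity step.

    w               -- weight matrix, shape (n_post, n_pre)
    pre_trace       -- presynaptic eligibility trace, shape (n_pre,)
    post_term       -- local postsynaptic factor, shape (n_post,)
    learning_signal -- third factor (e.g., error/neuromodulator), shape (n_post,)
    """
    eligibility = np.outer(post_term, pre_trace)       # purely local quantity
    return w + lr * learning_signal[:, None] * eligibility
```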
arXiv Detail & Related papers (2023-01-19T19:45:42Z) - Biologically Plausible Learning on Neuromorphic Hardware Architectures [27.138481022472]
Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories.
This work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware and, conversely, the impact of the hardware on those algorithms.
arXiv Detail & Related papers (2022-12-29T15:10:59Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
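Since snnTorch is a public Python package, a minimal usage example can show the training primitive the IPU release accelerates; the layer sizes and parameters below are arbitrary.

```python
import torch
import snntorch as snn
from snntorch import surrogate

# A small leaky integrate-and-fire layer trained with surrogate gradients;
# this per-time-step loop is the workload the IPU-optimized release targets.
fc = torch.nn.Linear(784, 100)
lif = snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid())

mem = lif.init_leaky()            # initialize the membrane state
x = torch.rand(32, 784)           # one time step of input (batch of 32)
spk, mem = lif(fc(x), mem)        # spikes and updated membrane potential
```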
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Inference with Artificial Neural Networks on Analog Neuromorphic
Hardware [0.0]
The BrainScaleS-2 ASIC comprises mixed-signal neuron and synapse circuits.
The system can also operate in a vector-matrix multiplication and accumulation mode for artificial neural networks.
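A minimal numerical model of that mode, assuming weights quantized to the synapse resolution (6-bit on BrainScaleS-2) plus additive read noise; the noise level is illustrative, not a measured specification.

```python
import numpy as np

def analog_mac(x, w, w_bits=6, noise_frac=0.02):
    """Model a vector-matrix multiply-accumulate on analog hardware:
    quantized synaptic weights and Gaussian noise on the accumulated output."""
    w_max = 2 ** (w_bits - 1) - 1
    scale = np.abs(w).max() / w_max          # assumes w is not all zeros
    w_q = np.round(w / scale) * scale        # quantized synaptic weights
    y = w_q @ x                              # parallel analog accumulation
    return y + np.random.normal(0.0, noise_frac * np.abs(y).max(), y.shape)
```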
arXiv Detail & Related papers (2020-06-23T17:25:06Z) - One-step regression and classification with crosspoint resistive memory
arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of Boston housing-price prediction and of the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
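In digital terms, the "one computational step" corresponds to the closed-form least-squares solution that the crosspoint array realizes physically; the sketch below shows the mathematics only, with synthetic data standing in for the housing features.

```python
import numpy as np

# Closed-form least-squares solve of X w = y -- what the crosspoint array
# computes in a single physical operation instead of iterative gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 13))             # e.g., 13 housing features
w_true = rng.normal(size=13)
y = X @ w_true + 0.01 * rng.normal(size=100)

w = np.linalg.lstsq(X, y, rcond=None)[0]   # one "step", no training loop
print(np.allclose(w, w_true, atol=0.05))   # True: weights recovered
```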
arXiv Detail & Related papers (2020-05-05T08:00:07Z) - Spiking Neural Networks Hardware Implementations and Challenges: a
Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z) - Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware [0.0]
We use the methodology of converting pre-trained non-spiking neural networks to spiking ones to evaluate the performance loss and measure the energy-per-inference.
We demonstrate that the conversion loss is usually below one percent for digital implementations, and moderately higher for analog systems with the benefit of much lower energy-per-inference costs.
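Such conversion rests on rate coding: an integrate-and-fire neuron driven by a constant activation fires at a rate approximating a ReLU. A toy illustration of that correspondence (not the paper's full pipeline):

```python
def if_neuron_rate(a, T=100, threshold=1.0):
    """Firing rate of an integrate-and-fire neuron under constant input `a`;
    over T steps the rate approximates relu(a), the core of ANN-to-SNN
    conversion (generic idea, assumed parameters)."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += a
        if v >= threshold:
            spikes += 1
            v -= threshold          # reset by subtraction
    return spikes / T

print(if_neuron_rate(0.3))   # ~0.3, matching the ReLU activation
print(if_neuron_rate(-0.2))  # 0.0, negative activations produce no spikes
```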
arXiv Detail & Related papers (2020-04-03T16:25:49Z) - Structural plasticity on an accelerated analog neuromorphic hardware
system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
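A generic prune-and-regrow sketch of such constant-fan-in rewiring, offered as an assumed illustration rather than the on-chip BrainScaleS-2 implementation:

```python
import numpy as np

def rewire(w, active_mask, prune_fraction=0.1, rng=np.random.default_rng(0)):
    """Prune the weakest active synapses and regrow the same number at
    previously inactive sites, keeping the synapse count constant."""
    candidates = np.flatnonzero(~active_mask)          # possible new sites
    active = np.flatnonzero(active_mask)
    n = max(1, int(prune_fraction * active.size))
    weakest = active[np.argsort(np.abs(w.flat[active]))[:n]]
    active_mask.flat[weakest] = False                  # prune weakest synapses
    w.flat[weakest] = 0.0
    grown = rng.choice(candidates, size=n, replace=False)
    active_mask.flat[grown] = True                     # regrow elsewhere
    w.flat[grown] = rng.uniform(0.0, 0.1, size=n)      # small initial weights
    return w, active_mask
```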
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.