Event-based Backpropagation for Analog Neuromorphic Hardware
- URL: http://arxiv.org/abs/2302.07141v1
- Date: Mon, 13 Feb 2023 18:55:59 GMT
- Title: Event-based Backpropagation for Analog Neuromorphic Hardware
- Authors: Christian Pehle, Luca Blessing, Elias Arnold, Eric Müller, and Johannes Schemmel
- Abstract summary: We present our progress implementing the EventProp algorithm using the example of the BrainScaleS-2 analog neuromorphic hardware.
We present the theoretical framework for estimating gradients and results verifying the correctness of the estimation.
It suggests the feasibility of a full on-device implementation of the algorithm that would enable scalable, energy-efficient, event-based learning in large-scale analog neuromorphic hardware.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuromorphic computing aims to incorporate lessons from studying biological
nervous systems in the design of computer architectures. While existing
approaches have successfully implemented aspects of those computational
principles, such as sparse spike-based computation, event-based scalable
learning has remained an elusive goal in large-scale systems. However, only
then can the potential energy-efficiency advantages of neuromorphic systems
relative to other hardware architectures be realized during learning. We
present our progress implementing the EventProp algorithm using the example of
the BrainScaleS-2 analog neuromorphic hardware. Previous gradient-based
approaches to learning used "surrogate gradients" and dense sampling of
observables or were limited by assumptions on the underlying dynamics and loss
functions. In contrast, our approach only needs spike time observations from
the system while being able to incorporate other system observables, such as
membrane voltage measurements, in a principled way. This leads to a
one-order-of-magnitude improvement in the information efficiency of the
gradient estimate, which would directly translate to corresponding energy
efficiency improvements in an optimized hardware implementation. We present the
theoretical framework for estimating gradients and results verifying the
correctness of the estimation, as well as results on a low-dimensional
classification task using the BrainScaleS-2 system. Building on this work has
the potential to enable scalable gradient estimation in large-scale
neuromorphic hardware, where continuous measurement of the system state would be
prohibitive and energy-inefficient. It also suggests the
feasibility of a full on-device implementation of the algorithm that would
enable scalable, energy-efficient, event-based learning in large-scale analog
neuromorphic hardware.
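To make the "spike time observations only" point concrete, here is a schematic sketch of the structure EventProp exploits for leaky integrate-and-fire (LIF) dynamics; the exact signs, jump conditions at spike times, and prefactors are omitted and should be taken from the original EventProp derivation. Between spikes, neuron $j$ follows the free dynamics $\tau_\mathrm{mem}\,\dot{V}_j = -V_j + I_j$ and $\tau_\mathrm{syn}\,\dot{I}_j = -I_j$, and a presynaptic spike from neuron $i$ adds a jump of size $w_{ji}$ to $I_j$. An adjoint pair $(\lambda_V, \lambda_I)$ is then integrated backward in time between the recorded spike times, and the weight gradient collapses to a sum of adjoint samples taken only at those times, schematically $\partial \mathcal{L} / \partial w_{ji} \propto \sum_{t_k \in \mathrm{spikes\ of\ } i} \lambda_I^{(j)}(t_k)$. Only the spike times (plus whatever additional observables, such as membrane voltages, enter the loss $\mathcal{L}$) therefore have to be read back from the hardware, which is the source of the information-efficiency gain described above.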
Related papers
- Meta-Learning for Physically-Constrained Neural System Identification [9.417562391585076]
We present a gradient-based meta-learning framework for rapid adaptation of neural state-space models (NSSMs) for black-box system identification.
We show that the meta-learned models result in improved downstream performance in model-based state estimation in indoor localization and energy systems.
arXiv Detail & Related papers (2025-01-10T18:46:28Z)
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open to novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Center-Sensitive Kernel Optimization for Efficient On-Device Incremental Learning [88.78080749909665]
Current on-device training methods focus only on efficient training, without considering catastrophic forgetting.
This paper proposes a simple but effective edge-friendly incremental learning framework.
Our method achieves an average accuracy boost of 38.08% with even less memory and approximate computation.
arXiv Detail & Related papers (2024-06-13T05:49:29Z)
- Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions to solving a given task.
Our method then implements a greedy approach reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- Biologically Plausible Learning on Neuromorphic Hardware Architectures [27.138481022472]
Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories.
This work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware and vice versa.
arXiv Detail & Related papers (2022-12-29T15:10:59Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Gradient descent in materia through homodyne gradient extraction [2.012950941269354]
We demonstrate a simple yet efficient gradient extraction method, based on the principle of homodyne detection.
By perturbing the parameters that need to be optimized, we effectively obtain the gradient information in a highly robust and scalable manner.
Homodyne gradient extraction can in principle be fully implemented in materia, facilitating the development of autonomously learning material systems (a toy numerical sketch of this principle follows at the end of the related papers list).
arXiv Detail & Related papers (2021-05-15T12:18:31Z)
- A deep learning theory for neural networks grounded in physics [2.132096006921048]
We argue that building large, fast and efficient neural networks on neuromorphic architectures requires rethinking the algorithms to implement and train them.
Our framework applies to a very broad class of models, namely systems whose state or dynamics are described by variational equations.
arXiv Detail & Related papers (2021-03-18T02:12:48Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
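Referring back to the homodyne gradient extraction entry above, the following is a toy numerical sketch of the underlying principle (a simplified illustration, not the procedure or code from that paper; the objective function loss, the chosen frequencies, and all constants are hypothetical choices made here). Each parameter is dithered with a small sinusoid at its own integer frequency, and each gradient component is recovered by lock-in detection: multiplying the measured output by the corresponding reference signal and averaging over a full period.

    import numpy as np

    def loss(theta):
        # Hypothetical scalar objective standing in for the measured device output.
        return float(np.sum(theta ** 2) + np.sin(theta[0]) * theta[1])

    def homodyne_gradient(theta, eps=1e-3, n_samples=4096):
        # Perturb every parameter with a small sinusoid at its own integer
        # frequency, then project the measured output back onto each reference.
        t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
        freqs = np.arange(1, len(theta) + 1)        # one frequency per parameter
        refs = np.sin(np.outer(freqs, t))           # reference signals, shape (P, T)
        outputs = np.array([loss(theta + eps * refs[:, k]) for k in range(n_samples)])
        # Lock-in step: mean(output * sin(f_j * t)) ~ (eps / 2) * dL/dtheta_j.
        return (2.0 / eps) * (refs @ outputs) / n_samples

    theta = np.array([0.3, -1.2, 0.7])
    print("homodyne estimate:", homodyne_gradient(theta))
    print("analytic gradient:", np.array([2 * theta[0] + np.cos(theta[0]) * theta[1],
                                          2 * theta[1] + np.sin(theta[0]),
                                          2 * theta[2]]))

Because sinusoids at distinct integer frequencies are orthogonal over a full period, all parameters can be perturbed simultaneously and still be separated from a single output trace, which is what makes this kind of gradient extraction attractive for autonomously learning physical systems.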