A feedback control optimizer for online and hardware-aware training of Spiking Neural Networks
- URL: http://arxiv.org/abs/2602.13261v1
- Date: Tue, 03 Feb 2026 10:08:21 GMT
- Title: A feedback control optimizer for online and hardware-aware training of Spiking Neural Networks
- Authors: Matteo Saponati, Chiara De Luca, Giacomo Indiveri, Benjamin Grewe
- Abstract summary: We present a novel learning algorithm for Spiking Neural Networks (SNNs) on mixed-signal devices that integrates spike-based weight updates with feedback control signals. In our framework, a spiking controller generates feedback signals to guide SNN activity and drive weight updates, enabling scalable and local on-chip learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unlike traditional artificial neural networks (ANNs), biological neuronal networks solve complex cognitive tasks with sparse neuronal activity, recurrent connections, and local learning rules. These mechanisms serve as design principles in neuromorphic computing, which addresses the critical challenge of energy consumption in modern computing. However, most mixed-signal neuromorphic devices rely on semi- or unsupervised learning rules, which are ineffective for optimizing hardware in supervised learning tasks. This lack of scalable solutions for on-chip learning restricts the potential of mixed-signal devices to enable sustainable, intelligent edge systems. To address these challenges, we present a novel learning algorithm for Spiking Neural Networks (SNNs) on mixed-signal devices that integrates spike-based weight updates with feedback control signals. In our framework, a spiking controller generates feedback signals to guide SNN activity and drive weight updates, enabling scalable and local on-chip learning. We first evaluate the algorithm on various classification tasks, demonstrating that single-layer SNNs trained with feedback control achieve performance comparable to ANNs. We then assess its implementation on mixed-signal neuromorphic devices by testing network performance in continuous online learning scenarios and evaluating resilience to hyperparameter mismatches. Our results show that the feedback control optimizer is compatible with neuromorphic applications, advancing the potential for scalable, on-chip learning solutions in edge applications.
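The core idea of the abstract — a controller that compares target and actual network activity, injects a corrective feedback signal, and drives purely local, spike-based weight updates — can be sketched in a few lines. The code below is a minimal illustrative sketch, not the paper's implementation: the leaky integrate-and-fire layer, the rate-based feedback stand-in for the spiking controller, and all dimensions and constants (`n_in`, `n_out`, `tau_mem`, `k_fb`, `eta`, etc.) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and constants (illustrative, not from the paper)
n_in, n_out = 20, 5          # input and output population sizes
T, dt = 100, 1.0             # time steps per trial, step size
tau_mem, tau_trace = 10.0, 20.0
v_th, eta, k_fb = 1.0, 1e-3, 0.5

W = rng.normal(0.0, 0.3, size=(n_out, n_in))

def run_trial(spikes_in, target_rate, W):
    """One trial: an LIF layer driven by input spikes plus a feedback
    control current; weights are updated online with a local rule."""
    v = np.zeros(n_out)
    trace_in = np.zeros(n_in)    # presynaptic eligibility trace
    rate_out = np.zeros(n_out)   # low-pass-filtered output activity
    for t in range(T):
        s_in = spikes_in[t]
        trace_in += dt / tau_trace * (-trace_in) + s_in
        # Stand-in for the spiking controller: a feedback signal
        # proportional to the gap between target and actual activity.
        fb = k_fb * (target_rate - rate_out)
        v += dt / tau_mem * (-v) + W @ s_in + fb
        s_out = (v >= v_th).astype(float)
        v[s_out > 0] = 0.0       # reset membrane after a spike
        rate_out += dt / tau_trace * (-rate_out) + s_out
        # Local, spike-based update: feedback error x presynaptic trace,
        # no backpropagated gradients anywhere in the loop.
        W += eta * np.outer(fb, trace_in)
    return rate_out, W

spikes_in = (rng.random((T, n_in)) < 0.1).astype(float)   # Poisson-like input
target = np.array([0.5, 0.0, 0.0, 0.0, 0.0])              # drive unit 0 only
for _ in range(30):
    rate, W = run_trial(spikes_in, target, W)
```

Because the update uses only the feedback signal arriving at a neuron and that neuron's presynaptic trace, every term is locally available at the synapse, which is what makes this family of rules a candidate for on-chip learning on mixed-signal hardware.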
Related papers
- Adaptively Pruned Spiking Neural Networks for Energy-Efficient Intracortical Neural Decoding [0.06181089784338582]
Spiking Neural Networks (SNNs) on neuromorphic hardware have demonstrated remarkable efficiency in neural decoding. We introduce a novel adaptive pruning algorithm specifically designed for SNNs with high activation sparsity, targeting intracortical neural decoding.
arXiv Detail & Related papers (2025-04-15T19:16:34Z) - Peer-to-Peer Learning Dynamics of Wide Neural Networks [10.179711440042123]
We provide an explicit characterization of the learning dynamics of wide neural networks trained using distributed gradient descent (DGD) algorithms. Our results leverage both recent advances in neural tangent kernel (NTK) theory and extensive previous work on distributed learning and consensus tasks.
arXiv Detail & Related papers (2024-09-23T17:57:58Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Expressivity of Neural Networks with Random Weights and Learned Biases [44.02417750529102]
We show that feedforward neural networks with fixed random weights can approximate any continuous function on compact sets. Our findings are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights, as well as for AI.
arXiv Detail & Related papers (2024-07-01T04:25:49Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks [3.7384509727711923]
A major challenge for neuromorphic computing is that learning algorithms for traditional artificial neural networks (ANNs) do not transfer directly to spiking neural networks (SNNs).
In this article, we focus on the self-supervised learning problem of optical flow estimation from event-based camera inputs.
We show that the performance of the proposed ANNs and SNNs is on par with that of the current state-of-the-art ANNs trained in a self-supervised manner.
arXiv Detail & Related papers (2021-06-03T14:03:41Z) - Learning in Deep Neural Networks Using a Biologically Inspired Optimizer [5.144809478361604]
We propose a novel biologically inspired optimizer for artificial neural networks (ANNs) and spiking neural networks (SNNs).
GRAPES implements a weight-distribution dependent modulation of the error signal at each node of the neural network.
We show that this biologically inspired mechanism leads to a systematic improvement of the convergence rate of the network, and substantially improves classification accuracy of ANNs and SNNs.
arXiv Detail & Related papers (2021-04-23T13:50:30Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z) - Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.