Spiking neurons as predictive controllers of linear systems
- URL: http://arxiv.org/abs/2507.16495v1
- Date: Tue, 22 Jul 2025 11:50:11 GMT
- Title: Spiking neurons as predictive controllers of linear systems
- Authors: Paolo Agliati, André Urbano, Pablo Lanillos, Nasir Ahmad, Marcel van Gerven, Sander Keemink
- Abstract summary: Current spiking control relies on filtering the spike signal to approximate analog control. Here, we provide a scalable method for task-specific spiking control with sparse neural activity. We show that for physically constrained systems, predictive control is required, and the control signal ends up exploiting the passive dynamics of the downstream system to reach a target.
- Score: 1.773217790947073
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neurons communicate with downstream systems via sparse and incredibly brief electrical pulses, or spikes. Using these events, they control various targets such as neuromuscular units, neurosecretory systems, and other neurons in connected circuits. This gave rise to the idea of spiking neurons as controllers, in which spikes are the control signal. Using instantaneous events directly as the control inputs, also called `impulse control', is challenging as it does not scale well to larger networks and has low analytical tractability. Therefore, current spiking control usually relies on filtering the spike signal to approximate analog control. This ultimately means spiking neural networks (SNNs) have to output a continuous control signal, necessitating continuous energy input into downstream systems. Here, we circumvent the need for rate-based representations, providing a scalable method for task-specific spiking control with sparse neural activity. In doing so, we take inspiration from both optimal control and neuroscience theory, and define a spiking rule where spikes are only emitted if they bring a dynamical system closer to a target. From this principle, we derive the required connectivity for an SNN, and show that it can successfully control linear systems. We show that for physically constrained systems, predictive control is required, and the control signal ends up exploiting the passive dynamics of the downstream system to reach a target. Finally, we show that the control method scales to both high-dimensional networks and systems. Importantly, in all cases, we maintain a closed-form mathematical derivation of the network connectivity, the network dynamics and the control objective. This work advances the understanding of SNNs as biologically-inspired controllers, providing insight into how real neurons could exert control, and enabling applications in neuromorphic hardware design.
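The greedy, prediction-based spiking rule described in the abstract can be illustrated in a few lines of code. The sketch below is a toy under stated assumptions, not the paper's derived network: the plant (a damped point mass), the per-spike kick matrix B, the prediction horizon tau, and the firing threshold theta are all illustrative choices. Each candidate neuron fires only if its impulse lowers the predicted distance to the target after the system drifts passively for tau seconds. This also shows why the rule must be predictive for such a physically constrained system: an instantaneous-cost version would never fire, because a velocity kick first moves the state away from the target.

```python
# Minimal sketch of "spike only if it brings the system closer to the target",
# evaluated predictively on a linear plant. All matrices and constants below are
# illustrative assumptions, not the paper's derived connectivity.
import numpy as np
from scipy.linalg import expm

dt, n_steps = 1e-3, 5000       # integration step (s) and number of steps
tau = 2.0                      # prediction horizon (s)

A = np.array([[0.0, 1.0],      # damped point mass: pos' = vel, vel' = -0.5 vel
              [0.0, -0.5]])
B = np.array([[0.0, 0.0],      # two neurons delivering opposite velocity kicks
              [0.2, -0.2]])
Phi = expm(A * tau)            # passive propagator over the prediction horizon

x = np.zeros(2)                # plant state (position, velocity)
x_star = np.array([1.0, 0.0])  # target state
theta = 1e-3                   # firing threshold (required predicted improvement)

def predicted_cost(state):
    """Squared distance to the target after drifting passively for tau seconds."""
    return np.sum((x_star - Phi @ state) ** 2)

spike_times = []
for t in range(n_steps):
    x = x + dt * (A @ x)       # passive drift of the plant (forward Euler)
    for i in range(B.shape[1]):
        # Predictive greedy rule: fire only if the impulse lowers the future
        # cost by more than the threshold; otherwise stay silent.
        if predicted_cost(x + B[:, i]) + theta < predicted_cost(x):
            x = x + B[:, i]    # a spike is an instantaneous jump of the state
            spike_times.append((t * dt, i))

print(f"{len(spike_times)} spikes, final state = {np.round(x, 3)}")
```

Run as written, this sketch should emit a brief burst of accelerating spikes, coast on the plant's own damping, and later fire a few opposing spikes to brake near the target, mirroring the abstract's point that the control signal exploits the passive dynamics of the downstream system.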
Related papers
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of SNNs, and convert it to represent continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- Neural Control: Concurrent System Identification and Control Learning with Neural ODE [13.727727205587804]
We propose a neural ODE-based method for controlling unknown dynamical systems, denoted Neural Control (NC).
Our model concurrently learns the system dynamics as well as the optimal controls that guide the system towards target states.
Our experiments demonstrate the effectiveness of our model for learning optimal control of unknown dynamical systems.
arXiv Detail & Related papers (2024-01-03T17:05:17Z)
- Rational Neural Network Controllers [0.0]
Recent work has demonstrated the effectiveness of neural networks in control systems (known as neural feedback loops).
One of the big challenges of this approach is that neural networks have been shown to be sensitive to adversarial attacks.
This paper considers rational neural networks and presents novel rational activation functions, which can be used effectively in robustness problems for neural feedback loops.
arXiv Detail & Related papers (2023-07-12T16:35:41Z)
- Closed-form control with spike coding networks [1.1470070927586016]
Efficient and robust control using spiking neural networks (SNNs) is still an open problem.
We extend neuroscience theory of Spike Coding Networks (SCNs) by incorporating closed-form optimal estimation and control.
We demonstrate robust spiking control of simulated spring-mass-damper and cart-pole systems.
arXiv Detail & Related papers (2022-12-25T10:32:20Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Neural optimal feedback control with local learning rules [67.5926699124528]
A major problem in motor control is understanding how the brain plans and executes proper movements in the face of delayed and noisy stimuli.
We introduce a novel online algorithm which combines adaptive Kalman filtering with a model-free control approach.
arXiv Detail & Related papers (2021-11-12T20:02:00Z)
- Online-Learning Deep Neuro-Adaptive Dynamic Inversion Controller for Model Free Control [1.3764085113103217]
A neuro-adaptive controller is implemented featuring a deep neural network trained on a new weight update law.
The controller is able to learn the nonlinear plant quickly and displays good performance in the tracking control problem.
arXiv Detail & Related papers (2021-07-21T22:46:03Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Decentralized Control with Graph Neural Networks [147.84766857793247]
We propose a novel framework using graph neural networks (GNNs) to learn decentralized controllers.
GNNs are well-suited for the task since they are naturally distributed architectures and exhibit good scalability and transferability properties.
The problems of flocking and multi-agent path planning are explored to illustrate the potential of GNNs in learning decentralized controllers.
arXiv Detail & Related papers (2020-12-29T18:59:14Z)
- Deep Reinforcement Learning for Neural Control [4.822598110892847]
We present a novel methodology for control of neural circuits based on deep reinforcement learning.
We map neural circuits and their connectome into a grid-world-like setting and infer the actions needed to achieve the desired behavior.
Our framework successfully infers neuropeptidic currents and synaptic architectures for control of chemotaxis.
arXiv Detail & Related papers (2020-06-12T17:41:12Z)
- Graph Neural Networks for Decentralized Controllers [171.6642679604005]
Dynamical systems comprised of autonomous agents arise in many relevant problems such as robotics, smart grids, or smart cities.
Optimal centralized controllers are readily available but face limitations in terms of scalability and practical implementation.
We propose a framework using graph neural networks (GNNs) to learn decentralized controllers from data.
arXiv Detail & Related papers (2020-03-23T13:51:18Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)