Distributed neural network control with dependability guarantees: a
compositional port-Hamiltonian approach
- URL: http://arxiv.org/abs/2112.09046v1
- Date: Thu, 16 Dec 2021 17:37:11 GMT
- Title: Distributed neural network control with dependability guarantees: a
compositional port-Hamiltonian approach
- Authors: Luca Furieri, Clara Lucía Galimberti, Muhammad Zakwan, Giancarlo
Ferrari-Trecate
- Abstract summary: Large-scale cyber-physical systems require that control policies are distributed, that is, that they only rely on local real-time measurements and communication with neighboring agents.
Recent work has proposed training Neural Network (NN) distributed controllers.
A main challenge of NN controllers is that they are not dependable during and after training, that is, the closed-loop system may be unstable, and the training may fail due to vanishing and exploding gradients.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale cyber-physical systems require that control policies are
distributed, that is, that they only rely on local real-time measurements and
communication with neighboring agents. Optimal Distributed Control (ODC)
problems are, however, highly intractable even in seemingly simple cases.
Recent work has thus proposed training Neural Network (NN) distributed
controllers. A main challenge of NN controllers is that they are not dependable
during and after training, that is, the closed-loop system may be unstable, and
the training may fail due to vanishing and exploding gradients. In this paper,
we address these issues for networks of nonlinear port-Hamiltonian (pH)
systems, whose modeling power ranges from energy systems to non-holonomic
vehicles and chemical reactions. Specifically, we embrace the compositional
properties of pH systems to characterize deep Hamiltonian control policies with
built-in closed-loop stability guarantees, irrespective of the interconnection
topology and the chosen NN parameters. Furthermore, our setup enables
leveraging recent results on well-behaved neural ODEs to prevent the phenomenon
of vanishing gradients by design. Numerical experiments corroborate the
dependability of the proposed architecture, while matching the performance of
general neural network policies.
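For readers who want a concrete feel for the structural idea, here is a minimal PyTorch sketch (class and variable names such as `HamiltonianDynamics` are invented here, and this is not the paper's exact architecture) of a state-space model x_dot = (J - R) grad H_theta(x) with an NN-parameterized energy H_theta, a skew-symmetric J, and a positive semidefinite R, so that dH/dt <= 0 holds for any choice of NN weights:

```python
# Minimal sketch (not the paper's exact architecture): a neural state-space
# model x_dot = (J - R) * grad H_theta(x), where H_theta is an NN-parameterized
# energy, J is skew-symmetric and R is positive semidefinite. This structure
# gives dH/dt = grad_H^T (J - R) grad_H = -grad_H^T R grad_H <= 0 for any
# NN weights, mirroring the parameter-independent guarantee in the abstract.
import torch
import torch.nn as nn

class HamiltonianDynamics(nn.Module):
    def __init__(self, n, hidden=32):
        super().__init__()
        self.energy = nn.Sequential(
            nn.Linear(n, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        # Free parameters A, B; J and R below are structured from them.
        self.A = nn.Parameter(0.1 * torch.randn(n, n))
        self.B = nn.Parameter(0.1 * torch.randn(n, n))

    def forward(self, x):
        # x: (batch, n). Compute grad_x H_theta(x) via autograd.
        x = x.requires_grad_(True)
        H = self.energy(x).sum()
        gradH = torch.autograd.grad(H, x, create_graph=True)[0]
        J = self.A - self.A.T            # skew-symmetric by construction
        R = self.B @ self.B.T            # positive semidefinite by construction
        return gradH @ (J - R).T         # x_dot for each batch element

dyn = HamiltonianDynamics(n=4)
x = torch.randn(8, 4)
xdot = dyn(x)                            # one Euler step would be x + dt * xdot
print(xdot.shape)                        # torch.Size([8, 4])
```

The guarantee comes from the J/R structure rather than from training, which is what makes the stability claim independent of the chosen NN parameters.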
Related papers
- Building Hybrid B-Spline And Neural Network Operators [0.0]
Control systems are indispensable for ensuring the safety of cyber-physical systems (CPS).
We propose a novel strategy that combines the inductive bias of B-splines with data-driven neural networks to facilitate real-time predictions of CPS behavior.
arXiv Detail & Related papers (2024-06-06T21:54:59Z)
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
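As a rough illustration of the certificate-learning idea summarized above (the cited paper additionally uses fast empirical falsification and formal verification, which this sketch omits; the toy dynamics `f` and all names are placeholders), a controller and a Lyapunov candidate can be trained jointly with penalties for positivity and decrease on sampled states:

```python
# Hedged sketch of joint controller/Lyapunov-certificate training on sampled
# states; `f`, `V_net`, and `pi_net` are illustrative only.
import torch
import torch.nn as nn

def f(x, u):
    # Toy nonlinear dynamics (placeholder for the system of interest).
    return torch.stack([x[:, 1], -torch.sin(x[:, 0]) + u[:, 0]], dim=1)

pi_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
V_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(list(pi_net.parameters()) + list(V_net.parameters()), lr=1e-3)

for step in range(2000):
    x = (4 * torch.rand(256, 2) - 2).requires_grad_(True)    # samples in a box
    V = V_net(x) - V_net(torch.zeros(1, 2))                   # enforce V(0) = 0
    gradV = torch.autograd.grad(V.sum(), x, create_graph=True)[0]
    Vdot = (gradV * f(x, pi_net(x))).sum(dim=1, keepdim=True)
    loss = (torch.relu(-V + 0.01 * x.norm(dim=1, keepdim=True) ** 2).mean()
            + torch.relu(Vdot + 0.1 * V).mean())               # positivity + decrease
    opt.zero_grad(); loss.backward(); opt.step()
```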
- Parameter-Adaptive Approximate MPC: Tuning Neural-Network Controllers without Retraining [50.00291020618743]
This work introduces a novel, parameter-adaptive AMPC architecture capable of online tuning without recomputing large datasets and retraining.
We showcase the effectiveness of parameter-adaptive AMPC by controlling the swing-ups of two different real cartpole systems with a severely resource-constrained microcontroller (MCU).
Taken together, these contributions represent a marked step toward the practical application of AMPC in real-world systems.
arXiv Detail & Related papers (2024-04-08T20:02:19Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
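A hedged sketch of the low-rank idea referenced above (this is a plain recurrent cell, not the closed-form continuous-time model of the paper): factoring the hidden-to-hidden matrix as U V^T with rank r much smaller than the hidden size cuts its parameter count from n^2 to 2nr.

```python
# Illustrative low-rank recurrent cell (not the CfC model of the cited paper):
# the hidden-to-hidden matrix is factored as U @ V.T with rank r << n,
# so it has 2*n*r parameters instead of n*n.
import torch
import torch.nn as nn

class LowRankRNNCell(nn.Module):
    def __init__(self, n_in, n_hidden, rank):
        super().__init__()
        self.U = nn.Parameter(0.1 * torch.randn(n_hidden, rank))
        self.V = nn.Parameter(0.1 * torch.randn(n_hidden, rank))
        self.W_in = nn.Linear(n_in, n_hidden)

    def forward(self, x, h):
        # Recurrent update with the factored connectivity U V^T.
        return torch.tanh(self.W_in(x) + h @ (self.U @ self.V.T).T)

cell = LowRankRNNCell(n_in=3, n_hidden=64, rank=4)
h = torch.zeros(1, 64)
for t in range(10):                       # roll out a short closed-loop episode
    obs = torch.randn(1, 3)               # stand-in for sensor measurements
    h = cell(obs, h)
print(sum(p.numel() for p in cell.parameters()))  # 768 vs. 4096 for a dense 64x64 matrix alone
```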
- Safety Filter Design for Neural Network Systems via Convex Optimization [35.87465363928146]
We propose a novel safety filter that relies on convex optimization to ensure safety for a neural network (NN) system.
We demonstrate the efficacy of the proposed framework numerically on a nonlinear pendulum system.
arXiv Detail & Related papers (2023-08-16T01:30:13Z)
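The core mechanism can be illustrated with a generic projection-style filter (a simplification of the cited approach, which also accounts for the NN system dynamics; `G`, `h`, and `u_nn` below are made-up placeholders): solve a small QP that returns the feasible action closest to the NN's proposal.

```python
# Generic projection-style safety filter (a simplification of the cited
# approach): given the NN's proposed action u_nn, solve a small QP that
# returns the closest action satisfying the convex constraints G u <= h.
import numpy as np
import cvxpy as cp

def safety_filter(u_nn, G, h):
    u = cp.Variable(u_nn.shape[0])
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nn)), [G @ u <= h])
    prob.solve()
    return u.value

u_nn = np.array([1.5, -0.2])                  # raw NN output
G = np.vstack([np.eye(2), -np.eye(2)])        # box constraint |u_i| <= 1
h = np.ones(4)
print(safety_filter(u_nn, G, h))              # approximately [1.0, -0.2]
```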
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Closed-form control with spike coding networks [1.1470070927586016]
Efficient and robust control using spiking neural networks (SNNs) is still an open problem.
We extend neuroscience theory of Spike Coding Networks (SCNs) by incorporating closed-form optimal estimation and control.
We demonstrate robust spiking control of simulated spring-mass-damper and cart-pole systems.
arXiv Detail & Related papers (2022-12-25T10:32:20Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Neural Network Optimal Feedback Control with Guaranteed Local Stability [2.8725913509167156]
We show that some neural network (NN) controllers with high test accuracy can fail to even locally stabilize the dynamic system.
We propose several novel NN architectures, which we show guarantee local stability while retaining the semi-global approximation capacity to learn the optimal feedback policy.
arXiv Detail & Related papers (2022-05-01T04:23:24Z)
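One generic construction that yields such a guarantee, shown below purely for illustration and not necessarily the paper's architecture, is u(x) = -Kx + ||x||^2 phi_theta(x): the NN correction vanishes quadratically at the origin, so the closed-loop linearization equals A - BK for any NN weights, and local asymptotic stability follows from Lyapunov's indirect method (assuming a precomputed stabilizing LQR gain K).

```python
# One generic way to bake in local stability (not necessarily the paper's
# construction): u(x) = -K x + ||x||^2 * phi_theta(x), where K is an assumed
# LQR gain for the linearization. The NN term is O(||x||^2), so the
# closed-loop linearization at the origin is A - B K regardless of the NN.
import torch
import torch.nn as nn

class LocallyStableController(nn.Module):
    def __init__(self, K, hidden=32):
        super().__init__()
        self.register_buffer("K", torch.as_tensor(K, dtype=torch.float32))
        n, m = self.K.shape[1], self.K.shape[0]
        self.phi = nn.Sequential(nn.Linear(n, hidden), nn.Tanh(),
                                 nn.Linear(hidden, m), nn.Tanh())

    def forward(self, x):
        lqr = -x @ self.K.T
        correction = (x.norm(dim=1, keepdim=True) ** 2) * self.phi(x)
        return lqr + correction

K = [[1.0, 1.7]]                         # hypothetical LQR gain (1 input, 2 states)
ctrl = LocallyStableController(K)
print(ctrl(torch.randn(5, 2)).shape)     # torch.Size([5, 1])
```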
- Neural network optimal feedback control with enhanced closed loop stability [3.0981875303080795]
Recent research has shown that supervised learning can be an effective tool for designing optimal feedback controllers for high-dimensional nonlinear dynamic systems.
But the behavior of these neural network (NN) controllers is still not well understood.
In this paper we use numerical simulations to demonstrate that typical test accuracy metrics do not effectively capture the ability of an NN controller to stabilize a system.
arXiv Detail & Related papers (2021-09-15T17:59:20Z)
- Decentralized Control with Graph Neural Networks [147.84766857793247]
We propose a novel framework using graph neural networks (GNNs) to learn decentralized controllers.
GNNs are well-suited for the task since they are naturally distributed architectures and exhibit good scalability and transferability properties.
The problems of flocking and multi-agent path planning are explored to illustrate the potential of GNNs in learning decentralized controllers.
arXiv Detail & Related papers (2020-12-29T18:59:14Z)
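To make the "naturally distributed" point concrete, here is a minimal one-hop graph-convolution-style controller (illustrative only, not the paper's exact model; all names are placeholders): each agent's action depends only on its own state and an aggregate of its neighbors' states, so it can be evaluated with purely local communication.

```python
# Minimal one-hop GNN controller (illustrative, not the paper's exact model):
# each agent computes its action from its own state and an aggregate of its
# neighbors' states, so evaluation only needs local communication.
import torch
import torch.nn as nn

class GNNController(nn.Module):
    def __init__(self, n_state, n_ctrl, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * n_state, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_ctrl))

    def forward(self, X, A):
        # X: (num_agents, n_state) stacked agent states;
        # A: (num_agents, num_agents) adjacency matrix of the communication graph.
        neighbor_sum = A @ X                       # one hop of local aggregation
        return self.mlp(torch.cat([X, neighbor_sum], dim=1))

A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # 3-agent path graph
ctrl = GNNController(n_state=4, n_ctrl=2)
U = ctrl(torch.randn(3, 4), A)                     # per-agent control inputs
print(U.shape)                                     # torch.Size([3, 2])
```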