Synthesizing Neural Network Controllers with Closed-Loop Dissipativity Guarantees
- URL: http://arxiv.org/abs/2404.07373v1
- Date: Wed, 10 Apr 2024 22:15:28 GMT
- Title: Synthesizing Neural Network Controllers with Closed-Loop Dissipativity Guarantees
- Authors: Neelay Junnarkar, Murat Arcak, Peter Seiler
- Abstract summary: The class of plants considered is that of linear time-invariant (LTI) systems interconnected with an uncertainty.
The uncertainty of the plant and the nonlinearities of the neural network are both described using integral quadratic constraints.
A convex condition is used in a projection-based training method to synthesize neural network controllers with dissipativity guarantees.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, a method is presented to synthesize neural network controllers such that the feedback system of plant and controller is dissipative, certifying performance requirements such as L2 gain bounds. The class of plants considered is that of linear time-invariant (LTI) systems interconnected with an uncertainty, including nonlinearities treated as an uncertainty for convenience of analysis. The uncertainty of the plant and the nonlinearities of the neural network are both described using integral quadratic constraints (IQCs). First, a dissipativity condition is derived for uncertain LTI systems. Second, this condition is used to construct a linear matrix inequality (LMI) which can be used to synthesize neural network controllers. Finally, this convex condition is used in a projection-based training method to synthesize neural network controllers with dissipativity guarantees. Numerical examples on an inverted pendulum and a flexible rod on a cart are provided to demonstrate the effectiveness of this approach.
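For readers who want a concrete feel for this kind of certificate, the sketch below assembles the classical bounded-real-lemma LMI for a first-order LTI system and checks it numerically for a candidate storage matrix P. This is a simplification, not the paper's IQC-based condition; the system matrices, P, and the gain bound are illustrative assumptions.

```python
import numpy as np

def bounded_real_lmi(A, B, C, D, P, gamma):
    """Assemble the bounded-real-lemma LMI matrix. If P > 0 and this
    matrix is negative definite, the L2 gain of the LTI system
    (A, B, C, D) is certified to be below gamma."""
    m = B.shape[1]
    return np.block([
        [A.T @ P + P @ A + C.T @ C, P @ B + C.T @ D],
        [B.T @ P + D.T @ C,         D.T @ D - gamma ** 2 * np.eye(m)],
    ])

# First-order example xdot = -x + w, z = x: the true L2 gain is 1.
A, B, C, D = (np.array([[v]]) for v in (-1.0, 1.0, 1.0, 0.0))
P = np.array([[1.0]])  # candidate storage-function matrix (illustrative)
M = bounded_real_lmi(A, B, C, D, P, gamma=1.1)
certified = bool(np.all(np.linalg.eigvalsh(M) < 0))
print(certified)  # True: the gain bound of 1.1 is certified
```

In the paper's setting, an analogous (but IQC-augmented) convex condition is what the training method enforces on the controller parameters; the sketch only shows the basic shape of such a certificate.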
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
However, Lyapunov stability guarantees over a region of attraction (ROA) are challenging to obtain for NN controllers acting on nonlinear dynamical systems.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z) - Sub-linear Regret in Adaptive Model Predictive Control [56.705978425244496]
We present STT-MPC (Self-Tuning Tube-based Model Predictive Control), an adaptive control algorithm that combines the certainty-equivalence principle with polytopic tubes.
We analyze the regret of the algorithm relative to an oracle algorithm that knows the system dynamics from the start.
arXiv Detail & Related papers (2023-10-07T15:07:10Z) - Robust Recurrent Neural Network to Identify Ship Motion in Open Water with Performance Guarantees -- Technical Report [8.441687388985162]
Recurrent neural networks are capable of learning the dynamics of an unknown nonlinear system purely from input-output measurements.
In this work, we represent a recurrent neural network as a linear time-invariant system with nonlinear disturbances.
arXiv Detail & Related papers (2022-12-12T09:07:37Z) - Synthesis of Stabilizing Recurrent Equilibrium Network Controllers [1.3799488979862031]
We propose a parameterization of a nonlinear dynamic controller based on the recurrent equilibrium network, a generalization of the recurrent neural network.
We derive constraints on the parameterization under which the controller guarantees exponential stability of a partially observed dynamical system with sector-bounded nonlinearities.
We present a method to synthesize this controller using projected policy gradient methods to maximize a reward function with arbitrary structure.
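The projection step in such methods can be sketched generically. In the toy example below (an illustration only, not the paper's actual projection onto its stability constraints), a Euclidean norm ball stands in for the feasible parameter set, and each gradient step is followed by a projection back onto it.

```python
import numpy as np

def project(theta, radius=1.0):
    """Euclidean projection onto the ball ||theta|| <= radius,
    a stand-in for projecting onto the constraint set."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

theta = np.array([0.0])
for _ in range(200):
    grad = 2.0 * (theta - 3.0)           # gradient of the loss (theta - 3)^2
    theta = project(theta - 0.1 * grad)  # projected gradient step
print(theta)  # settles on the constraint boundary, [1.0]
```

The unconstrained minimizer (theta = 3) is infeasible, so the iterates converge to the nearest feasible point; the same mechanism, with a harder projection, keeps the controller parameters inside the certified set during training.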
arXiv Detail & Related papers (2022-03-31T22:27:51Z) - A Theoretical Overview of Neural Contraction Metrics for Learning-based Control with Guaranteed Stability [7.963506386866862]
This paper presents a neural network model of an optimal contraction metric and corresponding differential Lyapunov function.
Its innovation lies in providing formal robustness guarantees for learning-based control frameworks.
arXiv Detail & Related papers (2021-10-02T00:28:49Z) - Sampling asymmetric open quantum systems for artificial neural networks [77.34726150561087]
We present a hybrid sampling strategy which takes asymmetric properties explicitly into account, achieving fast convergence times and high scalability for asymmetric open systems.
We highlight the universal applicability of artificial neural networks to this setting.
arXiv Detail & Related papers (2020-12-20T18:25:29Z) - Certifying Incremental Quadratic Constraints for Neural Networks via Convex Optimization [2.388501293246858]
We propose a convex program to certify incremental quadratic constraints on the map of neural networks over a region of interest.
These certificates can capture several useful properties such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction.
arXiv Detail & Related papers (2020-12-10T21:15:00Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
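As a generic illustration of the min-max training idea (a toy saddle problem solved by simultaneous gradient descent-ascent, not the paper's SEM estimator):

```python
# Toy min-max problem min_x max_y f(x, y) = x^2 - y^2 + x*y,
# whose unique saddle point is (0, 0).
x, y = 2.0, -1.5
eta = 0.05
for _ in range(500):
    gx = 2.0 * x + y      # df/dx for the minimizing player
    gy = -2.0 * y + x     # df/dy for the maximizing player
    x, y = x - eta * gx, y + eta * gy  # simultaneous descent-ascent
print(x, y)  # both players approach the saddle point at 0
```

With neural networks in place of the scalars x and y, the same alternating update pattern is what adversarial estimation procedures run in practice, though convergence is then far more delicate.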
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Efficient Proximal Mapping of the 1-path-norm of Shallow Networks [47.20962674178505]
We show two new important properties of the 1-path-norm of shallow neural networks.
First, despite its non-smoothness and non-convexity, it admits a closed-form proximal operator that can be computed efficiently.
Second, when the activation functions are differentiable, it provides an upper bound on the Lipschitz constant.
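For a shallow net with scalar input and output, the 1-path-norm reduces to the sum of |v_i * w_i| over hidden units, and the Lipschitz-bound property is easy to check numerically (a sketch under that simplifying assumption, not the paper's general construction):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)  # input-to-hidden weights (scalar input)
v = rng.normal(size=5)  # hidden-to-output weights (scalar output)

def f(x):
    return v @ np.maximum(w * x, 0.0)  # one-hidden-layer ReLU net

path_norm = np.sum(np.abs(v * w))      # 1-path-norm in this 1-D case

# Empirical difference quotients never exceed the 1-path-norm bound,
# since ReLU is 1-Lipschitz and the triangle inequality applies.
xs = rng.normal(size=20)
slopes = [abs(f(a) - f(b)) / abs(a - b) for a in xs[:10] for b in xs[10:]]
print(max(slopes) <= path_norm)  # True
```

The bound is generally conservative: the true Lipschitz constant is the largest slope over the finitely many linear regions, which is at most the path norm.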
arXiv Detail & Related papers (2020-07-02T10:34:06Z) - Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z) - Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming [19.51345816555571]
We propose a novel forward reachability analysis method for the safety verification of linear time-varying systems with neural networks in feedback.
We show that we can compute these approximate reachable sets using semidefinite programming.
We illustrate our method in a quadrotor example, in which we first approximate a nonlinear model predictive controller via a deep neural network and then apply our analysis tool to certify finite-time reachability and constraint satisfaction of the closed-loop system.
arXiv Detail & Related papers (2020-04-16T18:48:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.