Neural Lyapunov Control of Unknown Nonlinear Systems with Stability
Guarantees
- URL: http://arxiv.org/abs/2206.01913v1
- Date: Sat, 4 Jun 2022 05:57:31 GMT
- Authors: Ruikun Zhou, Thanin Quartz, Hans De Sterck, Jun Liu
- Abstract summary: We propose a learning framework to stabilize an unknown nonlinear system with a neural controller and learn a neural Lyapunov function.
We provide theoretical guarantees of the proposed learning framework in terms of the closed-loop stability for the unknown nonlinear system.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning for control of dynamical systems with formal guarantees remains a
challenging task. This paper proposes a learning framework to simultaneously
stabilize an unknown nonlinear system with a neural controller and learn a
neural Lyapunov function to certify a region of attraction (ROA) for the
closed-loop system. The algorithmic structure consists of two neural networks
and a satisfiability modulo theories (SMT) solver. The first neural network is
responsible for learning the unknown dynamics. The second neural network aims
to identify a valid Lyapunov function and a provably stabilizing nonlinear
controller. The SMT solver then verifies that the candidate Lyapunov function
indeed satisfies the Lyapunov conditions. We provide theoretical guarantees of
the proposed learning framework in terms of the closed-loop stability for the
unknown nonlinear system. We illustrate the effectiveness of the approach with
a set of numerical experiments.
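The division of labor described in the abstract — a learned dynamics model, a candidate Lyapunov function, and a verifier that checks the Lyapunov conditions V(0) = 0, V(x) > 0, and dV/dt < 0 away from the origin — can be illustrated with a toy sketch. Everything below is hypothetical: a hand-picked stable system stands in for the learned dynamics composed with the neural controller, a quadratic V stands in for the Lyapunov network, and random sampling stands in for the SMT solver (which makes the check exact rather than merely empirical).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in closed-loop dynamics x_dot = f(x); in the paper's framework this
# would be the learned dynamics network composed with the neural controller.
def f(x):
    return np.array([-x[0] + x[1], -x[0] - x[1]])

# Candidate Lyapunov function (stand-in for the second neural network).
def V(x):
    return float(x @ x)

# Lie derivative of V along f: dV/dt = grad V(x) . f(x) = 2 x . f(x).
def Vdot(x):
    return float(2.0 * x @ f(x))

# Sampling-based check of the Lyapunov conditions on a box around the
# origin; the paper discharges these conditions with an SMT solver
# (e.g. dReal), turning the empirical check into a formal certificate.
assert V(np.zeros(2)) == 0.0
for _ in range(10_000):
    x = rng.uniform(-1.0, 1.0, 2)
    if np.linalg.norm(x) < 1e-6:
        continue
    assert V(x) > 0.0 and Vdot(x) < 0.0
```

For this particular f, dV/dt = -2(x0² + x1²), so the conditions hold exactly; a neural V would instead be trained against sampled violations and then verified symbolically.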
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
- Synthesizing Neural Network Controllers with Closed-Loop Dissipativity Guarantees [0.6612847014373572]
The class of plants considered is that of linear time-invariant (LTI) systems interconnected with an uncertainty.
The uncertainty of the plant and the nonlinearities of the neural network are both described using integral quadratic constraints.
A convex condition is used in a projection-based training method to synthesize neural network controllers with dissipativity guarantees.
arXiv Detail & Related papers (2024-04-10T22:15:28Z)
- Neural Lyapunov Control for Discrete-Time Systems [30.135651803114307]
A general approach is to compute a combination of a Lyapunov function and an associated control policy.
Several methods have been proposed that represent Lyapunov functions using neural networks.
We propose the first approach for learning neural Lyapunov control in a broad class of discrete-time systems.
arXiv Detail & Related papers (2023-05-11T03:28:20Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z)
- Neural Koopman Lyapunov Control [0.0]
We propose a framework to identify and construct stabilizable bilinear control systems and their associated observables from data.
Our proposed approach provides provable guarantees of global stability for the nonlinear control systems with unknown dynamics.
arXiv Detail & Related papers (2022-01-13T17:38:09Z)
- Stability Verification in Stochastic Control Systems via Neural Network Supermartingales [17.558766911646263]
We present an approach for general nonlinear control problems with two novel aspects.
We use ranking supermartingales (RSMs) to certify almost-sure asymptotic stability, and we present a method for learning them as neural networks.
arXiv Detail & Related papers (2021-12-17T13:05:14Z)
- A Theoretical Overview of Neural Contraction Metrics for Learning-based Control with Guaranteed Stability [7.963506386866862]
This paper presents a neural network model of an optimal contraction metric and corresponding differential Lyapunov function.
Its innovation lies in providing formal robustness guarantees for learning-based control frameworks.
arXiv Detail & Related papers (2021-10-02T00:28:49Z)
- Learning the Linear Quadratic Regulator from Nonlinear Observations [135.66883119468707]
We introduce a new problem setting for continuous control called the LQR with Rich Observations, or RichLQR.
In our setting, the environment is summarized by a low-dimensional continuous latent state with linear dynamics and quadratic costs.
Our results constitute the first provable sample complexity guarantee for continuous control with an unknown nonlinearity in the system model and general function approximation.
arXiv Detail & Related papers (2020-10-08T07:02:47Z)
- Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees.
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
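The counterexample-guided interaction between a numerical learner and a verifier, common to several of the papers above, can be sketched in miniature. This is a toy sketch under stated assumptions: a known linear system, a diagonal quadratic Lyapunov template in place of a neural network, and a sampling-based falsifier in place of the symbolic verifier that the cited work uses to make the loop exact.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative known dynamics (the cited work handles general systems):
def f(x):
    return np.array([-x[0], -2.0 * x[1]])

# Lyapunov template V(x) = a*x0^2 + b*x1^2 with parameters theta = (a, b).
def V(x, th):
    return th[0] * x[0]**2 + th[1] * x[1]**2

def Vdot(x, th):
    dx = f(x)
    return 2.0 * th[0] * x[0] * dx[0] + 2.0 * th[1] * x[1] * dx[1]

theta = np.array([1.0, -0.5])   # deliberately invalid initial candidate
eps, lr = 1e-3, 0.1

for _ in range(200):
    # Verifier step: look for a point violating the Lyapunov conditions
    # (a sampling-based falsifier standing in for a symbolic verifier).
    cex = None
    for _ in range(500):
        x = rng.uniform(-1.0, 1.0, 2)
        if np.linalg.norm(x) < 0.1:
            continue  # exclude a small ball around the origin
        if V(x, theta) <= eps or Vdot(x, theta) >= -eps:
            cex = x
            break
    if cex is None:
        break  # verifier found no violation: candidate accepted

    # Learner step: gradient update raising V and lowering Vdot at cex.
    sq = cex**2
    grad = np.zeros(2)
    if V(cex, theta) <= eps:
        grad -= sq                            # dV/dtheta = (x0^2, x1^2)
    if Vdot(cex, theta) >= -eps:
        grad += np.array([-2.0, -4.0]) * sq   # dVdot/dtheta
    theta -= lr * grad
```

Each counterexample pushes the parameters toward a valid certificate; replacing the sampler with an SMT or SOS verifier is what upgrades "no violation found" to "no violation exists".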
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.