Stability Verification in Stochastic Control Systems via Neural Network
Supermartingales
- URL: http://arxiv.org/abs/2112.09495v1
- Date: Fri, 17 Dec 2021 13:05:14 GMT
- Title: Stability Verification in Stochastic Control Systems via Neural Network
Supermartingales
- Authors: Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee,
Thomas A. Henzinger
- Abstract summary: We present an approach for general nonlinear stochastic control problems with two novel aspects.
We use ranking supermartingales (RSMs) to certify a.s. asymptotic stability, and we present a method for learning neural network RSMs.
- Score: 17.558766911646263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of formally verifying almost-sure (a.s.) asymptotic
stability in discrete-time nonlinear stochastic control systems. While
verifying stability in deterministic control systems is extensively studied in
the literature, verifying stability in stochastic control systems is an open
problem. The few existing works on this topic either consider only specialized
forms of stochasticity or make restrictive assumptions on the system, rendering
them inapplicable to learning algorithms with neural network policies. In this
work, we present an approach for general nonlinear stochastic control problems
with two novel aspects: (a) instead of classical stochastic extensions of
Lyapunov functions, we use ranking supermartingales (RSMs) to certify
a.s.~asymptotic stability, and (b) we present a method for learning neural
network RSMs. We prove that our approach guarantees a.s.~asymptotic stability
of the system and provides the first method to obtain bounds on the
stabilization time, which stochastic Lyapunov functions do not. Finally, we
validate our approach experimentally on a set of nonlinear stochastic
reinforcement learning environments with neural network policies.
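To make the certificate concrete, the RSM condition can be sketched as follows (a standard ranking-supermartingale formulation reconstructed from the abstract; the notation and exact constants are ours and may differ from the paper):

```latex
% For dynamics x_{t+1} = f(x_t, \pi(x_t), \omega_t) under policy \pi
% and a target set X_s (e.g. a neighborhood of the equilibrium), a
% function V : X \to [0, \infty) is a ranking supermartingale (RSM)
% if, for some \epsilon > 0 and all x \notin X_s,
\[
  \mathbb{E}_{\omega}\!\left[ V\big(f(x, \pi(x), \omega)\big) \right]
  \;\le\; V(x) - \epsilon .
\]
% The expected decrease by at least \epsilon at every step outside X_s
% forces the process to reach X_s almost surely, and it also bounds the
% expected stabilization time:
\[
  \mathbb{E}\!\left[ T_{X_s} \mid x_0 \right]
  \;\le\; \frac{V(x_0)}{\epsilon} .
\]
```

This expected-decrease bound is what classical stochastic Lyapunov functions lack, and it is the source of the stabilization-time bounds claimed in the abstract.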
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for (L2), (Linfty), and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z)
- Stochastic Reinforcement Learning with Stability Guarantees for Control of Unknown Nonlinear Systems [6.571209126567701]
We propose a reinforcement learning algorithm that stabilizes the system by learning a local linear representation of the dynamics.
We demonstrate the effectiveness of our algorithm on several challenging high-dimensional dynamical systems.
arXiv Detail & Related papers (2024-09-12T20:07:54Z)
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
- Neural Lyapunov Control of Unknown Nonlinear Systems with Stability Guarantees [4.786698731084036]
We propose a learning framework to stabilize an unknown nonlinear system with a neural controller and learn a neural Lyapunov function.
We provide theoretical guarantees of the proposed learning framework in terms of the closed-loop stability for the unknown nonlinear system.
arXiv Detail & Related papers (2022-06-04T05:57:31Z)
- KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z)
- Bayesian Algorithms Learn to Stabilize Unknown Continuous-Time Systems [0.0]
Linear dynamical systems are canonical models for learning-based control of plants with uncertain dynamics.
A reliable stabilization procedure for this purpose that can effectively learn from unstable data to stabilize the system in a finite time is not currently available.
In this work, we propose a novel learning algorithm that stabilizes unknown continuous-time linear systems.
arXiv Detail & Related papers (2021-12-30T15:31:35Z)
- Robust Stability of Neural-Network Controlled Nonlinear Systems with Parametric Variability [2.0199917525888895]
We develop a theory for stability and stabilizability of a class of neural-network controlled nonlinear systems.
To compute such a robust stabilizing NN controller, a stability-guaranteed training (SGT) algorithm is also proposed.
arXiv Detail & Related papers (2021-09-13T05:09:30Z)
- Recurrent Neural Network Controllers Synthesis with Stability Guarantees for Partially Observed Systems [6.234005265019845]
We consider the important class of recurrent neural networks (RNN) as dynamic controllers for nonlinear uncertain partially-observed systems.
We propose a projected policy gradient method that iteratively enforces the stability conditions in the reparametrized space.
Numerical experiments show that our method learns stabilizing controllers while using fewer samples and achieving higher final performance compared with policy gradient.
arXiv Detail & Related papers (2021-09-08T18:21:56Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.