Data-Driven Control with Inherent Lyapunov Stability
- URL: http://arxiv.org/abs/2303.03157v2
- Date: Tue, 4 Apr 2023 06:49:50 GMT
- Title: Data-Driven Control with Inherent Lyapunov Stability
- Authors: Youngjae Min, Spencer M. Richards, Navid Azizan
- Abstract summary: We propose Control with Inherent Lyapunov Stability (CoILS) as a method for jointly learning parametric representations of a nonlinear dynamics model and a stabilizing controller from data.
In addition to the stabilizability of the learned dynamics guaranteed by our novel construction, we show that the learned controller stabilizes the true dynamics under certain assumptions on the fidelity of the learned dynamics.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in learning-based control leverage deep function
approximators, such as neural networks, to model the evolution of controlled
dynamical systems over time. However, the problem of learning a dynamics model
and a stabilizing controller persists, since the synthesis of a stabilizing
feedback law for known nonlinear systems is a difficult task, let alone for
complex parametric representations that must be fit to data. To this end, we
propose Control with Inherent Lyapunov Stability (CoILS), a method for jointly
learning parametric representations of a nonlinear dynamics model and a
stabilizing controller from data. To do this, our approach simultaneously
learns a parametric Lyapunov function which intrinsically constrains the
dynamics model to be stabilizable by the learned controller. In addition to the
stabilizability of the learned dynamics guaranteed by our novel construction,
we show that the learned controller stabilizes the true dynamics under certain
assumptions on the fidelity of the learned dynamics. Finally, we demonstrate
the efficacy of CoILS on a variety of simulated nonlinear dynamical systems.
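To make the joint-learning idea concrete, below is a minimal, hypothetical PyTorch sketch: a dynamics model, a controller, and a Lyapunov candidate trained together, with the discrete-time decrease condition V(x+) <= (1 - alpha) V(x) added as a soft penalty. This only illustrates how the three components couple through the data; CoILS itself guarantees stabilizability by construction rather than through a penalty term, and the network shapes, dimensions, and hyperparameters here are placeholders, not values from the paper.
```python
# Hypothetical sketch of jointly learning a dynamics model f, a controller pi,
# and a Lyapunov candidate V from (x, u, x_next) transitions. NOT the exact
# CoILS construction (which enforces the decrease condition by design); the
# soft penalty below only illustrates the coupling between the components.
import torch
import torch.nn as nn

n, m = 4, 2  # state and input dimensions (placeholders)

f = nn.Sequential(nn.Linear(n + m, 64), nn.Tanh(), nn.Linear(64, n))   # dynamics model
pi = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, m))      # controller
V_net = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, 1))   # Lyapunov candidate

def V(x):
    # Positive-definite candidate with V(0) = 0: squared shift of the network
    # output plus a small quadratic term.
    return ((V_net(x) - V_net(torch.zeros_like(x))) ** 2).sum(-1) + 1e-3 * (x ** 2).sum(-1)

params = list(f.parameters()) + list(pi.parameters()) + list(V_net.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
alpha = 0.1  # desired decrease rate (placeholder)

def loss_fn(x, u, x_next):
    pred_loss = ((f(torch.cat([x, u], -1)) - x_next) ** 2).mean()       # model fit to data
    x_cl = f(torch.cat([x, pi(x)], -1))                                 # closed-loop prediction
    decrease = torch.relu(V(x_cl) - (1 - alpha) * V(x)).mean()          # penalize V(x+) > (1-alpha) V(x)
    return pred_loss + decrease

# Typical training step, given a batch (x, u, x_next) of transitions:
# opt.zero_grad(); loss_fn(x, u, x_next).backward(); opt.step()
```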
Related papers
- Learning Over Contracting and Lipschitz Closed-Loops for Partially-Observed Nonlinear Systems (Extended Version) [1.2430809884830318]
This paper presents a policy parameterization for learning-based control on nonlinear, partially-observed dynamical systems.
We prove that the resulting Youla-REN parameterization automatically satisfies stability (contraction) and user-tunable robustness (Lipschitz) conditions.
We find that the Youla-REN performs similarly to existing learning-based and optimal control methods while also ensuring stability and exhibiting improved robustness to adversarial disturbances.
arXiv Detail & Related papers (2023-04-12T23:55:56Z)
- Learning Control-Oriented Dynamical Structure from Data [25.316358215670274]
We discuss a state-dependent nonlinear tracking controller formulation for general nonlinear control-affine systems.
We empirically demonstrate the efficacy of learned versions of this controller in stable trajectory tracking.
arXiv Detail & Related papers (2023-02-06T02:01:38Z) - Active Learning of Discrete-Time Dynamics for Uncertainty-Aware Model Predictive Control [46.81433026280051]
We present a self-supervised learning approach that actively models the dynamics of nonlinear robotic systems.
Our approach showcases high resilience and generalization capabilities by consistently adapting to unseen flight conditions.
arXiv Detail & Related papers (2022-10-23T00:45:05Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate
Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using a feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z)
- Learning Stabilizable Deep Dynamics Models [1.75320459412718]
We propose a new method for learning the dynamics of input-affine control systems.
An important feature is that a stabilizing controller and control Lyapunov function of the learned model are obtained as well.
The proposed method can also be applied to solving Hamilton-Jacobi inequalities.
arXiv Detail & Related papers (2022-03-18T03:09:24Z)
- Recurrent Neural Network Controllers Synthesis with Stability Guarantees for Partially Observed Systems [6.234005265019845]
We consider the important class of recurrent neural networks (RNNs) as dynamic controllers for nonlinear, uncertain, partially-observed systems.
We propose a projected policy gradient method that iteratively enforces the stability conditions in the reparametrized space.
Numerical experiments show that our method learns stabilizing controllers while using fewer samples and achieving higher final performance than policy gradient methods.
arXiv Detail & Related papers (2021-09-08T18:21:56Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- The Impact of Data on the Stability of Learning-Based Control - Extended Version [63.97366815968177]
We propose a Lyapunov-based measure for quantifying the impact of data on the certifiable control performance.
By modeling unknown system dynamics through Gaussian processes, we can determine the interrelation between model uncertainty and satisfaction of stability conditions.
arXiv Detail & Related papers (2020-11-20T19:10:01Z)
- Neural Identification for Control [30.91037635723668]
The proposed method relies on the Lyapunov stability theory to generate a stable closed-loop dynamics hypothesis and corresponding control law.
We demonstrate our method on various nonlinear control problems such as n-link pendulum balancing and trajectory tracking, pendulum on cart balancing, and wheeled vehicle path following.
arXiv Detail & Related papers (2020-09-24T16:17:44Z)
- Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)
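For context on how a stability guarantee can be baked into a learned model over the entire state space, here is a small PyTorch sketch of a common continuous-time construction: a nominal vector field f_hat is projected, pointwise, onto the half-space where a learned positive-definite function V decreases (grad V(x) · f(x) <= -alpha V(x)). Whether this matches the exact construction of the last cited paper is an assumption; the snippet is only an illustration of the "stable by construction" idea.
```python
# Sketch of "stability by projection" for learning globally stable
# continuous-time dynamics: the nominal model f_hat is corrected so that a
# positive-definite V always decreases along trajectories. This is an assumed
# illustration of the general technique, not a verified reproduction of the
# cited paper's construction. Shapes and hyperparameters are placeholders.
import torch
import torch.nn as nn

n = 4  # state dimension (placeholder)
f_hat = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, n))   # unconstrained model
V_net = nn.Sequential(nn.Linear(n, 64), nn.Softplus(), nn.Linear(64, 1))
alpha = 0.1

def V(x):
    # Positive-definite candidate with V(0) = 0.
    return ((V_net(x) - V_net(torch.zeros_like(x))) ** 2).sum(-1) + 1e-3 * (x ** 2).sum(-1)

def f_stable(x):
    # Returns a vector field satisfying grad V(x) . f(x) <= -alpha * V(x).
    x = x.requires_grad_(True)
    v = V(x)
    (grad_v,) = torch.autograd.grad(v.sum(), x, create_graph=True)
    fx = f_hat(x)
    violation = torch.relu((grad_v * fx).sum(-1) + alpha * v)           # how much f_hat violates the decrease condition
    correction = violation / (grad_v ** 2).sum(-1).clamp_min(1e-6)      # scale of the projection step
    return fx - correction.unsqueeze(-1) * grad_v                       # project back onto the decreasing half-space
```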