Learning Over Contracting and Lipschitz Closed-Loops for
Partially-Observed Nonlinear Systems (Extended Version)
- URL: http://arxiv.org/abs/2304.06193v2
- Date: Mon, 13 Nov 2023 23:44:19 GMT
- Title: Learning Over Contracting and Lipschitz Closed-Loops for
Partially-Observed Nonlinear Systems (Extended Version)
- Authors: Nicholas H. Barbara, Ruigang Wang, Ian R. Manchester
- Abstract summary: This paper presents a policy parameterization for learning-based control on nonlinear, partially-observed dynamical systems.
We prove that the resulting Youla-REN parameterization automatically satisfies stability (contraction) and user-tunable robustness (Lipschitz) conditions.
We find that the Youla-REN performs similarly to existing learning-based and optimal control methods while also ensuring stability and exhibiting improved robustness to adversarial disturbances.
- Score: 1.2430809884830318
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a policy parameterization for learning-based control on
nonlinear, partially-observed dynamical systems. The parameterization is based
on a nonlinear version of the Youla parameterization and the recently proposed
Recurrent Equilibrium Network (REN) class of models. We prove that the
resulting Youla-REN parameterization automatically satisfies stability
(contraction) and user-tunable robustness (Lipschitz) conditions on the
closed-loop system. This means it can be used for safe learning-based control
with no additional constraints or projections required to enforce stability or
robustness. We test the new policy class in simulation on two reinforcement
learning tasks: 1) magnetic suspension, and 2) inverting a rotary-arm pendulum.
We find that the Youla-REN performs similarly to existing learning-based and
optimal control methods while also ensuring stability and exhibiting improved
robustness to adversarial disturbances.
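As a loose numerical sketch of the "user-tunable Lipschitz bound" idea only (plain spectral normalization of a feedforward network, not the paper's REN or Youla construction; all names below are illustrative):

```python
import numpy as np

def spectral_scale(W, max_norm):
    """Rescale W so its largest singular value is at most max_norm."""
    s = np.linalg.norm(W, 2)  # matrix 2-norm = top singular value
    return W if s <= max_norm else W * (max_norm / s)

def make_policy(sizes, gamma, rng):
    """Random tanh network with end-to-end Lipschitz bound <= gamma.

    Each of the L weight matrices gets spectral norm <= gamma**(1/L);
    tanh is 1-Lipschitz, so the composition is gamma-Lipschitz by
    submultiplicativity. gamma is the user-tunable robustness level.
    """
    L = len(sizes) - 1
    per_layer = gamma ** (1.0 / L)
    Ws = [spectral_scale(rng.standard_normal((m, n)), per_layer)
          for n, m in zip(sizes[:-1], sizes[1:])]

    def policy(x):
        for W in Ws[:-1]:
            x = np.tanh(W @ x)
        return Ws[-1] @ x  # linear output layer
    return policy

rng = np.random.default_rng(0)
gamma = 2.0
pi = make_policy([4, 16, 16, 2], gamma, rng)

# Empirical check: output increments never exceed gamma * input increments.
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
assert np.linalg.norm(pi(x1) - pi(x2)) <= gamma * np.linalg.norm(x1 - x2) + 1e-9
```

Because the bound holds by construction, no projection or constraint needs to be enforced during training, which is the same "built-in" property the Youla-REN provides for the full closed loop.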
Related papers
- Learning to Boost the Performance of Stable Nonlinear Systems [0.0]
We tackle the performance-boosting problem with closed-loop stability guarantees.
Our methods enable learning over arbitrarily deep neural network classes of performance-boosting controllers for stable nonlinear systems.
arXiv Detail & Related papers (2024-05-01T21:11:29Z)
- Data-Driven Control with Inherent Lyapunov Stability [3.695480271934742]
We propose Control with Inherent Lyapunov Stability (CoILS) as a method for jointly learning parametric representations of a nonlinear dynamics model and a stabilizing controller from data.
In addition to the stabilizability of the learned dynamics guaranteed by our novel construction, we show that the learned controller stabilizes the true dynamics under certain assumptions on the fidelity of the learned dynamics.
arXiv Detail & Related papers (2023-03-06T14:21:42Z)
- KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using a feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z)
- Learning over All Stabilizing Nonlinear Controllers for a Partially-Observed Linear System [4.3012765978447565]
We propose a parameterization of nonlinear output feedback controllers for linear dynamical systems.
Our approach guarantees the closed-loop stability of partially observable linear dynamical systems without requiring any constraints to be satisfied.
arXiv Detail & Related papers (2021-12-08T10:43:47Z)
- Youla-REN: Learning Nonlinear Feedback Policies with Robust Stability Guarantees [5.71097144710995]
This paper presents a parameterization of nonlinear controllers for uncertain systems building on a recently developed neural network architecture.
The proposed framework has "built-in" guarantees of stability, i.e., all policies in the search space result in a contracting (globally exponentially stable) closed-loop system.
arXiv Detail & Related papers (2021-12-02T13:52:37Z)
- Concurrent Learning Based Tracking Control of Nonlinear Systems using Gaussian Process [2.7930955543692817]
This paper demonstrates how concurrent learning, as a tool for parameter estimation, can be combined with a non-parametric Gaussian process for online disturbance learning.
A control law is developed by applying both techniques sequentially in the context of feedback linearization.
Closed-loop stability for the n-th-order system is proven using the Lyapunov stability theorem.
arXiv Detail & Related papers (2021-06-02T02:59:48Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Anticipating the Long-Term Effect of Online Learning in Control [75.6527644813815]
AntLer is a design algorithm for learning-based control laws that anticipates learning.
We show that AntLer approximates an optimal solution arbitrarily accurately with probability one.
arXiv Detail & Related papers (2020-07-24T07:00:14Z)
- Learning Stabilizing Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory [85.29718245299341]
We study linear controllers under the quadratic cost model, also known as linear quadratic regulators (LQR).
We present two different semi-definite programs (SDPs) which result in a controller that stabilizes all systems within an ellipsoidal uncertainty set.
We propose an efficient data-dependent algorithm -- eXploration -- that with high probability quickly identifies a stabilizing controller.
arXiv Detail & Related papers (2020-06-19T08:58:57Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
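Several of the entries above rest on closed-loop contraction (any two trajectories converge to each other exponentially). A minimal numerical sketch on a toy linear example, not drawn from any of the papers listed: a discrete-time closed loop x+ = (A - BK)x contracts in the Euclidean metric whenever the spectral norm of A - BK is below one. The matrices and gain here are illustrative.

```python
import numpy as np

# Toy discrete-time linear closed loop: x+ = (A - B K) x.
A = np.array([[1.2, 0.3],
              [0.0, 0.9]])          # open-loop unstable (eigenvalue 1.2)
B = np.array([[1.0],
              [0.5]])
K = np.array([[0.8, 0.2]])          # an illustrative stabilizing gain

Acl = A - B @ K
rate = np.linalg.norm(Acl, 2)       # contraction rate in the Euclidean metric
assert rate < 1.0                   # closed loop contracts

# Any two trajectories approach each other at least geometrically.
x, z = np.array([5.0, -3.0]), np.array([-2.0, 4.0])
d0 = np.linalg.norm(x - z)
for _ in range(20):
    x, z = Acl @ x, Acl @ z
assert np.linalg.norm(x - z) <= rate**20 * d0 + 1e-9
```

For the nonlinear, partially-observed setting of the main paper, this simple norm test is replaced by the REN's built-in contraction certificate, so the property holds for every policy in the search space rather than being checked after the fact.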
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.