A Theoretical Overview of Neural Contraction Metrics for Learning-based
Control with Guaranteed Stability
- URL: http://arxiv.org/abs/2110.00693v1
- Date: Sat, 2 Oct 2021 00:28:49 GMT
- Title: A Theoretical Overview of Neural Contraction Metrics for Learning-based
Control with Guaranteed Stability
- Authors: Hiroyasu Tsukamoto and Soon-Jo Chung and Jean-Jacques Slotine and
Chuchu Fan
- Abstract summary: This paper presents a neural network model of an optimal contraction metric and corresponding differential Lyapunov function.
Its innovation lies in providing formal robustness guarantees for learning-based control frameworks.
- Score: 7.963506386866862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a theoretical overview of a Neural Contraction Metric
(NCM): a neural network model of an optimal contraction metric and
corresponding differential Lyapunov function, the existence of which is a
necessary and sufficient condition for incremental exponential stability of
non-autonomous nonlinear system trajectories. Its innovation lies in providing
formal robustness guarantees for learning-based control frameworks, utilizing
contraction theory as an analytical tool to study the nonlinear stability of
learned systems via convex optimization. In particular, we rigorously show in
this paper that, by regarding modeling errors of the learning schemes as
external disturbances, the NCM control is capable of obtaining an explicit
bound on the distance between a time-varying target trajectory and perturbed
solution trajectories, which decreases exponentially with time even in the
presence of deterministic and stochastic perturbations. These useful features
permit simultaneous synthesis of a contraction metric and associated control
law by a neural network, thereby enabling real-time computable and provably
robust learning-based control for general control-affine nonlinear systems.
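For orientation, the conditions behind these guarantees take a standard form in contraction theory; the following schematic restatement is an editorial aid, with constants and exact assumptions deferred to the paper itself:

```latex
% Schematic form of the contraction conditions behind the abstract's claims
% (constants and exact assumptions are as defined in the paper).
% For \dot{x} = f(x,t) and a metric \underline{m} I \preceq M(x,t) \preceq \overline{m} I:
\dot{M} + M \frac{\partial f}{\partial x} + \frac{\partial f}{\partial x}^{\top} M \preceq -2\alpha M
% certifies incremental exponential stability at rate \alpha. For perturbed
% solutions \dot{x} = f(x,t) + d(t) with \|d\| \leq \bar{d}, the tracking error
% admits a bound of the form
\|x(t) - x_d(t)\| \leq \sqrt{\overline{m}/\underline{m}}\, \|x(0) - x_d(0)\|\, e^{-\alpha t}
  + \frac{\bar{d}}{\alpha} \sqrt{\overline{m}/\underline{m}} \left(1 - e^{-\alpha t}\right)
% i.e., it decreases exponentially to a ball whose radius scales with \bar{d}.
```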
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
- Synthesizing Neural Network Controllers with Closed-Loop Dissipativity Guarantees [0.6612847014373572]
The class of plants considered is that of linear time-invariant (LTI) systems interconnected with an uncertainty.
The uncertainty of the plant and the nonlinearities of the neural network are both described using integral quadratic constraints.
A convex condition is used in a projection-based training method to synthesize neural network controllers with dissipativity guarantees.
arXiv Detail & Related papers (2024-04-10T22:15:28Z)
- Contraction Theory for Nonlinear Stability Analysis and Learning-based Control: A Tutorial Overview [17.05002635077646]
Contraction theory is an analytical tool to study differential dynamics of a non-autonomous (i.e., time-varying) nonlinear system.
Its nonlinear stability analysis boils down to finding a suitable contraction metric that satisfies a stability condition expressed as a linear matrix inequality.
arXiv Detail & Related papers (2021-10-01T23:03:21Z)
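As a minimal numerical illustration of the LMI condition mentioned in the tutorial entry above (a hypothetical sketch, not code from the tutorial): for linear dynamics x' = Ax, the search for a constant contraction metric M at a given rate alpha is a semidefinite program. The matrix A and the rate alpha below are illustrative assumptions.

```python
# Minimal hypothetical sketch (not code from the tutorial): for linear
# dynamics x' = A x, a constant contraction metric M with rate alpha must
# satisfy the LMI  A^T M + M A + 2*alpha*M << 0  with  M >> I, which is a
# semidefinite program. A and alpha below are illustrative assumptions.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # stable example system (eigenvalues -1, -2)
alpha = 0.5                   # requested contraction rate
n = A.shape[0]

M = cp.Variable((n, n), symmetric=True)
constraints = [
    M >> np.eye(n),                          # uniform positive definiteness
    A.T @ M + M @ A + 2 * alpha * M << 0,    # contraction LMI at rate alpha
]
problem = cp.Problem(cp.Minimize(cp.lambda_max(M)), constraints)
problem.solve(solver=cp.SCS)
print(problem.status, "\n", M.value)
```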
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
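A deliberately simplified sketch of the general recipe behind the entry above (learn the dynamics with a GP, linearize the posterior mean, design an LQR gain); the paper's probabilistic stability margins and robust synthesis are not reproduced, and all dynamics and data below are illustrative assumptions.

```python
# Simplified sketch (illustrative, not the paper's synthesis): fit a GP to
# samples of unknown drift dynamics, linearize the posterior mean at the
# origin, and compute an LQR gain for the linearized model.
import numpy as np
from scipy.linalg import solve_continuous_are
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical true dynamics x' = f(x) + u, used only to generate data.
f = lambda x: -np.sin(x)
X = rng.uniform(-1.0, 1.0, size=(30, 1))
y = f(X).ravel() + 1e-3 * rng.standard_normal(30)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gp.fit(X, y)

# Linearize the GP posterior mean at x = 0 by central finite differences.
eps = 1e-4
dmu = (gp.predict([[eps]])[0] - gp.predict([[-eps]])[0]) / (2.0 * eps)
A = np.array([[dmu]])
B = np.array([[1.0]])  # input matrix assumed known

# Nominal LQR on the linearized mean dynamics (no probabilistic margin here).
Q, R = np.eye(1), np.eye(1)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
print("A_hat =", A.ravel(), "  LQR gain K =", K.ravel())
```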
- Learning-based Adaptive Control via Contraction Theory [7.918886297003018]
We present a new deep learning-based adaptive control framework for nonlinear systems with parametric uncertainty, called an adaptive Neural Contraction Metric (aNCM).
The aNCM uses a neural network model of an optimal adaptive contraction metric, the existence of which guarantees stability and exponential boundedness of system trajectories under the uncertainty.
arXiv Detail & Related papers (2021-03-04T12:19:52Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
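A bare-bones sketch of the min-norm CLF idea underlying the entry above, written as a plain QP with known dynamics; the paper's compound GP kernel and the SOCP treatment of uncertain input effects are not reproduced, and the dynamics and CLF below are illustrative assumptions.

```python
# Bare-bones min-norm CLF controller as a convex program (illustrative; the
# paper's GP-CLF-SOCP further accounts for GP model uncertainty via a
# second-order cone program). Dynamics and CLF below are assumptions.
# Control-affine system: x' = f(x) + g(x) u, CLF: V(x) = x^T x.
import cvxpy as cp
import numpy as np

def min_norm_clf_control(x, lam=1.0):
    f = np.array([x[1], -np.sin(x[0])])   # example drift term
    g = np.array([0.0, 1.0])              # example input vector
    V = float(x @ x)
    LfV = 2.0 * x @ f                     # Lie derivative of V along f
    LgV = 2.0 * x @ g                     # Lie derivative of V along g

    # min ||u||^2  subject to the CLF decrease condition V' <= -lam * V
    u = cp.Variable()
    problem = cp.Problem(cp.Minimize(cp.square(u)),
                         [LfV + LgV * u <= -lam * V])
    problem.solve()
    return u.value

print(min_norm_clf_control(np.array([0.5, -0.2])))
```

In practice a slack variable is usually added to the CLF constraint so the problem stays feasible when the input momentarily cannot affect V (i.e., when LgV vanishes).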
- Neural Stochastic Contraction Metrics for Learning-based Control and Estimation [13.751135823626493]
The NSCM framework allows autonomous agents to approximate optimal stable control and estimation policies in real-time.
It outperforms existing nonlinear control and estimation techniques, including the state-dependent Riccati equation, iterative LQR, the EKF, and the deterministic neural contraction metric (NCM).
arXiv Detail & Related papers (2020-11-06T03:04:42Z)
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
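A tiny sketch of the mechanism the entry above relies on (not the paper's architecture): stepping a matrix along the exponential map of a skew-symmetric generator keeps it exactly on the orthogonal group, which is what lets the nested flow avoid vanishing or exploding gradients.

```python
# Tiny sketch (not the paper's architecture): stepping a matrix with the
# exponential map of skew-symmetric generators keeps it exactly on the
# orthogonal group O(d), the mechanism ODEtoODE uses to stabilize training.
import numpy as np
from scipy.linalg import expm

d, h = 4, 0.1
rng = np.random.default_rng(0)
W = np.eye(d)                        # initial point on O(d)
for _ in range(50):
    S = rng.standard_normal((d, d))
    A = S - S.T                      # skew-symmetric, so expm(h*A) is orthogonal
    W = W @ expm(h * A)              # group operation keeps W on O(d)

print(np.allclose(W.T @ W, np.eye(d)))   # True: orthogonality preserved
```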
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
- Neural Contraction Metrics for Robust Estimation and Control: A Convex Optimization Approach [6.646482960350819]
This paper presents a new deep learning-based framework for robust nonlinear estimation and control using the concept of a Neural Contraction Metric (NCM).
The NCM uses a deep long short-term memory recurrent neural network for a global approximation of an optimal contraction metric.
We demonstrate how to exploit NCMs to design an online optimal estimator and controller for nonlinear systems with bounded disturbances utilizing their duality.
arXiv Detail & Related papers (2020-06-08T05:29:38Z)
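To make the recurring NCM construction concrete, here is a minimal sketch of a network that outputs a positive-definite metric M(x) via a Cholesky-factor parameterization; a plain MLP stands in for the LSTM used in the papers, and the training loop (fitting metrics computed offline by convex optimization) is omitted.

```python
# Minimal sketch (assumption: a plain MLP stands in for the papers' LSTM):
# a network mapping state x to a positive-definite contraction metric M(x)
# through a Cholesky-factor parameterization M = L L^T + eps*I.
import torch
import torch.nn as nn

class NeuralContractionMetric(nn.Module):
    def __init__(self, state_dim: int, hidden: int = 64, eps: float = 1e-3):
        super().__init__()
        self.n = state_dim
        self.eps = eps
        # the head outputs the n*(n+1)/2 entries of a lower-triangular factor L
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim * (state_dim + 1) // 2),
        )
        self.tril_idx = torch.tril_indices(state_dim, state_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        L = torch.zeros(x.shape[0], self.n, self.n, device=x.device)
        L[:, self.tril_idx[0], self.tril_idx[1]] = self.net(x)
        # M = L L^T + eps*I is positive definite by construction
        return L @ L.transpose(1, 2) + self.eps * torch.eye(self.n, device=x.device)

ncm = NeuralContractionMetric(state_dim=2)
M = ncm(torch.randn(5, 2))                      # batch of metrics, shape (5, 2, 2)
print(M.shape, bool(torch.linalg.eigvalsh(M).min() > 0))
```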