Formally Verified Physics-Informed Neural Control Lyapunov Functions
- URL: http://arxiv.org/abs/2409.20528v1
- Date: Mon, 30 Sep 2024 17:27:56 GMT
- Title: Formally Verified Physics-Informed Neural Control Lyapunov Functions
- Authors: Jun Liu, Maxwell Fitzsimmons, Ruikun Zhou, Yiming Meng
- Abstract summary: Control Lyapunov functions are a central tool in the design and analysis of stabilizing controllers for nonlinear systems.
In this paper, we investigate physics-informed learning and formal verification of neural network control Lyapunov functions.
- Score: 4.2162963332651575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Control Lyapunov functions are a central tool in the design and analysis of stabilizing controllers for nonlinear systems. Constructing such functions, however, remains a significant challenge. In this paper, we investigate physics-informed learning and formal verification of neural network control Lyapunov functions. These neural networks solve a transformed Hamilton-Jacobi-Bellman equation, augmented by data generated using Pontryagin's maximum principle. Similar to how Zubov's equation characterizes the domain of attraction for autonomous systems, this equation characterizes the null-controllability set of a controlled system. This principled learning of neural network control Lyapunov functions outperforms alternative approaches, such as sum-of-squares and rational control Lyapunov functions, as demonstrated by numerical examples. As an intermediate step, we also present results on the formal verification of quadratic control Lyapunov functions, which, aided by satisfiability modulo theories solvers, can perform surprisingly well compared to more sophisticated approaches and efficiently produce global certificates of null-controllability.
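The abstract outlines the core recipe: train a network to satisfy a Zubov-type equation whose solution characterizes the null-controllability set, augmented with data from Pontryagin's maximum principle. As a rough illustration only (the dynamics, network, residual form, and loss weights below are assumptions, not the authors' code), a physics-informed loss for such an equation might be set up as follows:

```python
# Hedged sketch of a physics-informed loss for a Zubov-type equation.
# The pendulum-like dynamics, the greedy control, and all weights are
# illustrative assumptions, not the paper's actual setup.
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1), torch.nn.Sigmoid(),  # keep W in (0, 1), as in Zubov's normalization
)

def f(x, u):
    # toy control-affine dynamics (assumption): pendulum-like system with torque input u
    theta, omega = x[:, :1], x[:, 1:]
    return torch.cat([omega, torch.sin(theta) + u], dim=1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = 4.0 * (2.0 * torch.rand(256, 2) - 1.0)  # sample the training domain [-4, 4]^2
    x.requires_grad_(True)
    W = net(x)
    gradW = torch.autograd.grad(W.sum(), x, create_graph=True)[0]
    # greedy bang-bang control with |u| <= 1: a crude stand-in for
    # minimizing the Hamiltonian over the input set
    u = -torch.sign(gradW[:, 1:]).detach()
    # simplified Zubov-type residual: grad W . f(x, u) + ||x||^2 (1 - W) = 0
    residual = (gradW * f(x, u)).sum(dim=1, keepdim=True) \
               + x.pow(2).sum(dim=1, keepdim=True) * (1.0 - W)
    loss = residual.pow(2).mean() + 10.0 * net(torch.zeros(1, 2)).pow(2).sum()  # W(0) = 0
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The sigmoid output keeps the candidate in (0, 1), mirroring the normalization in Zubov's setting; in the paper, this PDE loss is further augmented with trajectory data generated via Pontryagin's maximum principle.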
Related papers
- Learning and Verifying Maximal Taylor-Neural Lyapunov functions [0.4910937238451484]
We introduce a novel neural network architecture, termed Taylor-neural Lyapunov functions.
This architecture encodes local approximations and extends them globally by leveraging neural networks to approximate the residuals.
This work represents a significant advancement in control theory, with broad potential applications in the design of stable control systems and beyond.
arXiv Detail & Related papers (2024-08-30T12:40:12Z) - Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) are challenging to obtain for NN controllers applied to nonlinear dynamical systems.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z) - Learning Stabilizable Deep Dynamics Models [1.75320459412718]
We propose a new method for learning the dynamics of input-affine control systems.
An important feature is that a stabilizing controller and a control Lyapunov function for the learned model are obtained as well.
The proposed method can also be applied to solving Hamilton-Jacobi inequalities.
arXiv Detail & Related papers (2022-03-18T03:09:24Z)
- Total Energy Shaping with Neural Interconnection and Damping Assignment -- Passivity Based Control [2.1485350418225244]
We exploit the universal approximation property of Neural Networks (NNs) to design passivity-based control schemes.
The proposed control design methodology is validated for mechanical systems of one and two degrees-of-freedom.
arXiv Detail & Related papers (2021-12-24T08:41:17Z)
- Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\,u_i$, with linear dependence on the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
arXiv Detail & Related papers (2021-10-24T08:57:46Z)
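A hedged sketch of the underlying mechanism (the vector fields and controls below are invented for illustration): integrate the control-linear system under piecewise-constant controls and apply the resulting flow to an ensemble of points.

```python
# Sketch: flowing an ensemble of points through dx/dt = sum_i u_i(t) F_i(x)
# with piecewise-constant controls, via explicit Euler integration.
import numpy as np

def F1(x):
    return np.stack([x[:, 1], -x[:, 0]], axis=1)  # rotation field (assumption)

def F2(x):
    return x                                      # dilation field (assumption)

def flow(points, controls, dt=0.01, steps_per_segment=20):
    x = points.copy()
    for u1, u2 in controls:                       # one segment per constant control pair
        for _ in range(steps_per_segment):
            x = x + dt * (u1 * F1(x) + u2 * F2(x))  # Euler step
    return x

ensemble = np.random.default_rng(0).normal(size=(100, 2))
moved = flow(ensemble, controls=[(1.0, 0.0), (0.0, 0.5), (-0.5, 0.2)])
```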
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- A Theoretical Overview of Neural Contraction Metrics for Learning-based Control with Guaranteed Stability [7.963506386866862]
This paper presents a neural network model of an optimal contraction metric and corresponding differential Lyapunov function.
Its innovation lies in providing formal robustness guarantees for learning-based control frameworks.
arXiv Detail & Related papers (2021-10-02T00:28:49Z)
- Lyapunov-Regularized Reinforcement Learning for Power System Transient Stability [5.634825161148484]
This paper proposes a Lyapunov-regularized RL approach to optimal frequency control for transient stability in lossy networks.
A case study shows that the Lyapunov regularization yields a stabilizing controller that achieves smaller losses.
arXiv Detail & Related papers (2021-03-05T18:55:26Z)
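A rough sketch of what such a Lyapunov regularization term can look like in a learning objective (placeholder interfaces; not the paper's formulation):

```python
# Sketch: penalize violations of the Lyapunov decrease condition
# V(x_next) <= V(x) on top of an ordinary policy loss. V, the transition
# samples, and the weight are placeholders.
import torch

def lyapunov_regularized_loss(policy_loss, V, x, x_next, weight=1.0):
    violation = torch.relu(V(x_next) - V(x))  # positive only where V increases
    return policy_loss + weight * violation.mean()
```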
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
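For orientation, the deterministic min-norm CLF problem underlying GP-CLF-SOCP can be sketched as below; the GP posterior terms that turn it into a second-order cone program are omitted, and the dynamics and CLF are illustrative assumptions.

```python
# Sketch: pointwise min-norm CLF controller as a small convex program.
# Dynamics f, g and the quadratic CLF V(x) = x^T P x are assumptions.
import cvxpy as cp
import numpy as np

x = np.array([1.0, -0.5])              # current state (example)
f = np.array([x[1], np.sin(x[0])])     # drift of dx/dt = f(x) + g(x) u
g = np.array([0.0, 1.0])
P = np.eye(2)                          # V(x) = x^T P x
gradV = 2.0 * P @ x
LfV = float(gradV @ f)                 # Lie derivative along the drift
LgV = float(gradV @ g)                 # Lie derivative along the input direction

u = cp.Variable()
delta = cp.Variable(nonneg=True)       # slack to keep the problem feasible
alpha = 1.0                            # class-K decrease rate
constraints = [LfV + LgV * u + alpha * float(x @ P @ x) <= delta]
prob = cp.Problem(cp.Minimize(cp.square(u) + 100.0 * delta), constraints)
prob.solve()
print(u.value)
```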
- Automated and Sound Synthesis of Lyapunov Functions with SMT Solvers [70.70479436076238]
We synthesise Lyapunov functions for linear, non-linear (polynomial) and parametric models.
We exploit an inductive framework to synthesise Lyapunov functions, starting from parametric templates.
arXiv Detail & Related papers (2020-07-21T14:45:23Z)
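A minimal sketch of SMT-based verification in this spirit, which also echoes the quadratic-CLF verification reported in the main paper above; the toy system and candidate are chosen for illustration:

```python
# Sketch: ask Z3 whether V(x) = x1^2 + x2^2 can fail to decrease along
# trajectories of the toy system dx1/dt = -x1 + x2, dx2/dt = -x2.
from z3 import Reals, Solver, Or, unsat

x1, x2 = Reals("x1 x2")
V_dot = 2 * x1 * (-x1 + x2) + 2 * x2 * (-x2)  # dV/dt along trajectories
s = Solver()
s.add(Or(x1 != 0, x2 != 0), V_dot >= 0)       # search for a decrease-condition violation
if s.check() == unsat:
    print("verified: V strictly decreases away from the origin")
else:
    print("counterexample:", s.model())
```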
- Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees.
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
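A schematic counterexample-guided (CEGIS) loop of the kind this entry describes, heavily simplified: the candidate is a quadratic template rather than a neural network, and the "learner" is a crude heuristic update.

```python
# Sketch: a verifier (Z3) returns counterexamples; a learner repairs the
# candidate V = p1*x1^2 + p2*x2^2 for dx/dt = (-x1 + x2, -x2) until no
# counterexample remains. Template, system, and update rule are toys.
from z3 import Reals, Solver, Or, sat

x1, x2 = Reals("x1 x2")

def to_float(v):
    return float(v.as_decimal(12).rstrip("?"))  # handles rational and algebraic values

def find_counterexample(p1, p2):
    v_dot = 2 * p1 * x1 * (-x1 + x2) + 2 * p2 * x2 * (-x2)
    s = Solver()
    s.add(Or(x1 != 0, x2 != 0), v_dot >= 0)     # nonzero state where V fails to decrease
    if s.check() == sat:
        m = s.model()
        return (to_float(m.eval(x1, model_completion=True)),
                to_float(m.eval(x2, model_completion=True)))
    return None                                  # verified

p = [1.0, 0.1]                                   # deliberately bad initial template weights
counterexamples = []
for _ in range(50):
    cex = find_counterexample(*p)
    if cex is None:
        print("verified Lyapunov function: V = %.1f*x1^2 + %.1f*x2^2" % (p[0], p[1]))
        break
    counterexamples.append(cex)
    # learner: heuristic repair driven by all counterexamples collected so far
    for (c1, c2) in counterexamples:
        if 2 * p[0] * c1 * (-c1 + c2) + 2 * p[1] * c2 * (-c2) >= 0:
            p[1] += 0.1
```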
This list is automatically generated from the titles and abstracts of the papers in this site.