Analytical Lyapunov Function Discovery: An RL-based Generative Approach
- URL: http://arxiv.org/abs/2502.02014v2
- Date: Tue, 11 Feb 2025 00:19:47 GMT
- Title: Analytical Lyapunov Function Discovery: An RL-based Generative Approach
- Authors: Haohan Zou, Jie Feng, Hao Zhao, Yuanyuan Shi
- Abstract summary: We propose an end-to-end framework using transformers to construct local analytical Lyapunov functions.
Our framework consists of a transformer-based trainer that generates candidate Lyapunov functions and a falsifier that verifies candidate expressions.
We show that our approach can discover Lyapunov functions not previously identified in the control literature.
- Score: 6.752429418580116
- Abstract: Despite advances in learning-based methods, finding valid Lyapunov functions for nonlinear dynamical systems remains challenging. Current neural network approaches face two main issues: challenges in scalable verification and limited interpretability. To address these, we propose an end-to-end framework using transformers to construct local analytical Lyapunov functions, which simplifies formal verification, enhances interpretability, and provides valuable insights for control engineers. Our framework consists of a transformer-based trainer that generates candidate Lyapunov functions and a falsifier that verifies candidate expressions and refines the model via risk-seeking policy gradient. Unlike Alfarano et al. (2024), which utilizes pre-training and seeks global Lyapunov functions for low-dimensional systems, our model is trained from scratch via reinforcement learning (RL) and succeeds in finding local Lyapunov functions for high-dimensional and non-polynomial systems. Given the analytical nature of the candidates, we employ efficient optimization methods for falsification during training and formal verification tools for the final verification. We demonstrate the efficiency of our approach on a range of nonlinear dynamical systems with up to ten dimensions and show that it can discover Lyapunov functions not previously identified in the control literature.
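The falsification step described in the abstract can be illustrated with a minimal sketch: given a candidate V(x), check that V is positive and its Lie derivative along the dynamics is negative away from the origin. The paper uses efficient optimization methods and formal verification tools; the toy version below uses random sampling and finite differences instead, and all function names are illustrative, not the authors' API.

```python
# Toy falsifier for a candidate Lyapunov function V on system xdot = f(x).
# A counterexample is any sampled state x != 0 with V(x) <= 0 or Vdot(x) >= 0.
import random

def lie_derivative(V, f, x, eps=1e-6):
    """Approximate Vdot(x) = grad V(x) . f(x) by central finite differences."""
    fx = f(x)
    total = 0.0
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        total += (V(xp) - V(xm)) / (2 * eps) * fx[i]
    return total

def falsify(V, f, dim, radius=1.0, n_samples=2000, seed=0):
    """Return a counterexample state in [-radius, radius]^dim, or None."""
    rng = random.Random(seed)
    for _ in range(n_samples):
        x = [rng.uniform(-radius, radius) for _ in range(dim)]
        if max(abs(c) for c in x) < 1e-3:
            continue  # skip points too close to the origin
        if V(x) <= 0 or lie_derivative(V, f, x) >= 0:
            return x
    return None

# Example: V(x) = x1^2 + x2^2 for the stable linear system xdot = -x.
V = lambda x: x[0] ** 2 + x[1] ** 2
f = lambda x: [-x[0], -x[1]]
print(falsify(V, f, dim=2))  # -> None (no counterexample found)
```

In the paper's training loop, a surviving candidate would still go to a formal verifier; sampling alone can only refute candidates, never certify them.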
Related papers
- LeARN: Learnable and Adaptive Representations for Nonlinear Dynamics in System Identification [0.0]
We introduce a nonlinear system identification framework called LeARN.
It transcends the need for prior domain knowledge by learning the library of basis functions directly from data.
We validate our framework on the Neural Fly dataset, showcasing its robust adaptation capabilities.
arXiv Detail & Related papers (2024-12-16T18:03:23Z) - Formally Verified Physics-Informed Neural Control Lyapunov Functions [4.2162963332651575]
Control Lyapunov functions are a central tool in the design and analysis of stabilizing controllers for nonlinear systems.
In this paper, we investigate physics-informed learning and formal verification of neural network control Lyapunov functions.
arXiv Detail & Related papers (2024-09-30T17:27:56Z) - Learning and Verifying Maximal Taylor-Neural Lyapunov functions [0.4910937238451484]
We introduce a novel neural network architecture, termed Taylor-neural Lyapunov functions.
This architecture encodes local approximations and extends them globally by leveraging neural networks to approximate the residuals.
This work represents a significant advancement in control theory, with broad potential applications in the design of stable control systems and beyond.
arXiv Detail & Related papers (2024-08-30T12:40:12Z) - Directed Exploration in Reinforcement Learning from Linear Temporal Logic [59.707408697394534]
Linear temporal logic (LTL) is a powerful language for task specification in reinforcement learning.
We show that the synthesized reward signal remains fundamentally sparse, making exploration challenging.
We show how better exploration can be achieved by further leveraging the specification and casting its corresponding Limit Deterministic Büchi Automaton (LDBA) as a Markov reward process.
arXiv Detail & Related papers (2024-08-18T14:25:44Z) - Decomposing Control Lyapunov Functions for Efficient Reinforcement Learning [10.117626902557927]
Current Reinforcement Learning (RL) methods require large amounts of data to learn a specific task, leading to unreasonable costs when deploying the agent to collect data in real-world applications.
In this paper, we build from existing work that reshapes the reward function in RL by introducing a Control Lyapunov Function (CLF) to reduce the sample complexity.
We show that our method finds a policy to successfully land a quadcopter in less than half the amount of real-world data required by the state-of-the-art Soft-Actor Critic algorithm.
arXiv Detail & Related papers (2024-03-18T19:51:17Z) - Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies with historical data, has been extensively applied in real-life applications.
We take a step by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z) - Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z) - Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns a set of basis functions for this representation using supervised learning.
arXiv Detail & Related papers (2021-09-06T04:39:06Z) - Automated and Sound Synthesis of Lyapunov Functions with SMT Solvers [70.70479436076238]
We synthesise Lyapunov functions for linear, non-linear (polynomial) and parametric models.
We exploit an inductive framework to synthesise Lyapunov functions, starting from parametric templates.
arXiv Detail & Related papers (2020-07-21T14:45:23Z) - Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing equal or stronger guarantees.
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
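The counterexample-guided approach in the last item above (a learner proposing candidates and a verifier searching for violations) can be sketched in miniature. This is not the paper's method: the real systems use neural learners and symbolic/SMT verifiers, while this toy uses a quadratic template with a hand-written repair step and a sampling-based verifier; every name below is illustrative.

```python
# Toy CEGIS-style loop: learner adjusts a quadratic template
# V(x) = p1*x1^2 + p2*x2^2, verifier searches for counterexamples.
import random

def make_V(p):
    return lambda x: p[0] * x[0] ** 2 + p[1] * x[1] ** 2

def vdot(p, f, x):
    # For the quadratic template, Vdot = 2*p1*x1*f1(x) + 2*p2*x2*f2(x).
    fx = f(x)
    return 2 * p[0] * x[0] * fx[0] + 2 * p[1] * x[1] * fx[1]

def verify(p, f, n=1000, rng=None):
    """Sampling 'verifier': return a counterexample in [-1, 1]^2, or None."""
    rng = rng or random.Random(0)
    V = make_V(p)
    for _ in range(n):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        if abs(x[0]) + abs(x[1]) < 1e-3:
            continue  # skip points too close to the origin
        if V(x) <= 0 or vdot(p, f, x) >= 0:
            return x
    return None

def cegis(f, max_iters=50):
    rng = random.Random(1)
    p = [rng.uniform(-1, 1), rng.uniform(-1, 1)]  # random initial template
    for _ in range(max_iters):
        if verify(p, f, rng=rng) is None:
            return p  # candidate passed the (sampling-based) verifier
        # Toy "learning" step: push the template toward positive
        # definiteness; real learners do gradient updates on the
        # counterexample instead.
        p = [abs(p[0]) + 0.1, abs(p[1]) + 0.1]
    return None

# Stable linear system xdot = (-x1, -2*x2): any positive-definite
# quadratic V works, so the loop terminates quickly.
p = cegis(lambda x: [-x[0], -2 * x[1]])
print(p)
```

As with the falsifier sketch earlier, a sampling verifier can only refute; the cited works replace it with a symbolic verifier so that a returned candidate comes with a proof.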
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.