LyZNet: A Lightweight Python Tool for Learning and Verifying Neural Lyapunov Functions and Regions of Attraction
- URL: http://arxiv.org/abs/2403.10013v1
- Date: Fri, 15 Mar 2024 04:35:56 GMT
- Title: LyZNet: A Lightweight Python Tool for Learning and Verifying Neural Lyapunov Functions and Regions of Attraction
- Authors: Jun Liu, Yiming Meng, Maxwell Fitzsimmons, Ruikun Zhou
- Abstract summary: We describe a Python framework that provides integrated learning and verification of neural Lyapunov functions for stability analysis.
The proposed tool, named LyZNet, learns neural Lyapunov functions using physics-informed neural networks (PINNs) to solve Zubov's equation.
The tool also offers automatic decomposition of coupled nonlinear systems into a network of low-dimensional subsystems for compositional verification.
- Score: 4.2162963332651575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we describe a lightweight Python framework that provides integrated learning and verification of neural Lyapunov functions for stability analysis. The proposed tool, named LyZNet, learns neural Lyapunov functions using physics-informed neural networks (PINNs) to solve Zubov's equation and verifies them using satisfiability modulo theories (SMT) solvers. What distinguishes this tool from others in the literature is its ability to provide verified regions of attraction close to the domain of attraction. This is achieved by encoding Zubov's partial differential equation (PDE) into the PINN approach. By embracing the non-convex nature of the underlying optimization problems, we demonstrate that in cases where convex optimization, such as semidefinite programming, fails to capture the domain of attraction, our neural network framework proves more successful. The tool also offers automatic decomposition of coupled nonlinear systems into a network of low-dimensional subsystems for compositional verification. We illustrate the tool's usage and effectiveness with several numerical examples, including both non-trivial low-dimensional nonlinear systems and high-dimensional systems. The repository of the tool can be found at https://git.uwaterloo.ca/hybrid-systems-lab/lyznet.
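At its core, the learning step trains a network W to satisfy Zubov's PDE by penalizing the PDE residual at sampled collocation points. The following PyTorch sketch illustrates this idea for a classical form of Zubov's equation, dW/dx · f(x) = -h(x) (1 - W(x)) with h(x) = |x|^2, on the time-reversed Van der Pol oscillator; it is a minimal illustration, not the LyZNet API, and the network architecture, collocation sampling, and training loop are assumptions made for the example.

```python
# Minimal PINN sketch for Zubov's equation (illustrative; not the LyZNet API).
# The residual of dW/dx . f(x) + h(x) * (1 - W(x)) = 0 is penalized at random
# collocation points, with W(0) = 0 enforced as a boundary term.
import torch
import torch.nn as nn

def f(x):
    # time-reversed Van der Pol dynamics: the origin is asymptotically stable
    x1, x2 = x[:, 0:1], x[:, 1:2]
    return torch.cat([-x2, x1 - (1.0 - x1**2) * x2], dim=1)

W = nn.Sequential(nn.Linear(2, 30), nn.Tanh(),
                  nn.Linear(30, 30), nn.Tanh(),
                  nn.Linear(30, 1), nn.Sigmoid())     # keeps W(x) in (0, 1)

def zubov_loss(x):
    x = x.requires_grad_(True)
    w = W(x)
    grad_w, = torch.autograd.grad(w.sum(), x, create_graph=True)
    lie_w = (grad_w * f(x)).sum(dim=1, keepdim=True)  # dW/dx . f(x)
    h = (x**2).sum(dim=1, keepdim=True)               # h(x) = |x|^2
    residual = lie_w + h * (1.0 - w)                  # Zubov PDE residual
    boundary = W(torch.zeros(1, 2))**2                # W(0) = 0
    return (residual**2).mean() + boundary.mean()

opt = torch.optim.Adam(W.parameters(), lr=1e-3)
for _ in range(2000):
    x = 6.0 * (torch.rand(256, 2) - 0.5)              # collocation points in [-3, 3]^2
    opt.zero_grad()
    zubov_loss(x).backward()
    opt.step()
```

By Zubov's theorem, sublevel sets {x : W(x) <= c} with c close to 1 approximate the domain of attraction; the tool's contribution is to certify such sets with SMT solvers rather than trusting the training loss alone.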
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gains on the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Training Deep Neural Network (DNN) models involves solving non-convex optimization problems.
In this paper we examine the use of convex neural recovery models.
We show that all stationary points of the non-convex training objective can be characterized as global optima of a subsampled convex program.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Physics-Informed Neural Network Lyapunov Functions: PDE
Characterization, Learning, and Verification [4.606000847428821]
We show that using the Zubov equation in training neural Lyapunov functions can lead to approximate regions of attraction close to the true domain of attraction.
We then provide sufficient conditions for the learned neural Lyapunov functions that can be readily verified by satisfiability modulo theories (SMT) solvers.
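The verification step in this line of work amounts to asking an SMT solver whether a counterexample to the Lyapunov conditions exists on a given region, and reading an unsat answer as a proof. Below is a minimal illustration of such a query using Z3's Python API on a hand-picked quadratic candidate and a linear system; the solver choice (tools in this area often use nonlinear SMT solvers such as dReal), the system, the candidate, and the region are all assumptions made for the example.

```python
# Illustrative SMT check (not the papers' verifier): does any point in the
# level-set region 0.01 <= V(x) <= 1 violate the decrease condition
# dV/dx . f(x) < 0 for V(x) = x1^2 + x2^2 and f(x) = (-x1 + x2, -x1 - x2)?
from z3 import Reals, Solver, And, sat

x1, x2 = Reals('x1 x2')
f1, f2 = -x1 + x2, -x1 - x2
V = x1**2 + x2**2
lie_V = 2*x1*f1 + 2*x2*f2                    # dV/dx . f(x)

s = Solver()
s.add(And(V >= 0.01, V <= 1, lie_V >= 0))    # negation of the decrease condition

if s.check() == sat:
    print("counterexample:", s.model())       # condition fails somewhere
else:
    print("decrease condition verified on the region")   # unsat = proof
```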
arXiv Detail & Related papers (2023-12-14T17:01:58Z) - Solving Elliptic Optimal Control Problems via Neural Networks and Optimality System [3.8704302640118864]
We investigate a neural network based solver for optimal control problems (with or without box constraints).
It employs deep neural networks to represent the solutions to the reduced system.
We present several numerical examples to illustrate the method and compare it with two existing ones.
arXiv Detail & Related papers (2023-08-23T05:18:19Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - FuNNscope: Visual microscope for interactively exploring the loss
landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible application scenarios.
arXiv Detail & Related papers (2022-04-09T16:41:53Z) - Level set learning with pseudo-reversible neural networks for nonlinear
dimension reduction in function approximation [8.28646586439284]
We propose a new method of Dimension Reduction via Learning Level Sets (DRiLLS) for function approximation.
Our method contains two major components: one is the pseudo-reversible neural network (PRNN) module that effectively transforms high-dimensional input variables to low-dimensional active variables.
The PRNN not only relaxes the invertibility constraint of the nonlinear transformation present in the NLL method due to the use of RevNet, but also adaptively weights the influence of each sample and controls the sensitivity of the function to the learned active variables.
arXiv Detail & Related papers (2021-12-02T17:25:34Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - CDiNN -Convex Difference Neural Networks [0.8122270502556374]
Neural networks with ReLU activation functions have been shown to be universal function approximators that learn function mappings as non-smooth functions.
A newer architecture called input convex neural networks (ICNNs) learns the output as a convex function of the input.
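An input convex neural network (ICNN) makes its scalar output a convex function of the input by using convex, non-decreasing activations and restricting hidden-to-hidden weights to be non-negative, while direct connections from the input carry unconstrained weights. The PyTorch sketch below is an illustrative reconstruction of that construction, not code from the CDiNN paper; layer sizes and the weight-clamping scheme are assumptions.

```python
# Minimal ICNN sketch: the output is convex in x because the z-path weights
# are clamped to be non-negative and ReLU is convex and non-decreasing; the
# direct x-path layers may have arbitrary sign. Sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    def __init__(self, d_in=2, d_hidden=32):
        super().__init__()
        self.Wx0 = nn.Linear(d_in, d_hidden)              # input -> first layer
        self.Wx1 = nn.Linear(d_in, d_hidden, bias=False)   # input passthroughs
        self.Wx2 = nn.Linear(d_in, 1, bias=False)
        self.Wz1 = nn.Linear(d_hidden, d_hidden)
        self.Wz2 = nn.Linear(d_hidden, 1)

    def forward(self, x):
        z1 = F.relu(self.Wx0(x))
        # clamp the z-path weights to keep the composition convex in x
        z2 = F.relu(F.linear(z1, self.Wz1.weight.clamp(min=0), self.Wz1.bias)
                    + self.Wx1(x))
        return (F.linear(z2, self.Wz2.weight.clamp(min=0), self.Wz2.bias)
                + self.Wx2(x))

model = ICNN()
y = model(torch.randn(8, 2))    # shape (8, 1); each output is convex in its input
```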
arXiv Detail & Related papers (2021-03-31T17:31:16Z) - Neural Network Approximations of Compositional Functions With
Applications to Dynamical Systems [3.660098145214465]
We develop an approximation theory for compositional functions and their neural network approximations.
We identify a set of key features of compositional functions and the relationship between the features and the complexity of neural networks.
In addition to function approximation results, we prove several formulae for upper bounds on the approximation error of neural networks.
arXiv Detail & Related papers (2020-12-03T04:40:25Z) - Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees (a generic sketch of such a counterexample-guided loop is given after this list).
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
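The counterexample-guided interaction described in the Formal Synthesis of Lyapunov Neural Networks entry above follows the usual CEGIS pattern: a numerical learner proposes a candidate, a verifier searches for a state violating the Lyapunov conditions, and any counterexample is added to the learner's data before the next round. The sketch below is a generic, toy instance of that pattern; the quadratic parameterization, the grid-search stand-in for a symbolic verifier, and all constants are assumptions, not the paper's method.

```python
# Toy CEGIS-style loop (illustrative; not the paper's method or guarantees):
# a gradient-based learner adjusts V(x) = x'Px while a grid-search "verifier"
# stands in for the symbolic verifier and supplies counterexamples.
import torch

A = torch.tensor([[0.0, 1.0], [-1.0, -1.0]])    # stable linear system xdot = Ax
L = torch.eye(2, requires_grad=True)             # P = L L' + 0.01 I stays positive definite
opt = torch.optim.Adam([L], lr=0.05)
eps = 0.1                                        # required decrease margin

def vdot(P, x):                                  # d/dt of x'Px along xdot = Ax
    return 2 * ((x @ P) * (x @ A.T)).sum(dim=1)

def verify(P, n=41):
    """Grid search for a violation of vdot(x) <= -eps*|x|^2 away from the origin."""
    g = torch.linspace(-1.0, 1.0, n)
    xs = torch.cartesian_prod(g, g)
    xs = xs[xs.norm(dim=1) > 0.05]
    bad = vdot(P, xs) + eps * (xs**2).sum(dim=1) > 0
    return xs[bad][:1] if bad.any() else None    # one counterexample, or None

samples = torch.empty(0, 2)
for k in range(200):
    P = (L @ L.T + 0.01 * torch.eye(2)).detach()
    cex = verify(P)
    if cex is None:
        print(f"candidate verified after {k} rounds:\n", P)
        break
    samples = torch.cat([samples, cex])           # grow the counterexample set
    for _ in range(50):                           # learner: repair V on the samples
        opt.zero_grad()
        P = L @ L.T + 0.01 * torch.eye(2)
        loss = torch.relu(vdot(P, samples) + eps * (samples**2).sum(dim=1)).sum()
        loss.backward()
        opt.step()
else:
    print("not verified within the round budget")
```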
This list is automatically generated from the titles and abstracts of the papers in this site.