Physics-Informed Neural Network Lyapunov Functions: PDE
Characterization, Learning, and Verification
- URL: http://arxiv.org/abs/2312.09131v3
- Date: Thu, 21 Dec 2023 18:10:28 GMT
- Title: Physics-Informed Neural Network Lyapunov Functions: PDE
Characterization, Learning, and Verification
- Authors: Jun Liu and Yiming Meng and Maxwell Fitzsimmons and Ruikun Zhou
- Abstract summary: We show that using the Zubov equation in training neural Lyapunov functions can lead to approximate regions of attraction close to the true domain of attraction.
We then provide sufficient conditions for the learned neural Lyapunov functions that can be readily verified by satisfiability modulo theories (SMT) solvers.
- Score: 4.606000847428821
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We provide a systematic investigation of using physics-informed neural
networks to compute Lyapunov functions. We encode Lyapunov conditions as a
partial differential equation (PDE) and use this for training neural network
Lyapunov functions. We analyze the analytical properties of the solutions to
the Lyapunov and Zubov PDEs. In particular, we show that employing the Zubov
equation in training neural Lyapunov functions can lead to approximate regions
of attraction close to the true domain of attraction. We also examine
approximation errors and the convergence of neural approximations to the unique
solution of Zubov's equation. We then provide sufficient conditions for the
learned neural Lyapunov functions that can be readily verified by
satisfiability modulo theories (SMT) solvers, enabling formal verification of
both local stability analysis and region-of-attraction estimates in the large.
Through a number of nonlinear examples, ranging from low to high dimensions, we
demonstrate that the proposed framework can outperform traditional
sums-of-squares (SOS) Lyapunov functions obtained using semidefinite
programming (SDP).
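The core recipe above (encode the Lyapunov/Zubov conditions as a PDE, then train a network to minimize the PDE residual at sampled collocation points) can be sketched in a few lines. The following is a minimal, hedged illustration rather than the authors' code: the reversed Van der Pol dynamics, the choice h(x) = ||x||^2, the sigmoid output, the training domain, and all hyperparameters are assumptions for illustration, and the paper's actual loss may include additional terms (e.g., data and boundary conditions) and a different Zubov formulation.
```python
# Minimal sketch: physics-informed training of a Zubov-type neural Lyapunov function.
# Zubov's PDE (standard form): grad(W)(x) . f(x) = -h(x) * (1 - W(x)), with W(0) = 0.
import torch

def f(x):
    # Example dynamics: reversed Van der Pol oscillator (illustrative assumption).
    x1, x2 = x[:, 0:1], x[:, 1:2]
    return torch.cat([-x2, x1 - (1.0 - x1**2) * x2], dim=1)

class ZubovNet(torch.nn.Module):
    """Small fully connected network; the sigmoid output keeps W(x) in (0, 1)."""
    def __init__(self, hidden=30):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

model = ZubovNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # Collocation points in the assumed training domain [-2, 2]^2.
    x = (4.0 * torch.rand(512, 2) - 2.0).requires_grad_(True)
    w = model(x)
    grad_w = torch.autograd.grad(w.sum(), x, create_graph=True)[0]
    h = (x**2).sum(dim=1, keepdim=True)          # h(x) = ||x||^2, a common choice
    # Residual of Zubov's PDE: grad(W) . f + h * (1 - W) = 0.
    residual = (grad_w * f(x)).sum(dim=1, keepdim=True) + h * (1.0 - w)
    loss = residual.pow(2).mean() + model(torch.zeros(1, 2)).pow(2).sum()  # enforce W(0) = 0
    opt.zero_grad()
    loss.backward()
    opt.step()
```
After training, the pipeline described in the abstract checks the learned function against sufficient conditions with an SMT solver; a toy illustration of such a falsification query is given after the related-papers list below.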
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - LyZNet: A Lightweight Python Tool for Learning and Verifying Neural Lyapunov Functions and Regions of Attraction [4.2162963332651575]
We describe a Python framework that provides integrated learning and verification of neural Lyapunov functions for stability analysis.
The proposed tool, named LyZNet, learns neural Lyapunov functions using physics-informed neural networks (PINNs) to solve Zubov's equation.
The tool also offers automatic decomposition of coupled nonlinear systems into a network of low-dimensional subsystems for compositional verification.
arXiv Detail & Related papers (2024-03-15T04:35:56Z) - RBF-PINN: Non-Fourier Positional Embedding in Physics-Informed Neural Networks [1.9819034119774483]
We highlight the limitations of widely used Fourier-based feature mapping in certain situations.
We suggest the use of the conditionally positive definite Radial Basis Function.
Our method can be seamlessly integrated into coordinate-based input neural networks.
arXiv Detail & Related papers (2024-02-13T10:54:43Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Convergence analysis of unsupervised Legendre-Galerkin neural networks
for linear second-order elliptic PDEs [0.8594140167290099]
We perform the convergence analysis of unsupervised Legendre-Galerkin neural networks (ULGNet).
ULGNet is a deep-learning-based numerical method for solving partial differential equations (PDEs).
arXiv Detail & Related papers (2022-11-16T13:31:03Z) - Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - A Physics Informed Neural Network Approach to Solution and
Identification of Biharmonic Equations of Elasticity [0.0]
We explore an application of the Physics Informed Neural Networks (PINNs) in conjunction with Airy stress functions and Fourier series.
We find that enriching feature space using Airy stress functions can significantly improve the accuracy of PINN solutions for biharmonic PDEs.
arXiv Detail & Related papers (2021-08-16T17:19:50Z) - Neural Network Approximations of Compositional Functions With
Applications to Dynamical Systems [3.660098145214465]
We develop an approximation theory for compositional functions and their neural network approximations.
We identify a set of key features of compositional functions and the relationship between the features and the complexity of neural networks.
In addition to function approximations, we prove several formulae of error upper bounds for neural networks.
arXiv Detail & Related papers (2020-12-03T04:40:25Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees.
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
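Both the main paper above and the last entry in this list certify learned candidates with a symbolic (SMT) verification step. Below is a toy sketch of such a falsification query, assuming the Python bindings of the dReal SMT solver; the tiny hand-written network stands in for a trained neural Lyapunov function, and this is neither the authors' tooling nor the LyZNet API. If the query is unsatisfiable (dReal returns None), the decrease condition holds on the stated annulus up to the solver's delta precision.
```python
# Toy falsification query for the Lyapunov decrease condition (assumes the `dreal` package).
from dreal import Variable, And, CheckSatisfiability, tanh

x1, x2 = Variable("x1"), Variable("x2")

# Illustrative one-hidden-layer "network" V(x); real weights would come from training.
h1 = tanh(0.5 * x1 - 0.3 * x2)
h2 = tanh(0.2 * x1 + 0.7 * x2)
V = 1.3 * h1 * h1 + 0.8 * h2 * h2     # nonnegative by construction, V(0) = 0

# Same assumed dynamics as in the training sketch above (reversed Van der Pol).
f1 = -x2
f2 = x1 - (1.0 - x1 * x1) * x2

# Lie derivative dV/dt = dV/dx1 * f1 + dV/dx2 * f2, expanded by hand for the toy network.
dV_dx1 = 2 * 1.3 * h1 * (1 - h1 * h1) * 0.5 + 2 * 0.8 * h2 * (1 - h2 * h2) * 0.2
dV_dx2 = 2 * 1.3 * h1 * (1 - h1 * h1) * (-0.3) + 2 * 0.8 * h2 * (1 - h2 * h2) * 0.7
lie = dV_dx1 * f1 + dV_dx2 * f2

# Does the decrease condition fail anywhere in the annulus 0.1 <= ||x|| <= 1?
violation = And(x1 * x1 + x2 * x2 >= 0.01,
                x1 * x1 + x2 * x2 <= 1.0,
                lie >= 0.0)
result = CheckSatisfiability(violation, 0.001)
print(result)  # None: no violation found; otherwise a counterexample box is returned
```
A full verification along the lines of the abstract would additionally encode the positivity and level-set containment conditions and use the actual weights exported from the trained model.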