Lyapunov-Net: A Deep Neural Network Architecture for Lyapunov Function
Approximation
- URL: http://arxiv.org/abs/2109.13359v1
- Date: Mon, 27 Sep 2021 21:42:19 GMT
- Title: Lyapunov-Net: A Deep Neural Network Architecture for Lyapunov Function
Approximation
- Authors: Nathan Gaby and Fumin Zhang and Xiaojing Ye
- Abstract summary: We develop a versatile deep neural network architecture, called Lyapunov-Net, to approximate Lyapunov functions in high dimensions.
Lyapunov-Net guarantees positive definiteness, and thus it can be easily trained to satisfy the negative orbital derivative condition.
- Score: 7.469944784454579
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop a versatile deep neural network architecture, called Lyapunov-Net,
to approximate Lyapunov functions of dynamical systems in high dimensions.
Lyapunov-Net guarantees positive definiteness, and thus it can be easily
trained to satisfy the negative orbital derivative condition, which only
renders a single term in the empirical risk function in practice. This
significantly reduces the number of hyper-parameters compared to existing
methods. We also provide theoretical justifications on the approximation power
of Lyapunov-Net and its complexity bounds. We demonstrate the efficiency of the
proposed method on nonlinear dynamical systems involving up to 30-dimensional
state spaces, and show that the proposed approach significantly outperforms the
state-of-the-art methods.
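To make the construction described in the abstract concrete, the sketch below shows one way a network can guarantee positive definiteness by design, so that training reduces to a single hinge term on the orbital derivative. This is a minimal PyTorch sketch under stated assumptions: the candidate form V(x) = |phi(x) - phi(0)|^2 + delta*||x||^2, the names `LyapunovNet` and `orbital_derivative_loss`, and the hinge `margin` are illustrative choices and not necessarily the exact architecture or loss used in the paper.

```python
# Illustrative sketch only (assumptions noted above), not the paper's exact code.
import torch
import torch.nn as nn

class LyapunovNet(nn.Module):
    """Candidate Lyapunov function that is positive definite by construction:
    V(x) = |phi(x) - phi(0)|^2 + delta * ||x||^2, so V(0) = 0 and V(x) > 0 for x != 0."""

    def __init__(self, dim, width=64, delta=1e-2):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )
        self.delta = delta

    def forward(self, x):
        # phi(0) is recomputed per batch for simplicity; it is just a constant offset.
        phi0 = self.phi(torch.zeros_like(x))
        return (self.phi(x) - phi0).pow(2).sum(-1) + self.delta * x.pow(2).sum(-1)

def orbital_derivative_loss(V_net, x, f, margin=0.0):
    """Single-term empirical risk: hinge penalty on the orbital derivative
    dV/dt = <grad V(x), f(x)>, which should be negative away from the equilibrium."""
    x = x.clone().requires_grad_(True)
    V = V_net(x)
    grad_V, = torch.autograd.grad(V.sum(), x, create_graph=True)
    dVdt = (grad_V * f(x)).sum(-1)
    return torch.relu(dVdt + margin).mean()

# Usage sketch: f is the known vector field of the dynamical system.
# f = lambda x: -x                       # e.g. a simple stable linear system
# V_net = LyapunovNet(dim=30)
# loss = orbital_derivative_loss(V_net, torch.randn(256, 30), f, margin=1e-3)
# loss.backward()
```

Because positive definiteness is enforced by the architecture itself, the only penalty that needs to be tuned is the one on the orbital derivative, which is the hyper-parameter reduction the abstract refers to.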
Related papers
- Learning and Verifying Maximal Taylor-Neural Lyapunov functions [0.4910937238451484]
We introduce a novel neural network architecture, termed Taylor-neural Lyapunov functions.
This architecture encodes local approximations and extends them globally by leveraging neural networks to approximate the residuals.
This work represents a significant advancement in control theory, with broad potential applications in the design of stable control systems and beyond.
arXiv Detail & Related papers (2024-08-30T12:40:12Z) - A simple algorithm for output range analysis for deep neural networks [0.0]
This paper presents a novel approach to the output range estimation problem in Deep Neural Networks (DNNs) based on a Simulated Annealing (SA) algorithm.
The method effectively addresses the challenges posed by the lack of geometric information and the non-linearity inherent in ResNets.
arXiv Detail & Related papers (2024-07-02T22:47:40Z) - Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z) - Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that training only the scalar batchnorm parameters from some point onward during training matches the performance of training the entire network.
arXiv Detail & Related papers (2024-03-12T07:32:47Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions to solving a given task.
Our method then implements a greedy approach reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - LyaNet: A Lyapunov Framework for Training Neural ODEs [59.73633363494646]
We propose a method for training neural ordinary differential equations by using a control-theoretic Lyapunov condition for stability.
Our approach, called LyaNet, is based on a novel Lyapunov loss formulation that encourages the inference dynamics to converge quickly to the correct prediction.
arXiv Detail & Related papers (2022-02-05T10:13:14Z) - Efficiently Solving High-Order and Nonlinear ODEs with Rational Fraction
Polynomial: the Ratio Net [3.155317790896023]
This study takes a different approach by introducing a neural network architecture for constructing trial functions, known as the ratio net.
Empirical trials demonstrate that the proposed method is more efficient than existing approaches.
The ratio net holds promise for advancing the efficiency and effectiveness of solving differential equations.
arXiv Detail & Related papers (2021-05-18T16:59:52Z) - Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees.
arXiv Detail & Related papers (2020-03-19T17:21:02Z) - Neural Proximal/Trust Region Policy Optimization Attains Globally
Optimal Policy [119.12515258771302]
We show that a variant of PPO equipped with over-parametrization converges to a globally optimal policy.
The key to our analysis is the global convergence of infinite-dimensional mirror descent under a notion of one-point monotonicity, where the gradient and iterate are instantiated by neural networks.
arXiv Detail & Related papers (2019-06-25T03:20:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.