Combining Neural Networks and Symbolic Regression for Analytical Lyapunov Function Discovery
- URL: http://arxiv.org/abs/2406.15675v3
- Date: Fri, 12 Jul 2024 20:08:46 GMT
- Title: Combining Neural Networks and Symbolic Regression for Analytical Lyapunov Function Discovery
- Authors: Jie Feng, Haohan Zou, Yuanyuan Shi
- Abstract summary: We propose CoNSAL (Combining Neural networks and Symbolic regression for Analytical Lyapunov function) to construct analytical Lyapunov functions for nonlinear dynamic systems.
This framework contains a neural Lyapunov function and a symbolic regression component, where symbolic regression is applied to distill the neural network to precise analytical forms.
- Score: 3.803654983282309
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose CoNSAL (Combining Neural networks and Symbolic regression for Analytical Lyapunov function) to construct analytical Lyapunov functions for nonlinear dynamic systems. This framework contains a neural Lyapunov function and a symbolic regression component, where symbolic regression is applied to distill the neural network to precise analytical forms. Our approach utilizes symbolic regression not only as a tool for translation but also as a means to uncover counterexamples. This procedure terminates when no counterexamples are found in the analytical formulation. Compared with previous results, CoNSAL directly produces an analytical form of the Lyapunov function with improved interpretability in both the learning process and the final results. We apply CoNSAL to 2-D inverted pendulum, path following, Van Der Pol Oscillator, 3-D trig dynamics, 4-D rotating wheel pendulum, 6-D 3-bus power system, and demonstrate that our algorithm successfully finds their valid Lyapunov functions. Code examples are available at https://github.com/HaohanZou/CoNSAL.
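The abstract describes a counterexample-guided loop: train a neural Lyapunov candidate, distill it into an analytical form with symbolic regression, search the analytical form for counterexamples, and stop once none are found. The sketch below mirrors only that loop structure, not the paper's implementation: the neural network and the symbolic-regression stage are replaced by a quadratic template V(x) = x^T P x fitted with a hinge-style Lyapunov risk, the dynamics are an illustrative damped pendulum, and all function names and constants are hypothetical.
```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Illustrative dynamics: damped pendulum, x = (theta, omega).
    return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])

def V(x, P):
    return x @ P @ x              # analytical candidate V(x) = x^T P x

def Vdot(x, P):
    return 2.0 * x @ P @ f(x)     # dV/dt along trajectories of f

def find_counterexample(P, n=4000, r=1.0, tol=1e-3):
    # Falsification stage: sample the region and return a state that violates
    # V(x) > 0 or dV/dt(x) < 0 away from the origin (None if no violation found).
    for x in rng.uniform(-r, r, size=(n, 2)):
        if x @ x < 1e-4:
            continue
        if V(x, P) <= tol * (x @ x) or Vdot(x, P) >= -tol * (x @ x):
            return x
    return None

def fit(P, data, lr=0.05, steps=200, margin=0.05):
    # Stand-in "learner": minimize a hinge-style Lyapunov risk over the samples.
    for _ in range(steps):
        grad = np.zeros((2, 2))
        for x in data:
            if V(x, P) < margin * (x @ x):          # positivity violated
                grad -= np.outer(x, x)
            if Vdot(x, P) > -margin * (x @ x):      # decrease violated
                fx = f(x)
                grad += np.outer(x, fx) + np.outer(fx, x)
        P = P - lr * grad / len(data)
    return P

P = np.eye(2)
data = list(rng.uniform(-1.0, 1.0, size=(200, 2)))
for it in range(30):
    cex = find_counterexample(P)
    if cex is None:                 # terminate: no counterexample in the region
        print("analytical Lyapunov candidate found: V(x) = x^T P x, P =\n", P)
        break
    data.append(cex)                # counterexample-guided refinement
    P = fit(P, data)
```
In the actual pipeline the candidate comes from a trained network and the analytical form from a symbolic regression step, but the terminate-when-no-counterexample structure is the same.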
Related papers
- The Prevalence of Neural Collapse in Neural Multivariate Regression [3.691119072844077]
Neural networks have been shown to exhibit Neural Collapse (NC) during the final stage of training for classification problems.
To our knowledge, this is the first empirical and theoretical study of neural collapse in the context of regression.
arXiv Detail & Related papers (2024-09-06T10:45:58Z)
- Physics-Informed Neural Network Lyapunov Functions: PDE Characterization, Learning, and Verification [4.606000847428821]
We show that using the Zubov equation in training neural Lyapunov functions can lead to approximate regions of attraction close to the true domain of attraction.
We then provide sufficient conditions for the learned neural Lyapunov functions that can be readily verified by satisfiability modulo theories.
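As a rough illustration of the physics-informed training idea in that paper, the sketch below penalizes the residual of one classical form of the Zubov equation, grad W(x) . f(x) = -h(x)(1 - W(x)), together with the condition W(0) = 0, on an illustrative damped-pendulum system; the network architecture, sampling box, and loss weights are assumptions, and the SMT verification step is not shown.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def f(x):
    # Illustrative dynamics: damped pendulum, x = (theta, omega).
    theta, omega = x[:, :1], x[:, 1:]
    return torch.cat([omega, -torch.sin(theta) - 0.5 * omega], dim=1)

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = (torch.rand(256, 2) * 4 - 2).requires_grad_(True)       # training box [-2, 2]^2
    W = net(x)                                                   # W(x) in (0, 1)
    gradW = torch.autograd.grad(W.sum(), x, create_graph=True)[0]
    h = (x * x).sum(dim=1, keepdim=True)                         # positive definite h(x)
    # Residual of one classical form of the Zubov equation: grad(W) . f = -h (1 - W)
    residual = (gradW * f(x)).sum(dim=1, keepdim=True) + h * (1 - W)
    loss = residual.pow(2).mean() + net(torch.zeros(1, 2)).pow(2).sum()   # PDE + W(0) = 0
    opt.zero_grad()
    loss.backward()
    opt.step()
```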
arXiv Detail & Related papers (2023-12-14T17:01:58Z)
- Approximation of Nonlinear Functionals Using Deep ReLU Networks [7.876115370275732]
We investigate the approximation power of functional deep neural networks associated with the rectified linear unit (ReLU) activation function.
In addition, we establish rates of approximation of the proposed functional deep ReLU networks under mild regularity conditions.
arXiv Detail & Related papers (2023-04-10T08:10:11Z)
- Nonparametric regression with modified ReLU networks [77.34726150561087]
We consider regression estimation with modified ReLU neural networks, in which network weight matrices are first modified by a function $\alpha$ before being multiplied by input vectors.
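The summary does not specify the modification $\alpha$, so the sketch below only illustrates the architectural pattern, applying a hypothetical $\alpha$ (entrywise soft-thresholding) to the weight matrix before it multiplies the input.
```python
import torch
import torch.nn as nn

def alpha(weight, tau=0.01):
    # Hypothetical choice of the modification: entrywise soft-thresholding of the weights.
    return torch.sign(weight) * torch.clamp(weight.abs() - tau, min=0.0)

class ModifiedLinear(nn.Module):
    """Linear layer whose weight matrix is passed through alpha before use."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        return x @ alpha(self.weight).T + self.bias

model = nn.Sequential(ModifiedLinear(5, 64), nn.ReLU(), ModifiedLinear(64, 1))
y = model(torch.randn(8, 5))   # forward pass on a batch of 8 inputs
```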
arXiv Detail & Related papers (2022-07-17T21:46:06Z)
- Level set learning with pseudo-reversible neural networks for nonlinear dimension reduction in function approximation [8.28646586439284]
We propose a new method of Dimension Reduction via Learning Level Sets (DRiLLS) for function approximation.
Our method contains two major components: one is the pseudo-reversible neural network (PRNN) module that effectively transforms high-dimensional input variables to low-dimensional active variables; the other is a synthesized regression module that approximates the function values on the transformed low-dimensional data.
The PRNN not only relaxes the invertibility constraint of the nonlinear transformation present in the NLL method due to the use of RevNet, but also adaptively weights the influence of each sample and controls the sensitivity of the function to the learned active variables.
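A minimal sketch of that two-component structure, under the assumption that pseudo-reversibility can be encouraged with a soft reconstruction penalty instead of an exact RevNet inverse; the adaptive sample weighting mentioned above is omitted, and the toy target, architectures, and penalty weight are illustrative.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, k = 10, 2                                  # input dimension, number of active variables
enc = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, k))   # to active variables
dec = nn.Sequential(nn.Linear(k, 64), nn.Tanh(), nn.Linear(64, d))   # approximate inverse
reg = nn.Sequential(nn.Linear(k, 64), nn.Tanh(), nn.Linear(64, 1))   # regression on z
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(), *reg.parameters()], lr=1e-3)

def target(x):
    # Toy target that depends on the inputs only through two hidden directions.
    return torch.sin(x[:, :1] + x[:, 1:2]) + (x[:, 2:3] - x[:, 3:4]) ** 2

for step in range(1000):
    x = torch.randn(256, d)
    z = enc(x)                                   # low-dimensional active variables
    recon = ((dec(z) - x) ** 2).mean()           # pseudo-reversibility penalty
    fit = ((reg(z) - target(x)) ** 2).mean()     # fit the function on the active variables
    loss = fit + 0.1 * recon
    opt.zero_grad()
    loss.backward()
    opt.step()
```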
arXiv Detail & Related papers (2021-12-02T17:25:34Z)
- Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z)
- Automated and Sound Synthesis of Lyapunov Functions with SMT Solvers [70.70479436076238]
We synthesise Lyapunov functions for linear, non-linear (polynomial) and parametric models.
We exploit an inductive framework to synthesise Lyapunov functions, starting from parametric templates.
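The verifier half of such an inductive synthesis loop can be posed as a satisfiability query. The sketch below checks one instance of a quadratic template against an illustrative stable linear system with Z3, asking the solver for a state that violates the Lyapunov conditions; the system matrix and template coefficients are assumptions standing in for what a learner stage would propose.
```python
from z3 import Reals, Solver, And, Or, unsat

x1, x2 = Reals('x1 x2')

# Illustrative stable linear system dx/dt = A x with A = [[0, 1], [-2, -3]].
f1 = x2
f2 = -2 * x1 - 3 * x2

# One instance of a quadratic template V = p1*x1^2 + p2*x1*x2 + p3*x2^2,
# with coefficients a learner stage might propose (fixed here by hand).
V = 3 * x1 * x1 + x1 * x2 + x2 * x2
Vdot = (6 * x1 + x2) * f1 + (x1 + 2 * x2) * f2   # gradient of V dotted with f

s = Solver()
in_region = And(x1 >= -1, x1 <= 1, x2 >= -1, x2 <= 1, Or(x1 != 0, x2 != 0))
s.add(in_region, Or(V <= 0, Vdot >= 0))          # ask for a violating state

if s.check() == unsat:
    print("V is a Lyapunov function on the box (no counterexample exists)")
else:
    print("counterexample:", s.model())
```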
arXiv Detail & Related papers (2020-07-21T14:45:23Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
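A rough sketch of the min-max formulation on toy instrumental-variable style data: the structural function g and the adversarial test function f are both small networks, and the objective E[(y - g(x)) f(z)] - 0.5 E[f(z)^2] is a common adversarial reformulation of a conditional moment restriction, not necessarily the paper's exact objective; the data-generating process, architectures, and step sizes are assumptions.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
g = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # structural function
f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # adversarial test function
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)

for step in range(2000):
    # Toy data with an instrument z, treatment x, and outcome y (illustrative only).
    z = torch.randn(256, 1)
    x = z + 0.3 * torch.randn(256, 1)
    y = 2.0 * x + 0.1 * torch.randn(256, 1)

    # Min-max objective for the moment restriction E[y - g(x) | z] = 0:
    #   min_g max_f  E[(y - g(x)) f(z)] - 0.5 E[f(z)^2]
    def objective():
        return ((y - g(x)) * f(z)).mean() - 0.5 * f(z).pow(2).mean()

    opt_f.zero_grad()
    (-objective()).backward()   # gradient ascent step on the test function f
    opt_f.step()

    opt_g.zero_grad()
    objective().backward()      # gradient descent step on the structural function g
    opt_g.step()
```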
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
- Measuring Model Complexity of Neural Networks with Curve Activation Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation functions.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L_1$ and $L_2$ regularizations suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z)