Formal Synthesis of Lyapunov Neural Networks
- URL: http://arxiv.org/abs/2003.08910v2
- Date: Wed, 24 Jun 2020 16:33:17 GMT
- Title: Formal Synthesis of Lyapunov Neural Networks
- Authors: Alessandro Abate, Daniele Ahmed, Mirco Giacobbe, and Andrea Peruffo
- Abstract summary: We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees.
- Score: 61.79595926825511
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an automatic and formally sound method for synthesising Lyapunov
functions for the asymptotic stability of autonomous non-linear systems.
Traditional methods are either analytical and require manual effort or are
numerical but lack formal soundness. Symbolic computational methods for
Lyapunov functions, which are in between, give formal guarantees but are
typically semi-automatic because they rely on the user to provide appropriate
function templates. We propose a method that finds Lyapunov functions fully
automatically, using machine learning, while also providing formal
guarantees through satisfiability modulo theories (SMT). We employ a
counterexample-guided approach where a numerical learner and a symbolic
verifier interact to construct provably correct Lyapunov neural networks
(LNNs). The learner trains a neural network that satisfies the Lyapunov
criteria for asymptotic stability over a sample set; the verifier proves via
SMT solving that the criteria are satisfied over the whole domain or augments
the sample set with counterexamples. Our method supports neural networks with
polynomial activation functions and multiple depths and widths, which display
wide learning capabilities. We demonstrate our method over several non-trivial
benchmarks and compare it favourably against a numerical optimisation-based
approach, a symbolic template-based approach, and a cognate LNN-based approach.
Our method synthesises Lyapunov functions faster and over wider spatial domains
than the alternatives, while providing stronger or equal guarantees.
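
The abstract describes a counterexample-guided (CEGIS-style) loop: a numerical learner proposes a candidate satisfying the Lyapunov criteria (V positive and its derivative along trajectories negative on the domain minus the origin) over a finite sample set, and an SMT verifier either proves the criteria over the whole domain or returns a counterexample that is added to the samples. The sketch below illustrates that loop on a toy two-dimensional linear system with a quadratic candidate V(x) = xᵀPx standing in for the paper's Lyapunov neural network, and z3 as the SMT solver; the dynamics, the gradient-based learner, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the learner/verifier (CEGIS-style) loop from the abstract,
# under simplifying assumptions: toy linear dynamics dx/dt = A x and a quadratic
# candidate V(x) = x^T P x instead of a Lyapunov neural network. Illustrative only.
import numpy as np
from z3 import Real, Solver, And, Or, sat

A = np.array([[-1.0, 2.0], [-3.0, -4.0]])  # assumed toy stable dynamics

def lyapunov_exprs(P, x):
    """Build symbolic V(x) = x^T P x and its derivative dV/dt along dx/dt = A x."""
    V = sum(float(P[i, j]) * x[i] * x[j] for i in range(2) for j in range(2))
    f = [sum(float(A[i, j]) * x[j] for j in range(2)) for i in range(2)]
    # grad V = (P + P^T) x, so dV/dt = grad V . f
    Vdot = sum(float(P[i, j] + P[j, i]) * x[j] * f[i] for i in range(2) for j in range(2))
    return V, Vdot

def verify(P, radius=1.0):
    """SMT check of the Lyapunov criteria (V > 0 and dV/dt < 0 on the domain
    minus a small ball around the origin). Returns None if proven, else a counterexample."""
    x = [Real('x0'), Real('x1')]
    V, Vdot = lyapunov_exprs(P, x)
    norm2 = x[0] * x[0] + x[1] * x[1]
    s = Solver()
    s.add(And(norm2 <= radius * radius, norm2 >= 1e-4))  # verification domain
    s.add(Or(V <= 0, Vdot >= 0))                          # negation of the criteria
    if s.check() == sat:
        m = s.model()
        # model values may be rational or algebraic; as_decimal handles both
        return np.array([float(m[v].as_decimal(12).rstrip('?')) for v in x])
    return None

def learn(samples, steps=200, lr=0.05):
    """Stand-in for the paper's neural learner: gradient steps on P that penalise
    samples where dV/dt >= 0 or V <= 0."""
    P = np.eye(2)
    for _ in range(steps):
        grad = np.zeros((2, 2))
        for xs in samples:
            if xs @ (P @ A + A.T @ P) @ xs >= 0:           # dV/dt violation
                grad += np.outer(xs, A @ xs) + np.outer(A @ xs, xs)
            if xs @ P @ xs <= 0:                            # positivity violation
                grad -= np.outer(xs, xs)
        P -= lr * grad / max(len(samples), 1)
    return P

rng = np.random.default_rng(0)
samples = list(rng.uniform(-1, 1, size=(20, 2)))
for _ in range(10):                                         # counterexample-guided loop
    P = learn(samples)
    cex = verify(P)
    if cex is None:
        print("Certified Lyapunov function V(x) = x^T P x with P =\n", P)
        break
    samples.append(cex)                                     # augment the sample set
```

In this sketch the verifier terminates the loop by proving the negated criteria unsatisfiable; otherwise each counterexample it returns enlarges the sample set the learner trains on, mirroring the learner/verifier interaction described above.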
Related papers
- The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) training is an inherently non-convex optimisation problem.
In this paper we examine convex reformulations of neural network training via Lasso models.
We show that all stationary points of the non-convex objective can be characterized as the global optimum of a subsampled convex program.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - GloptiNets: Scalable Non-Convex Optimization with Certificates [61.50835040805378]
We present a novel approach to non-convex optimization with certificates, which handles smooth functions on the hypercube or on the torus.
By exploiting the regularity of the target function, intrinsic in the decay of its spectrum, we obtain precise certificates while leveraging the advanced and powerful computational techniques developed to optimize neural networks.
arXiv Detail & Related papers (2023-06-26T09:42:59Z) - Gauss-Newton Temporal Difference Learning with Nonlinear Function Approximation [11.925232472331494]
A Gauss-Newton Temporal Difference (GNTD) learning method is proposed to solve the Q-learning problem with nonlinear function approximation.
In each iteration, our method takes one Gauss-Newton (GN) step to optimize a variant of the Mean-Squared Bellman Error (MSBE).
We validate our method via extensive experiments in several RL benchmarks, where GNTD exhibits both higher rewards and faster convergence than TD-type methods.
arXiv Detail & Related papers (2023-02-25T14:14:01Z) - UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z) - Automated and Sound Synthesis of Lyapunov Functions with SMT Solvers [70.70479436076238]
We synthesise Lyapunov functions for linear, non-linear (polynomial) and parametric models.
We exploit an inductive framework to synthesise Lyapunov functions, starting from parametric templates.
arXiv Detail & Related papers (2020-07-21T14:45:23Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized Structural Equation Models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Stable Neural Flows [15.318500611972441]
We introduce a provably stable variant of neural ordinary differential equations (neural ODEs) whose trajectories evolve on an energy functional parametrised by a neural network.
The learning procedure is cast as an optimal control problem, and an approximate solution is proposed based on adjoint sensitivity analysis.
arXiv Detail & Related papers (2020-03-18T06:27:21Z) - CounterExample Guided Neural Synthesis [12.742347465894586]
Program synthesis is difficult and methods that provide formal guarantees suffer from scalability issues.
We combine neural networks with formal reasoning to convert a logical specification into a sequence of examples that guides the neural network towards a correct solution.
Our results show that the formal reasoning based guidance improves the performance of the neural network substantially.
arXiv Detail & Related papers (2020-01-25T01:11:53Z)