Global Lyapunov functions: a long-standing open problem in mathematics, with symbolic transformers
- URL: http://arxiv.org/abs/2410.08304v1
- Date: Thu, 10 Oct 2024 18:50:10 GMT
- Title: Global Lyapunov functions: a long-standing open problem in mathematics, with symbolic transformers
- Authors: Alberto Alfarano, François Charton, Amaury Hayat
- Abstract summary: We consider a long-standing open problem in mathematics: discovering a Lyapunov function that ensures the global stability of a dynamical system.
This problem has no known general solution; algorithmic solvers exist only for some small polynomial systems.
We propose a new method for generating synthetic training samples from random solutions, and show that sequence-to-sequence transformers trained on such data perform better than algorithmic solvers and humans on polynomial systems.
- Score: 7.953947994427209
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite their spectacular progress, language models still struggle on complex reasoning tasks, such as advanced mathematics. We consider a long-standing open problem in mathematics: discovering a Lyapunov function that ensures the global stability of a dynamical system. This problem has no known general solution, and algorithmic solvers only exist for some small polynomial systems. We propose a new method for generating synthetic training samples from random solutions, and show that sequence-to-sequence transformers trained on such datasets perform better than algorithmic solvers and humans on polynomial systems, and can discover new Lyapunov functions for non-polynomial systems.
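For context, a global Lyapunov function for $\dot{x} = f(x)$ is a function $V$ with $V(0) = 0$, $V(x) > 0$ for $x \neq 0$, $V$ radially unbounded, and $\dot{V}(x) = \nabla V(x) \cdot f(x) < 0$ away from the origin. Below is a minimal SymPy sketch of checking these conditions on a textbook toy system (illustrative only; not the paper's pipeline or data):

```python
# Minimal sketch: symbolically check a Lyapunov candidate for a toy
# polynomial system (illustrative; not the paper's method or benchmark).
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)
f = [-x1**3 + x2,    # dx1/dt
     -x1 - x2**3]    # dx2/dt
V = x1**2 + x2**2    # candidate Lyapunov function

# Lie derivative dV/dt = grad(V) . f
Vdot = sp.expand(sum(sp.diff(V, v) * fi for v, fi in zip((x1, x2), f)))
print(Vdot)  # -2*x1**4 - 2*x2**4: negative away from the origin, and V > 0
             # with V -> oo as |x| -> oo, so V certifies global stability
```

The paper's data generation exploits the asymmetry between finding and checking a Lyapunov function: it starts "from random solutions", i.e., it samples a valid Lyapunov function first and derives systems it certifies, yielding (system, Lyapunov function) training pairs.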
Related papers
- MultiSTOP: Solving Functional Equations with Reinforcement Learning [56.073581097785016]
We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics.
This new methodology produces actual numerical solutions instead of bounds on them.
arXiv Detail & Related papers (2024-04-23T10:51:31Z)
- The complexity of solving a random polynomial system [3.420117005350141]
This paper gives an overview of a general algorithm used to solve a multivariate polynomial system.
We then discuss random systems and, in particular, what "random" means in this context.
We give an upper bound on both the degree of regularity and the solving degree of such random systems.
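For orientation, both quantities bound how far polynomial degrees grow during Gröbner-style elimination; a minimal SymPy sketch on a fixed small system (standing in for a random one; illustrative, not the paper's analysis):

```python
# Minimal sketch: solve a small multivariate polynomial system via a
# Groebner basis; the degrees reached along the way are what
# solving-degree bounds control.
import sympy as sp

x, y = sp.symbols("x y")
f1 = 3 * x**2 + 2 * x * y - y**2 - 1
f2 = x**2 - 4 * x + 5 * y**2 + 1

G = sp.groebner([f1, f2], x, y, order="lex")  # lex order eliminates x
print(G.exprs[-1])                            # a polynomial in y alone
print(sp.solve([f1, f2], [x, y]))             # the finitely many solutions
```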
arXiv Detail & Related papers (2023-09-07T17:14:59Z)
- Automatic Solver Generator for Systems of Laurent Polynomial Equations [1.7188280334580197]
Given a family of triangulation (Laurent polynomial) systems with the same monomial structure but varying coefficients, the goal is to find a solver that computes solutions for any member of the family as fast as possible.
We propose an automatic solver generator for such systems of Laurent polynomial equations.
arXiv Detail & Related papers (2023-07-01T12:12:52Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks can be more complex than finding a single clean algorithm.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- Sparse Polynomial Optimization: Theory and Practice [5.27013884159732]
This book presents several efforts to tackle the scalability challenge of polynomial optimization, with important scientific implications.
It provides alternative optimization schemes that scale well in terms of computational complexity.
We present sparsity-exploiting hierarchies of relaxations, for either unconstrained or constrained problems.
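For orientation (standard background, not specific to the book): the dense sum-of-squares relaxation bounds unconstrained polynomial minimization as $\min_x p(x) \ge \max\{\lambda \in \mathbb{R} : p - \lambda \in \Sigma[x]\}$, where $\Sigma[x]$ is the cone of sums of squares of polynomials. The resulting semidefinite programs grow rapidly with the number of variables and the degree, which is exactly what sparsity-exploiting hierarchies are designed to mitigate.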
arXiv Detail & Related papers (2022-08-23T18:56:05Z)
- The Dynamics of Riemannian Robbins-Monro Algorithms [101.29301565229265]
We propose a family of Riemannian algorithms generalizing and extending the seminal approximation framework of Robbins and Monro.
Compared to their Euclidean counterparts, Riemannian algorithms are much less understood due to the lack of a global linear structure on the manifold.
We provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes.
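As a reference point, the classical Euclidean Robbins-Monro scheme finds a root of $h(x) = \mathbb{E}[H(x, \xi)]$ from noisy evaluations alone; a minimal sketch (the paper's subject is the Riemannian generalization of this template, which is not shown here):

```python
# Minimal sketch of the classical (Euclidean) Robbins-Monro iteration.
import numpy as np

rng = np.random.default_rng(1)
h = lambda x: x - 2.0                     # mean field h(x) = E[H(x, xi)]; root at x = 2

x = 0.0
for n in range(1, 10_001):
    noisy = h(x) + rng.normal(scale=0.5)  # unbiased noisy oracle H(x, xi)
    x -= noisy / n                        # a_n = 1/n: sum a_n = inf, sum a_n^2 < inf
print(x)                                  # converges almost surely to the root 2.0
```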
arXiv Detail & Related papers (2022-06-14T12:30:11Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing the manually designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
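The containment of finite differences can be made concrete in a few lines (illustrative code, not the paper's implementation): a periodic second-derivative stencil is neighbor message passing with fixed weights, which the neural solver replaces with learned functions.

```python
# Minimal sketch: a finite-difference stencil as fixed-weight message passing.
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x)
dx = x[1] - x[0]

# Each node aggregates "messages" from its two neighbors with fixed
# weights [1, -2, 1] / dx^2 -- the classical stencil for u_xx.
left, right = np.roll(u, 1), np.roll(u, -1)
u_xx = (left - 2 * u + right) / dx**2

print(np.max(np.abs(u_xx + np.sin(x))))  # exact u_xx is -sin(x); error is O(dx^2)
```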
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
- Symbolic Regression using Mixed-Integer Nonlinear Optimization [9.638685454900047]
Symbolic Regression (SR) is a hard problem in machine learning.
We propose a hybrid algorithm that combines mixed-integer nonlinear optimization with explicit enumeration.
We show that our algorithm is competitive, for some synthetic data sets, with a state-of-the-art SR software and a recent physics-inspired method called AI Feynman.
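A toy sketch of the enumeration half of such a hybrid (hypothetical basis set and data; plain least squares stands in here for the mixed-integer nonlinear optimization used for constants):

```python
# Minimal sketch: symbolic regression by enumerating small basis subsets
# and fitting coefficients by least squares.
import itertools
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 3.0 * x**2 + np.sin(x)               # hidden target formula

basis = {"x": x, "x^2": x**2, "x^3": x**3,
         "sin(x)": np.sin(x), "exp(x)": np.exp(x)}

best = None
for k in (1, 2):
    for combo in itertools.combinations(basis, k):
        A = np.column_stack([basis[name] for name in combo])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = float(np.mean((A @ coef - y) ** 2))
        if best is None or err < best[0]:
            best = (err, combo, coef)

err, combo, coef = best
print("best fit:", " + ".join(f"{c:.3f}*{n}" for c, n in zip(coef, combo)))
```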
arXiv Detail & Related papers (2020-06-11T20:53:17Z)
- Learning selection strategies in Buchberger's algorithm [3.4376560669160394]
We introduce a new approach to Buchberger's algorithm that uses reinforcement learning agents to perform S-pair selection.
We then study how the difficulty of the problem depends on the choices of domain and distribution of S-pairs.
We train a policy model using proximal policy optimization (PPO) to learn S-pair selection strategies for random systems of binomial equations.
arXiv Detail & Related papers (2020-05-05T02:27:00Z)
- Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees.
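The shape of that loop fits in a short sketch (illustrative only: a grid search stands in for the symbolic verifier, and a quadratic template with perceptron-style updates stands in for the neural learner):

```python
# Minimal counterexample-guided (learner/verifier) sketch for a toy system.
import numpy as np

f = lambda z: np.array([-z[0] + z[1], -z[0] - z[1]])  # toy stable system

def feats(z):
    # V = c . v and dV/dt = c . vdot are both linear in the coefficients c
    x1, x2 = z
    f1, f2 = f(z)
    v = np.array([x1 * x1, x1 * x2, x2 * x2])
    vdot = np.array([2 * x1 * f1, x2 * f1 + x1 * f2, 2 * x2 * f2])
    return v, vdot

grid = [np.array([a, b]) for a in np.linspace(-1, 1, 15)
        for b in np.linspace(-1, 1, 15) if (a, b) != (0.0, 0.0)]
c = np.array([1.0, 2.0, -1.0])   # deliberately bad initial candidate
data = []
for rounds in range(100):
    # "Verifier": find grid points violating V > 0 or dV/dt < 0
    bad = [z for z in grid if feats(z)[0] @ c <= 0 or feats(z)[1] @ c >= 0]
    if not bad:
        break
    data += bad[:10]
    # "Learner": nudge c toward satisfying all accumulated counterexamples
    for z in data:
        v, vdot = feats(z)
        if v @ c <= 0:
            c += 0.1 * v
        if vdot @ c >= 0:
            c -= 0.1 * vdot
print("rounds:", rounds, "coefficients of (x1^2, x1*x2, x2^2):", c)
```

On this toy problem the loop typically terminates with a certificate close to $V = x_1^2 + x_2^2$; the paper's contribution is doing this with a neural $V$ and a sound symbolic verifier.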
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.