Automatic differentiation of Sylvester, Lyapunov, and algebraic Riccati equations
- URL: http://arxiv.org/abs/2011.11430v2
- Date: Tue, 24 Nov 2020 10:53:43 GMT
- Title: Automatic differentiation of Sylvester, Lyapunov, and algebraic Riccati equations
- Authors: Ta-Chu Kao and Guillaume Hennequin
- Abstract summary: These equations are used to compute infinite-horizon Gramians, solve optimal control problems in continuous or discrete time, and design observers.
We derive the forward and reverse-mode derivatives of the solutions to all three types of equations, and showcase their application on an inverse control problem.
- Score: 5.584060970507506
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sylvester, Lyapunov, and algebraic Riccati equations are the bread and butter
of control theorists. They are used to compute infinite-horizon Gramians, solve
optimal control problems in continuous or discrete time, and design observers.
While popular numerical computing frameworks (e.g., scipy) provide efficient
solvers for these equations, these solvers are still largely missing from most
automatic differentiation libraries. Here, we derive the forward and
reverse-mode derivatives of the solutions to all three types of equations, and
showcase their application on an inverse control problem.
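To illustrate the kind of derivative rule the paper derives, below is a minimal sketch (not the authors' code) of a reverse-mode rule for the continuous Lyapunov equation A X + X Aᵀ + Q = 0, built on SciPy's solver. Given an output gradient X̄, solving the adjoint Lyapunov equation Aᵀ S + S A + sym(X̄) = 0 gives Q̄ = S and Ā = 2 S X (for symmetric Q and X). All variable names are illustrative, and the finite-difference comparison at the end is only a sanity check of the formula, not a substitute for the paper's derivation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyap(A, Q):
    # Solve A X + X A^T + Q = 0. SciPy's solve_continuous_lyapunov
    # solves A X + X A^H = Q, so we pass -Q.
    return solve_continuous_lyapunov(A, -Q)

def lyap_vjp(A, X, Xbar):
    # Reverse-mode (vector-Jacobian product) for X = lyap(A, Q):
    # solve the adjoint equation A^T S + S A + sym(Xbar) = 0,
    # then Abar = 2 S X and Qbar = S.
    Xbar_sym = 0.5 * (Xbar + Xbar.T)  # only the symmetric part matters
    S = solve_continuous_lyapunov(A.T, -Xbar_sym)
    return 2.0 * S @ X, S

rng = np.random.default_rng(0)
n = 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # comfortably Hurwitz
Q = np.eye(n)

X = lyap(A, Q)                   # here: the infinite-horizon Gramian of (A, Q)
Xbar = np.ones((n, n))           # gradient of the toy loss L(X) = sum(X)
Abar, Qbar = lyap_vjp(A, X, Xbar)

# Central finite-difference check of a single entry of Abar
eps = 1e-6
E = np.zeros((n, n))
E[1, 2] = 1.0
fd = (lyap(A + eps * E, Q).sum() - lyap(A - eps * E, Q).sum()) / (2 * eps)
print(abs(fd - Abar[1, 2]))      # should be tiny
```

The same pattern (one extra solve of the same equation type with transposed coefficients) is what makes these derivatives cheap to add to an automatic differentiation library: the backward pass reuses the forward solver rather than differentiating through its internals.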
Related papers
- Explicit Solution Equation for Every Combinatorial Problem via Tensor Networks: MeLoCoToN [55.2480439325792]
We show that every combinatorial problem has an exact explicit equation that returns its solution.
We present a method to obtain an equation that exactly solves any such problem, covering inversion, constraint satisfaction, and optimization.
arXiv Detail & Related papers (2025-02-09T18:16:53Z)
- Gaussian Process Regression for Inverse Problems in Linear PDEs [7.793266750812356]
This paper introduces a computationally efficient, system-theoretic algorithm for solving inverse problems governed by linear partial differential equations (PDEs).
An example application includes identifying the wave speed from noisy data for classical wave equations, which are widely used in physics.
arXiv Detail & Related papers (2025-02-06T18:20:38Z)
- KANtrol: A Physics-Informed Kolmogorov-Arnold Network Framework for Solving Multi-Dimensional and Fractional Optimal Control Problems [0.0]
We introduce the KANtrol framework to solve optimal control problems involving continuous time variables.
We show how automatic differentiation is utilized to compute exact derivatives for integer-order dynamics.
We tackle multi-dimensional problems, including the optimal control of a 2D heat equation.
arXiv Detail & Related papers (2024-09-10T17:12:37Z)
- A Physics-Informed Machine Learning Approach for Solving Distributed Order Fractional Differential Equations [0.0]
This paper introduces a novel methodology for solving distributed-order fractional differential equations using a physics-informed machine learning framework.
By embedding the distributed-order functional equation into the support vector regression (SVR) framework, we incorporate physical laws directly into the learning process.
The effectiveness of the proposed approach is validated through a series of numerical experiments on Caputo-based distributed-order fractional differential equations.
arXiv Detail & Related papers (2024-09-05T13:20:10Z)
- MultiSTOP: Solving Functional Equations with Reinforcement Learning [56.073581097785016]
We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics.
This new methodology produces actual numerical solutions instead of bounds on them.
arXiv Detail & Related papers (2024-04-23T10:51:31Z)
- Neural Time-Reversed Generalized Riccati Equation [60.92253836775246]
Hamiltonian equations offer an interpretation of optimality through auxiliary variables known as costates.
This paper introduces a novel neural-based approach to optimal control, with the aim of working forward-in-time.
arXiv Detail & Related papers (2023-12-14T19:29:37Z)
- CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra [62.37017125812101]
We propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA.
By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory and runtime efficient numerical algorithms.
We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
arXiv Detail & Related papers (2023-09-06T14:59:38Z)
- Quantum algorithm for time-dependent differential equations using Dyson series [0.0]
We provide a quantum algorithm for solving time-dependent linear differential equations with logarithmic dependence of the complexity on the error and derivative.
Our method is to encode the Dyson series in a system of linear equations, then solve via the optimal quantum linear equation solver.
arXiv Detail & Related papers (2022-12-07T09:50:40Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax = b directly into U(1)-symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- Seeking Diverse Reasoning Logic: Controlled Equation Expression Generation for Solving Math Word Problems [21.62131402402428]
We propose a controlled equation generation solver by leveraging a set of control codes to guide the model.
Our method universally improves the performance on single-unknown (Math23K) and multiple-unknown (DRAW1K, HMWP) benchmarks.
arXiv Detail & Related papers (2022-09-21T12:43:30Z)
- Competitive Mirror Descent [67.31015611281225]
Constrained competitive optimization involves multiple agents trying to minimize conflicting objectives, subject to constraints.
We propose competitive mirror descent (CMD): a general method for solving such problems based on first order information.
As a special case we obtain a novel competitive multiplicative weights algorithm for problems on the positive cone.
arXiv Detail & Related papers (2020-06-17T22:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.