Automated differential equation solver based on the parametric
approximation optimization
- URL: http://arxiv.org/abs/2205.05383v1
- Date: Wed, 11 May 2022 10:06:47 GMT
- Title: Automated differential equation solver based on the parametric
approximation optimization
- Authors: Alexander Hvatov and Tatiana Tikhonova
- Abstract summary: The article presents a method that uses an optimization algorithm to obtain a solution from a parameterized approximation.
It allows solving a wide class of equations in an automated manner without changing the algorithm's parameters.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerical methods for differential equations produce a discrete
field that converges towards the solution, provided the method is applied to a
suitable problem. Nevertheless, each numerical method is proved to converge,
for a given parameter set or range, only on a restricted class of equations.
Only a few "cheap and dirty" numerical methods converge on a wide class of
equations without parameter tuning, at the price of a lower approximation
order. The article presents a method that uses an optimization algorithm to
obtain a solution from a parameterized approximation. The result may not be as
precise as an expert-tuned one. However, the method solves a wide class of
equations in an automated manner, without changing the algorithm's parameters.
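The general idea behind such parametric solvers can be sketched in a few lines: choose a parameterized ansatz for the solution, evaluate the equation residual on a collocation grid, and minimize that residual with a generic optimizer. The sketch below is illustrative only, not the authors' implementation; the polynomial ansatz, the test equation u'(t) = -u(t) with u(0) = 1, the grid, and the BFGS optimizer are all assumed choices made here for a minimal runnable example.

```python
import numpy as np
from scipy.optimize import minimize

# Collocation grid on [0, 1].
t = np.linspace(0.0, 1.0, 50)

def u(c, t):
    # Polynomial ansatz u(t; c) = 1 + sum_k c_k t^(k+1);
    # the constant term enforces the initial condition u(0) = 1.
    return 1.0 + sum(ck * t**(k + 1) for k, ck in enumerate(c))

def du(c, t):
    # Analytic derivative of the ansatz with respect to t.
    return sum((k + 1) * ck * t**k for k, ck in enumerate(c))

def loss(c):
    # Mean squared residual of u' + u = 0 on the collocation grid.
    r = du(c, t) + u(c, t)
    return np.mean(r**2)

# Optimize the ansatz parameters with a generic gradient-based method.
res = minimize(loss, np.zeros(5), method="BFGS")

# Compare against the exact solution u(t) = exp(-t).
max_err = np.max(np.abs(u(res.x, t) - np.exp(-t)))
print(max_err)
```

Because the equation never enters the optimizer except through the residual, the same loop applies to any equation whose residual can be evaluated, which is the sense in which such methods trade expert-level precision for automation.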
Related papers
- Estimating unknown parameters in differential equations with a reinforcement learning based PSO method [2.9808905403445145]
This paper reformulates the parameter estimation problem of differential equations as an optimization problem by introducing the concept of particles.
Building on reinforcement learning-based particle swarm optimization (RLLPSO), this paper proposes a novel method, DERLPSO, for estimating unknown parameters of differential equations.
The experimental results demonstrate that our DERLPSO consistently outperforms other methods in terms of performance, achieving an average Mean Square Error of 1.13e-05.
arXiv Detail & Related papers (2024-11-13T14:40:51Z) - HOUND: High-Order Universal Numerical Differentiator for a Parameter-free Polynomial Online Approximation [0.0]
This paper introduces a numerical differentiator, represented as a system of nonlinear differential equations of any high order.
We demonstrate that, with a suitable choice of differentiator order, the error converges to zero for signals with additive white noise.
A notable advantage of this numerical differentiation is that it does not require tuning parameters based on the specific characteristics of the signal being differentiated.
arXiv Detail & Related papers (2024-10-18T13:42:01Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - Symbolic Recovery of Differential Equations: The Identifiability Problem [52.158782751264205]
Symbolic recovery of differential equations is the ambitious attempt at automating the derivation of governing equations.
We provide both necessary and sufficient conditions for a function to uniquely determine the corresponding differential equation.
We then use our results to devise numerical algorithms aiming to determine whether a function solves a differential equation uniquely.
arXiv Detail & Related papers (2022-10-15T17:32:49Z) - Numerical Solution of Stiff Ordinary Differential Equations with Random
Projection Neural Networks [0.0]
We propose a numerical scheme based on Random Projection Neural Networks (RPNN) for the solution of Ordinary Differential Equations (ODEs).
We show that our proposed scheme yields good numerical approximation accuracy without being affected by the stiffness, in some cases outperforming the ode45 and ode15s functions.
arXiv Detail & Related papers (2021-08-03T15:49:17Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Sparse Approximate Solutions to Max-Plus Equations with Application to
Multivariate Convex Regression [34.99564569478268]
We show how one can obtain such solutions efficiently and in minimum time for any $\ell_p$ approximation error.
We propose a novel method for piecewise fitting of convex functions, with optimality guarantees and approximately sparse affine regions.
arXiv Detail & Related papers (2020-11-06T15:17:00Z) - Implicit differentiation of Lasso-type models for hyperparameter
optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.