Automated differential equation solver based on the parametric
approximation optimization
- URL: http://arxiv.org/abs/2205.05383v1
- Date: Wed, 11 May 2022 10:06:47 GMT
- Title: Automated differential equation solver based on the parametric
approximation optimization
- Authors: Alexander Hvatov and Tatiana Tikhonova
- Abstract summary: The article presents a method that uses an optimization algorithm to obtain a solution via a parameterized approximation.
It allows a wide class of equations to be solved in an automated manner without changing the algorithm's parameters.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerical methods for solving differential equations yield a
discrete field that converges towards the solution, provided the method is
applied to a suitable problem. Nevertheless, each numerical method is proven to
converge only on a restricted class of equations, for a given parameter set or
range. Only a few "cheap and dirty" numerical methods converge on a
wide class of equations without parameter tuning, at the price of a lower
approximation order. The article presents a method that uses an optimization
algorithm to obtain a solution via a parameterized approximation. The result
may not be as precise as an expert-tuned one. However, it allows a wide class
of equations to be solved in an automated manner without changing the
algorithm's parameters.
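As a minimal illustration of the idea (a sketch, not the authors' implementation), the snippet below solves the test problem u'(t) = -u(t), u(0) = 1 on [0, 1] by gradient descent on the squared equation residual of a parameterized trial solution; the specific equation, the cubic polynomial basis, and the optimizer settings are all assumptions made for this example.

```python
import math

# Trial solution u(t) = 1 + c0*t + c1*t^2 + c2*t^3; the initial
# condition u(0) = 1 is built into the parameterization.
def u(c, t):
    return 1.0 + c[0] * t + c[1] * t * t + c[2] * t ** 3

def residual(c, t):
    # Residual of the equation u'(t) + u(t) = 0 for the trial polynomial.
    du = c[0] + 2.0 * c[1] * t + 3.0 * c[2] * t * t
    return du + u(c, t)

def solve(n_points=21, lr=0.05, steps=15000):
    # Collocation points on [0, 1] where the residual is penalized.
    ts = [i / (n_points - 1) for i in range(n_points)]
    c = [0.0, 0.0, 0.0]
    for _ in range(steps):
        grad = [0.0, 0.0, 0.0]
        for t in ts:
            r = residual(c, t)
            # Analytic d(residual)/dc_k for the basis terms t, t^2, t^3.
            grad[0] += 2.0 * r * (1.0 + t)
            grad[1] += 2.0 * r * (2.0 * t + t * t)
            grad[2] += 2.0 * r * (3.0 * t * t + t ** 3)
        c = [ck - lr * g / n_points for ck, g in zip(c, grad)]
    return c

c = solve()
# Maximum deviation from the exact solution exp(-t) on a test grid.
err = max(abs(u(c, t) - math.exp(-t)) for t in [i / 10 for i in range(11)])
```

The same loop solves other equations in this family by swapping the residual, with no retuning of the optimizer, which is the automation the abstract emphasizes.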
Related papers
- Annealing-based approach to solving partial differential equations [0.0]
The proposed algorithm allows the computation of eigenvectors at arbitrary precision without increasing the number of variables using an Ising machine.
Simple examples solved using this method and theoretical analysis provide a guideline for appropriate parameter settings.
arXiv Detail & Related papers (2024-06-25T08:30:00Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - Towards a machine learning pipeline in reduced order modelling for
inverse problems: neural networks for boundary parametrization,
dimensionality reduction and solution manifold approximation [0.0]
Inverse problems, especially in a partial differential equation context, require a huge computational load.
We apply a numerical pipeline that involves artificial neural networks to parametrize the boundary conditions of the problem at hand.
It derives a general framework capable of providing an ad-hoc parametrization of the inlet boundary and quickly converges to the optimal solution.
arXiv Detail & Related papers (2022-10-26T14:53:07Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and that the iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - A Globally Convergent Gradient-based Bilevel Hyperparameter Optimization
Method [0.0]
We propose a gradient-based bilevel method for solving the hyperparameter optimization problem.
We show that the proposed method converges with lower computation and leads to models that generalize better on the testing set.
arXiv Detail & Related papers (2022-08-25T14:25:16Z) - Numerical Solution of Stiff Ordinary Differential Equations with Random
Projection Neural Networks [0.0]
We propose a numerical scheme based on Random Projection Neural Networks (RPNN) for the solution of Ordinary Differential Equations (ODEs).
We show that our proposed scheme yields good numerical approximation accuracy without being affected by the stiffness, thus outperforming in some cases the MATLAB functions ode45 and ode15s.
arXiv Detail & Related papers (2021-08-03T15:49:17Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Sparse Approximate Solutions to Max-Plus Equations with Application to
Multivariate Convex Regression [34.99564569478268]
We show how one can obtain such solutions efficiently and in minimum time for any $\ell_p$ approximation error.
We propose a novel method for piecewise fitting of convex functions, with optimality guarantees and an approximately minimum number of affine regions.
arXiv Detail & Related papers (2020-11-06T15:17:00Z) - Implicit differentiation of Lasso-type models for hyperparameter
optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.