Accelerating numerical methods by gradient-based meta-solving
- URL: http://arxiv.org/abs/2206.08594v1
- Date: Fri, 17 Jun 2022 07:31:18 GMT
- Title: Accelerating numerical methods by gradient-based meta-solving
- Authors: Sohei Arisaka, Qianxiao Li
- Abstract summary: In science and engineering applications, it is often required to solve similar computational problems repeatedly.
We propose a gradient-based algorithm to solve them in a unified way.
We demonstrate the performance and versatility of our method through theoretical analysis and numerical experiments.
- Score: 15.90188271828615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In science and engineering applications, it is often required to solve
similar computational problems repeatedly. In such cases, we can utilize the
data from previously solved problem instances to improve the efficiency of
finding subsequent solutions. This offers a unique opportunity to combine
machine learning (in particular, meta-learning) and scientific computing. To
date, a variety of such domain-specific methods have been proposed in the
literature, but a generic approach for designing these methods remains
under-explored. In this paper, we tackle this issue by formulating a general
framework to describe these problems, and propose a gradient-based algorithm to
solve them in a unified way. As an illustration of this approach, we study the
adaptive generation of parameters for iterative solvers to accelerate the
solution of differential equations. We demonstrate the performance and
versatility of our method through theoretical analysis and numerical
experiments, including applications to incompressible flow simulations and an
inverse problem of parameter estimation.
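As a toy illustration of the idea (not the paper's actual algorithm, which learns solver parameters with neural networks), one can meta-learn a single relaxation parameter of weighted Jacobi across a distribution of similar problem instances by gradient descent on the residual after a fixed number of iterations; the finite-difference gradient below stands in for backpropagation through the solver, and all problem sizes and step sizes are illustrative assumptions:

```python
import numpy as np

def weighted_jacobi(A, b, omega, n_iter=20):
    """Run n_iter weighted-Jacobi sweeps from x = 0; return the final residual norm."""
    D = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + omega * (b - A @ x) / D
    return np.linalg.norm(b - A @ x)

def sample_problem(rng, n=20):
    """A random diagonally dominant SPD system standing in for one problem instance."""
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)
    b = rng.standard_normal(n)
    return A, b

rng = np.random.default_rng(0)
omega, lr, eps = 0.5, 0.05, 1e-4
for step in range(200):
    A, b = sample_problem(rng)
    # finite-difference surrogate for the gradient of the post-iteration residual
    g = (weighted_jacobi(A, b, omega + eps) - weighted_jacobi(A, b, omega - eps)) / (2 * eps)
    omega -= lr * g
```

The learned `omega` is then reused on fresh instances from the same distribution, which is the "utilize data from previously solved instances" pattern the abstract describes.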
Related papers
- Differentiable Programming for Differential Equations: A Review [36.67198631261628]
Differentiable programming is a cornerstone of modern scientific computing.
Differentiating functions based on the numerical solution of differential equations is non-trivial.
We provide a comprehensive review of existing techniques to compute derivatives of numerical solutions of differential equations.
arXiv Detail & Related papers (2024-06-14T03:54:25Z)
- Reverse em-problem based on Bregman divergence and its application to classical and quantum information theory [53.64687146666141]
A recent paper introduced an analytical method for calculating the channel capacity without the need for iteration.
We turn our attention to the reverse em-problem, proposed by Toyota.
We derive a non-iterative formula for the reverse em-problem.
arXiv Detail & Related papers (2024-03-14T10:20:28Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- A Block-Coordinate Approach of Multi-level Optimization with an Application to Physics-Informed Neural Networks [0.0]
We propose a multi-level algorithm for the solution of nonlinear optimization problems and analyze its evaluation complexity.
We apply it to the solution of partial differential equations using physics-informed neural networks (PINNs) and show on a few test problems that the approach results in better solutions and significant computational savings.
arXiv Detail & Related papers (2023-05-23T19:12:02Z)
- SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities [137.6408511310322]
We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems, the proposed SARAH-based method achieves linear convergence to a solution.
arXiv Detail & Related papers (2022-10-12T08:04:48Z)
- A Deep Gradient Correction Method for Iteratively Solving Linear Systems [5.744903762364991]
We present a novel approach to approximate the solution of large, sparse, symmetric, positive-definite linear systems of equations.
Our algorithm is capable of reducing the linear system residual to a given tolerance in a small number of iterations.
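The paper's learned correction term is not reproduced here; for context, the sketch below shows the classical conjugate-gradient iteration for sparse SPD systems that such learned methods aim to accelerate, with the test matrix and tolerance chosen purely for illustration:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=500):
    """Classical CG for SPD systems: the baseline iteration that
    learned correction terms aim to accelerate."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p          # update the iterate along the search direction
        r -= alpha * Ap         # keep the residual in sync without recomputing b - A @ x
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate the next search direction
        rs = rs_new
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)   # well-conditioned SPD test matrix
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
```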
arXiv Detail & Related papers (2022-05-22T06:40:38Z)
- Automated differential equation solver based on the parametric approximation optimization [77.34726150561087]
The article presents a method that uses an optimization algorithm to obtain a solution via a parameterized approximation.
It allows solving a wide class of equations in an automated manner without changing the algorithm's parameters.
arXiv Detail & Related papers (2022-05-11T10:06:47Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Inverse Reinforcement Learning with Explicit Policy Estimates [19.159290496678004]
Various methods for solving the inverse reinforcement learning problem have been developed independently in machine learning and economics.
We show that they all belong to a class of optimization problems, characterized by a common form of gradient, the associated policy and the objective.
Using insights which emerge from our study of this class of optimization problems, we identify various problem scenarios and investigate each method's suitability for these problems.
arXiv Detail & Related papers (2021-03-04T07:00:58Z)
- Consistency analysis of bilevel data-driven learning in inverse problems [1.0705399532413618]
We consider the adaptive learning of the regularization parameter from data by means of optimization.
We demonstrate how to implement our framework on linear inverse problems.
Online numerical schemes are derived using the gradient descent method.
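A minimal sketch of this bilevel setup, under illustrative assumptions (a random overdetermined forward operator, Tikhonov regularization, and a finite-difference gradient in place of the paper's derived schemes): the lower level solves the regularized least-squares problem, and the upper level adapts the regularization parameter by gradient descent on the recovery error.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 60
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true + 0.5 * rng.standard_normal(m)   # noisy observations

def tikhonov(lam):
    """Lower level: regularized least-squares solution for a given lambda."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def outer_loss(lam):
    """Upper level: distance of the recovery to the ground truth."""
    return np.sum((tikhonov(lam) - x_true) ** 2)

# gradient descent on log(lambda), with a finite-difference gradient
log_lam, lr, eps = 0.0, 0.1, 1e-4
for _ in range(100):
    g = (outer_loss(np.exp(log_lam + eps)) - outer_loss(np.exp(log_lam - eps))) / (2 * eps)
    log_lam -= lr * g
```

Optimizing in log space keeps lambda positive without an explicit constraint.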
arXiv Detail & Related papers (2020-07-06T12:23:29Z)
- GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
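For reference, the classic cross-entropy method that GACEM generalizes can be sketched as follows; this keeps the standard single-mode Gaussian sampler (the paper replaces it with a masked autoregressive network so that several disjoint solution regions can be tracked), and the objective and hyperparameters are illustrative assumptions:

```python
import numpy as np

def cem_maximize(f, dim, n_samples=100, n_elite=10, n_iter=50, seed=0):
    """Plain cross-entropy method with a diagonal Gaussian sampler."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(n_iter):
        xs = mu + sigma * rng.standard_normal((n_samples, dim))
        # keep the n_elite highest-scoring samples and refit the sampler to them
        elite = xs[np.argsort([f(x) for x in xs])[-n_elite:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# maximize a smooth toy objective with its optimum at x = (1, ..., 1)
best = cem_maximize(lambda x: -np.sum((x - 1.0) ** 2), dim=5)
```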
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.