Personalized Algorithm Generation: A Case Study in Meta-Learning ODE
Integrators
- URL: http://arxiv.org/abs/2105.01303v1
- Date: Tue, 4 May 2021 05:42:33 GMT
- Title: Personalized Algorithm Generation: A Case Study in Meta-Learning ODE
Integrators
- Authors: Yue Guo, Felix Dietrich, Tom Bertalan, Danimir T. Doncevic, Manuel
Dahmen, Ioannis G. Kevrekidis, Qianxiao Li
- Abstract summary: We study the meta-learning of numerical algorithms for scientific computing.
We develop a machine learning approach that automatically learns solvers for initial value problems.
- Score: 6.457555233038933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the meta-learning of numerical algorithms for scientific computing,
which combines the mathematically driven, handcrafted design of general
algorithm structure with a data-driven adaptation to specific classes of tasks.
This represents a departure from the classical approaches in numerical
analysis, which typically do not feature such learning-based adaptations. As a
case study, we develop a machine learning approach that automatically learns
effective solvers for initial value problems in the form of ordinary
differential equations (ODEs), based on the Runge-Kutta (RK) integrator
architecture. By combining neural network approximations and meta-learning, we
show that we can obtain high-order integrators for targeted families of
differential equations without the need for computing integrator coefficients
by hand. Moreover, we demonstrate that in certain cases we can obtain superior
performance to classical RK methods. This can be attributed to certain
properties of the ODE families being identified and exploited by the approach.
Overall, this work demonstrates an effective, learning-based approach to the
design of algorithms for the numerical solution of differential equations, an
approach that can be readily extended to other numerical tasks.
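The core idea can be sketched in a few lines: parameterize the Butcher tableau of an explicit Runge-Kutta step and treat its entries as learnable parameters that meta-learning would fit to a family of ODEs. This is a hypothetical illustration, not the authors' code; the tableau values, function names, and the midpoint-method example below are chosen for the sketch.

```python
import math

# Minimal sketch (assumption: not the paper's implementation) of a
# learnable explicit two-stage Runge-Kutta step. The tableau entries
# theta = (c2, a21, b1, b2) are the "parameters" a meta-learner would
# optimize over sampled ODEs instead of deriving by hand.

def rk2_step(f, t, y, h, theta):
    """One explicit 2-stage RK step with Butcher entries (c2, a21, b1, b2)."""
    c2, a21, b1, b2 = theta
    k1 = f(t, y)
    k2 = f(t + c2 * h, y + h * a21 * k1)
    return y + h * (b1 * k1 + b2 * k2)

# The classical midpoint method is one fixed point of this parameterization;
# a learned integrator would replace these numbers with fitted values.
MIDPOINT = (0.5, 0.5, 0.0, 1.0)

def integrate(f, t0, y0, h, n, theta):
    t, y = t0, y0
    for _ in range(n):
        y = rk2_step(f, t, y, h, theta)
        t += h
    return y

# Sanity check on dy/dt = y, y(0) = 1, whose exact solution gives y(1) = e.
approx = integrate(lambda t, y: y, 0.0, 1.0, 0.01, 100, MIDPOINT)
```

In the paper's setting, such coefficients (together with neural-network components) are trained on trajectories from a targeted family of equations, so the learned tableau can exploit structure that a general-purpose method cannot.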
Related papers
- A resource-efficient model for deep kernel learning [0.0]
There are various approaches for accelerating learning computations with minimal loss of accuracy.
We describe a model-level decomposition approach that combines both the decomposition of the operators and the decomposition of the network.
We perform a feasibility analysis on the resulting algorithm, both in terms of its accuracy and scalability.
arXiv Detail & Related papers (2024-10-13T17:11:42Z)
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to construct learnable parametric control variates functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand.
We apply our method to solve partial differential equations using the walk-on-spheres algorithm.
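The anti-derivative trick behind this control-variate construction can be illustrated with a toy surrogate in place of a neural network (the surrogate, its coefficients, and the integrand below are illustrative assumptions, not the paper's model): if G approximates the anti-derivative of the integrand f, then G' is a control variate whose integral G(b) - G(a) is known exactly, and Monte Carlo only needs to estimate the low-variance residual f - G'.

```python
import math
import random

def f(x):
    # Integrand whose integral over [0, 1] we estimate (exact value: e - 1).
    return math.exp(x)

# Toy "network" standing in for a learned anti-derivative:
# G(x) = a*x + b*x^2/2, so G'(x) = a + b*x. The coefficients are
# hand-picked here; in the paper they come from training.
a, b = 1.0, 1.7
G = lambda x: a * x + b * x * x / 2.0
dG = lambda x: a + b * x

def mc_estimates(n, seed=0):
    """Plain MC estimate vs. control-variate estimate of the integral."""
    rng = random.Random(seed)
    plain = cv = 0.0
    for _ in range(n):
        x = rng.random()
        fx = f(x)
        plain += fx
        cv += fx - dG(x)          # residual has much lower variance
    exact_part = G(1.0) - G(0.0)  # integral of G' computed in closed form
    return plain / n, cv / n + exact_part

est_plain, est_cv = mc_estimates(10_000)
```

Both estimators are unbiased; the control-variate version wins because the randomness only touches the small residual, while the bulk of the integral is handled analytically.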
arXiv Detail & Related papers (2024-09-23T06:04:28Z)
- Engineered Ordinary Differential Equations as Classification Algorithm (EODECA): thorough characterization and testing [0.9786690381850358]
We present EODECA, a novel approach at the intersection of machine learning and dynamical systems theory.
EODECA's design incorporates the ability to embed stable attractors in the phase space, enhancing reliability and allowing for reversible dynamics.
We demonstrate EODECA's effectiveness on the MNIST and Fashion MNIST datasets, achieving accuracies of $98.06\%$ and $88.21\%$, respectively.
arXiv Detail & Related papers (2023-12-22T13:34:18Z)
- Spectral methods for Neural Integral Equations [0.6993026261767287]
We introduce a framework for neural integral equations based on spectral methods.
We show various theoretical guarantees regarding the approximation capabilities of the model.
We provide numerical experiments to demonstrate the practical effectiveness of the resulting model.
arXiv Detail & Related papers (2023-12-09T19:42:36Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning, and then propose a simple yet effective numerical solver, Attr, which introduces an additive self-attention mechanism into the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms that generalize well to classical control tasks, gridworld-type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Symbolically Solving Partial Differential Equations using Deep Learning [5.1964883240501605]
We describe a neural-based method for generating exact or approximate solutions to differential equations.
Unlike other neural methods, our system returns symbolic expressions that can be interpreted directly.
arXiv Detail & Related papers (2020-11-12T22:16:03Z)
- A Neuro-Symbolic Method for Solving Differential and Functional Equations [6.899578710832262]
We introduce a method for generating symbolic expressions to solve differential equations.
Unlike existing methods, our system does not require learning a language model over symbolic mathematics.
We show how the system can be effortlessly generalized to find symbolic solutions to other mathematical tasks.
arXiv Detail & Related papers (2020-11-04T17:13:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.