From continuous-time formulations to discretization schemes: tensor
trains and robust regression for BSDEs and parabolic PDEs
- URL: http://arxiv.org/abs/2307.15496v1
- Date: Fri, 28 Jul 2023 11:44:06 GMT
- Title: From continuous-time formulations to discretization schemes: tensor
trains and robust regression for BSDEs and parabolic PDEs
- Authors: Lorenz Richter, Leon Sallandt, Nikolas Nüsken
- Abstract summary: We argue that tensor trains provide an appealing framework for parabolic PDEs.
We develop iterative schemes, which differ in terms of computational efficiency and robustness.
We demonstrate both theoretically and numerically that our methods can achieve a favorable trade-off between accuracy and computational efficiency.
- Score: 3.785123406103385
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The numerical approximation of partial differential equations (PDEs) poses
formidable challenges in high dimensions since classical grid-based methods
suffer from the so-called curse of dimensionality. Recent attempts rely on a
combination of Monte Carlo methods and variational formulations, using neural
networks for function approximation. Extending previous work (Richter et al.,
2021), we argue that tensor trains provide an appealing framework for parabolic
PDEs: The combination of reformulations in terms of backward stochastic
differential equations and regression-type methods holds the promise of
leveraging latent low-rank structures, enabling both compression and efficient
computation. Emphasizing a continuous-time viewpoint, we develop iterative
schemes, which differ in terms of computational efficiency and robustness. We
demonstrate both theoretically and numerically that our methods can achieve a
favorable trade-off between accuracy and computational efficiency. While
previous methods have been either accurate or fast, we have identified a novel
numerical strategy that can often combine both of these aspects.
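For orientation, the reformulation in terms of backward stochastic differential equations referenced above is the classical nonlinear Feynman–Kac correspondence; a standard statement (paraphrased, not quoted from the paper):

```latex
% Semilinear parabolic terminal-value problem for V(t, x):
\partial_t V + \tfrac{1}{2}\operatorname{Tr}\!\big(\sigma\sigma^\top \nabla^2 V\big)
  + b \cdot \nabla V + h\big(t, x, V, \sigma^\top \nabla V\big) = 0,
\qquad V(T, x) = g(x).

% Along the forward diffusion dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t, the processes
% Y_t = V(t, X_t) and Z_t = \sigma^\top \nabla V(t, X_t) solve the BSDE
dY_t = -h(t, X_t, Y_t, Z_t)\,dt + Z_t^\top \,dW_t,
\qquad Y_T = g(X_T).
```

Regression-type schemes then discretize in time and approximate conditional expectations such as E[Y_{t_{n+1}} + h Δt | X_{t_n}] by least squares over an ansatz class; here the ansatz is a tensor train rather than a neural network.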
Related papers
- A Training-Free Conditional Diffusion Model for Learning Stochastic Dynamical Systems [10.820654486318336]
This study introduces a training-free conditional diffusion model for learning unknown stochastic differential equations (SDEs) from data.
The proposed approach addresses key challenges in computational efficiency and accuracy for modeling SDEs.
The learned models exhibit significant improvements in predicting both short-term and long-term behaviors of unknown systems.
arXiv Detail & Related papers (2024-10-04T03:07:36Z)
- Sequential-in-time training of nonlinear parametrizations for solving time-dependent partial differential equations [21.992668884092055]
This work shows that sequential-in-time training methods can be understood broadly as either optimize-then-discretize (OtD) or discretize-then-optimize (DtO) schemes.
arXiv Detail & Related papers (2024-04-01T14:45:16Z)
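As a rough schematic of the OtD/DtO distinction (standard conventions, not quoted from the abstract), consider an evolution equation ∂_t u = F(u) with parametrization u(·; θ):

```latex
% Optimize-then-discretize (OtD): project the dynamics onto the tangent space
% of the parametrization, obtaining an ODE for the parameters, then integrate:
M(\theta)\,\dot\theta = f(\theta), \qquad
M(\theta) = \int \nabla_\theta u(x;\theta)\,\nabla_\theta u(x;\theta)^\top \,dx, \qquad
f(\theta) = \int \nabla_\theta u(x;\theta)\, F\big(u(\cdot;\theta)\big)(x)\,dx.

% Discretize-then-optimize (DtO): fix a time grid first, then solve a fitting
% problem per step (explicit Euler shown):
\theta^{n+1} \in \arg\min_\theta
  \big\| u(\cdot;\theta) - u(\cdot;\theta^n) - \Delta t\, F\big(u(\cdot;\theta^n)\big) \big\|_{L^2}^2.
```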
- Amortized Reparametrization: Efficient and Scalable Variational Inference for Latent SDEs [3.2634122554914002]
We consider the problem of inferring latent stochastic differential equations, with a time and memory cost that scales independently of the amount of data, the total length of the time series, and the stiffness of the approximate differential equations.
This is in stark contrast to typical methods for inferring latent differential equations which, despite their constant memory cost, have a time complexity that is heavily dependent on the stiffness of the approximate differential equation.
arXiv Detail & Related papers (2023-12-16T22:27:36Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or distributions with a complex mixture structure.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the resulting tessellation of the input space and approximate the multi-hypothesis target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
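A minimal sketch of the "ensemble of multiple hypotheses predictors" idea, assuming Gaussian kernels with fixed centers and widths (the paper's structured model and its training procedure are more involved):

```python
import numpy as np

def rbf_predict(x, centers, widths, weights):
    """One RBF hypothesis: f(x) = sum_k w_k * exp(-||x - c_k||^2 / (2 s_k^2))."""
    d2 = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=-1)  # (n, K)
    phi = np.exp(-d2 / (2.0 * widths ** 2))                           # (n, K)
    return phi @ weights                                              # (n,)

def multi_hypothesis_predict(x, hypotheses):
    """Each hypothesis is a (centers, widths, weights) triple; returns an
    (n, M) array of candidate outputs, one column per hypothesis."""
    return np.stack([rbf_predict(x, *h) for h in hypotheses], axis=-1)
```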
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
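A toy illustration of the fixed-point (deep equilibrium) viewpoint, with a hypothetical contractive operator T standing in for the paper's learned iterative solver (whose convergence guarantees this naive loop does not reproduce):

```python
import numpy as np

def fixed_point_solve(T, x, z0, tol=1e-8, max_iter=500):
    """Iterate z <- T(z, x) until (approximate) equilibrium z* = T(z*, x)."""
    z = z0
    for _ in range(max_iter):
        z_next = T(z, x)
        if np.linalg.norm(z_next - z) < tol * (1.0 + np.linalg.norm(z)):
            return z_next
        z = z_next
    return z

# Example: a contractive affine map has a unique fixed point z* = (I - A)^{-1} x.
A = 0.5 * np.eye(3)
T = lambda z, x: A @ z + x
z_star = fixed_point_solve(T, x=np.ones(3), z0=np.zeros(3))  # -> approx. 2 * ones(3)
```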
- Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for identifying the drift and diffusion coefficients of non-linear stochastic differential equations.
The key idea essentially consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to such observations.
arXiv Detail & Related papers (2023-05-24T20:43:47Z)
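For context (a standard fact, not taken from the abstract): the Fokker-Planck equation associated with dX_t = b(X_t) dt + σ(X_t) dW_t governs the density p(t, x) of X_t,

```latex
\partial_t p(t, x) = -\nabla \cdot \big( b(x)\, p(t, x) \big)
  + \tfrac{1}{2} \sum_{i,j} \partial_{x_i} \partial_{x_j}
    \big[ \big(\sigma\sigma^\top\big)_{ij}(x)\, p(t, x) \big],
```

so fitting RKHS surrogates for the drift and diffusion amounts to matching this equation to the observed evolution of the data.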
- Robust SDE-Based Variational Formulations for Solving Linear PDEs via Deep Learning [6.1678491628787455]
The combination of Monte Carlo methods and deep learning has led to efficient algorithms for solving partial differential equations (PDEs) in high dimensions.
Related learning problems are often stated as variational formulations based on associated stochastic differential equations (SDEs).
It is therefore crucial to rely on adequate gradient estimators that exhibit low variance in order to reach convergence accurately and swiftly.
arXiv Detail & Related papers (2022-06-21T17:59:39Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
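A minimal message-passing step on a graph (a sketch with simple maps in place of the learned MLPs, not the paper's architecture):

```python
import numpy as np

def message_passing_step(h, edges, phi, psi):
    """h: (n, d) node states; edges: directed (receiver, sender) pairs.
    Each node aggregates messages from its neighbors, then updates."""
    agg = np.zeros_like(h)
    for i, j in edges:
        agg[i] += phi(h[i], h[j])   # message sent from node j to node i
    return psi(h, agg)              # per-node state update

# Toy instantiation on a 1D chain; with phi(hi, hj) = hj - hi, the aggregate
# is a discrete Laplacian, illustrating how such solvers can subsume
# finite-difference stencils.
n, d = 8, 4
edges = [(i, i + 1) for i in range(n - 1)] + [(i + 1, i) for i in range(n - 1)]
phi = lambda hi, hj: hj - hi
psi = lambda h, agg: h + 0.1 * agg
h_next = message_passing_step(np.random.randn(n, d), edges, phi, psi)
```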
- Solving high-dimensional parabolic PDEs using the tensor train format [1.1470070927586016]
We argue that tensor trains provide an appealing approximation framework for parabolic PDEs.
We develop novel iterative schemes, involving either explicit and fast or implicit and accurate updates.
arXiv Detail & Related papers (2021-02-23T18:04:00Z)
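A compact sketch of the tensor-train construction itself (the generic TT-SVD on a full array; the papers above build TT representations without ever forming the full tensor):

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """Decompose a d-way array into TT cores G_k of shape (r_{k-1}, n_k, r_k)
    via sequential truncated SVDs."""
    dims, d = A.shape, A.ndim
    cores, r = [], 1
    M = A.reshape(dims[0], -1)
    for k in range(d - 1):
        M = M.reshape(r * dims[k], -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))   # rank after truncation
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        M = s[:rk, None] * Vt[:rk]                 # carry the remainder forward
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

# A grid function V(x_1, ..., x_d) stored this way is evaluated as a product
# of small matrices G_1[:, x_1, :] @ ... @ G_d[:, x_d, :], which is what makes
# storage and regression tractable when latent ranks are low.
```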
- Efficient Learning of Generative Models via Finite-Difference Score Matching [111.55998083406134]
We present a generic strategy to efficiently approximate directional derivatives of any order with finite differences.
Our approximation only involves function evaluations, which can be executed in parallel, and no gradient computations.
arXiv Detail & Related papers (2020-07-07T10:05:01Z)
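The core trick is easy to state: a directional derivative can be approximated by a central difference using only function evaluations. A minimal sketch (first order shown; the paper generalizes to any order):

```python
import numpy as np

def directional_derivative(f, x, v, eps=1e-4):
    """Approximate v . grad f(x) with an O(eps^2) central difference;
    only two function evaluations, no gradient computations."""
    return (f(x + eps * v) - f(x - eps * v)) / (2.0 * eps)

# Example: f(x) = ||x||^2 has gradient 2x, so v . grad f(x) = 2 v . x.
f = lambda x: np.dot(x, x)
x, v = np.array([1.0, 2.0]), np.array([0.0, 1.0])
print(directional_derivative(f, x, v))  # approx. 4.0
```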
- Reintroducing Straight-Through Estimators as Principled Methods for Stochastic Binary Networks [85.94999581306827]
Training neural networks with binary weights and activations is a challenging problem due to the lack of gradients and difficulty of optimization over discrete weights.
Many successful experimental results have been achieved with empirical straight-through (ST) approaches.
At the same time, ST methods can be truly derived as estimators in the stochastic binary network (SBN) model with Bernoulli weights.
arXiv Detail & Related papers (2020-06-11T23:58:18Z)
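A bare-bones sketch of the straight-through idea, written without an autodiff framework for clarity (the clipped variant shown is one common choice, not necessarily the paper's derived estimator):

```python
import numpy as np

def st_forward(x):
    """Forward pass: hard binarization into {-1, +1}."""
    return np.where(x >= 0.0, 1.0, -1.0)

def st_backward(x, grad_out):
    """Backward pass: pretend the binarization was the (clipped) identity,
    passing gradients through where |x| <= 1 and zeroing them elsewhere."""
    return grad_out * (np.abs(x) <= 1.0)
```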