Robust SDE-Based Variational Formulations for Solving Linear PDEs via
Deep Learning
- URL: http://arxiv.org/abs/2206.10588v1
- Date: Tue, 21 Jun 2022 17:59:39 GMT
- Title: Robust SDE-Based Variational Formulations for Solving Linear PDEs via
Deep Learning
- Authors: Lorenz Richter, Julius Berner
- Abstract summary: The combination of Monte Carlo methods and deep learning has led to efficient algorithms for solving partial differential equations (PDEs) in high dimensions.
Related learning problems are often stated as variational formulations based on associated stochastic differential equations (SDEs).
It is therefore crucial to rely on adequate gradient estimators that exhibit low variance in order to reach convergence accurately and swiftly.
- Score: 6.1678491628787455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The combination of Monte Carlo methods and deep learning has recently led to
efficient algorithms for solving partial differential equations (PDEs) in high
dimensions. Related learning problems are often stated as variational
formulations based on associated stochastic differential equations (SDEs),
which allow the minimization of corresponding losses using gradient-based
optimization methods. In respective numerical implementations it is therefore
crucial to rely on adequate gradient estimators that exhibit low variance in
order to reach convergence accurately and swiftly. In this article, we
rigorously investigate corresponding numerical aspects that appear in the
context of linear Kolmogorov PDEs. In particular, we systematically compare
existing deep learning approaches and provide theoretical explanations for
their performances. Subsequently, we suggest novel methods that can be shown to
be more robust both theoretically and numerically, leading to substantial
performance improvements.
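To make the setting concrete, here is a minimal PyTorch sketch of such a variational formulation for the d-dimensional heat equation, where the Feynman-Kac formula turns the PDE solution into a conditional expectation over Brownian paths. The network, terminal condition, and hyperparameters are illustrative placeholders, and the plain squared loss is the baseline estimator from the literature the paper builds on, not the authors' more robust variants:

```python
import torch

# Heat equation u_t = 0.5 * Laplacian(u) with u(., 0) = phi. By Feynman-Kac,
# u(x, T) = E[phi(x + W_T)], so a network psi can be trained to regress
# phi(X_T) onto the starting point X_0 = x.
d, T, batch = 10, 1.0, 512
phi = lambda x: (x ** 2).sum(dim=1, keepdim=True)    # illustrative terminal data

psi = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(psi.parameters(), lr=1e-3)

for step in range(1000):
    x0 = torch.rand(batch, d)                        # starting points in the unit cube
    xT = x0 + T ** 0.5 * torch.randn(batch, d)       # exact Brownian increment over [0, T]
    loss = ((psi(x0) - phi(xT)) ** 2).mean()         # plain regression-type loss
    opt.zero_grad(); loss.backward(); opt.step()
```

The minimizer of this loss is the conditional expectation E[phi(X_T) | X_0], i.e. the PDE solution, and the variance of gradient estimators for exactly this kind of loss is what the paper analyzes and improves.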
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
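As a point of reference for the entry above, a generic deep-ensembling baseline can be sketched in a few lines; the architecture, data, and training loop here are placeholders rather than the paper's setup:

```python
import torch

# Generic deep ensembling: train several independently initialized surrogates
# and read off the ensemble mean and spread as a crude uncertainty estimate.
def make_net():
    return torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def train(net, x, y, steps=500):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        ((net(x) - y) ** 2).mean().backward()
        opt.step()
    return net

x = torch.rand(256, 2); y = torch.sin(x.sum(dim=1, keepdim=True))   # toy data
ensemble = [train(make_net(), x, y) for _ in range(5)]
with torch.no_grad():
    preds = torch.stack([net(x) for net in ensemble])               # (members, batch, 1)
    mean, spread = preds.mean(dim=0), preds.std(dim=0)              # prediction and uncertainty
```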
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
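For context on the entry above, the standard Gaussian-mechanism template for a private gradient looks as follows; the paper's algorithms exploiting per-example gradient sparsity are considerably more refined than this sketch:

```python
import torch

# Generic DP-SGD-style noisy gradient: clip each per-example gradient to an
# l2 bound C, average, and add Gaussian noise calibrated to C.
def private_gradient(per_example_grads, C=1.0, sigma=1.0):
    clipped = [g * min(1.0, C / (g.norm().item() + 1e-12)) for g in per_example_grads]
    avg = torch.stack(clipped).mean(dim=0)
    noise = (sigma * C / len(clipped)) * torch.randn_like(avg)
    return avg + noise                               # feed into any first-order update
```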
- Sequential-in-time training of nonlinear parametrizations for solving time-dependent partial differential equations [21.992668884092055]
This work shows that sequential-in-time training methods can be understood broadly as either optimize-then-discretize (OtD) or discretize-then-optimize (DtO) schemes.
arXiv Detail & Related papers (2024-04-01T14:45:16Z)
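To illustrate the OtD side of this dichotomy, here is a hedged sketch under strong assumptions: the parameter velocity solves a Dirac-Frenkel-style least-squares problem at collocation points, and the resulting parameter ODE is discretized with explicit Euler. The names u, rhs, and theta are placeholders, not the paper's API:

```python
import torch

# Optimize-then-discretize: theta_dot = argmin_v ||J v - F||^2 with
# J = d u_theta / d theta at collocation points x, then one Euler step.
def otd_euler_step(u, theta, x, rhs, dt=1e-3):
    J = torch.autograd.functional.jacobian(lambda t: u(t, x), theta)  # (n, p)
    F = rhs(u(theta, x), x)                                           # PDE right-hand side, (n,)
    v = torch.linalg.lstsq(J, F.unsqueeze(1)).solution.squeeze(1)     # parameter velocity
    return theta + dt * v                                             # explicit Euler in time
```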
- A Gaussian Process Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations [0.0]
We introduce kernel-weighted Corrective Residuals (CoRes) to integrate the strengths of kernel methods and deep NNs for solving nonlinear PDE systems.
CoRes consistently outperforms competing methods in solving a broad range of benchmark problems.
We believe our findings have the potential to spark a renewed interest in leveraging kernel methods for solving PDEs.
arXiv Detail & Related papers (2024-01-07T14:09:42Z)
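The following sketch shows one generic way a kernel method can correct a network's residuals, in the spirit of the entry above; the RBF kernel, the jitter term, and the interface are assumptions, not the paper's CoRes construction:

```python
import torch

# Fit a kernel interpolant to the network's residuals at sampled points so the
# combined model matches the data there exactly.
def rbf(a, b, ell=0.5):
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * ell ** 2))

def kernel_correction(net, x, y):
    r = (y - net(x)).detach()                        # residuals of the network fit
    K = rbf(x, x) + 1e-8 * torch.eye(len(x))         # kernel matrix with jitter
    alpha = torch.linalg.solve(K, r)                 # interpolation weights
    return lambda z: net(z) + rbf(z, x) @ alpha      # corrected predictor
```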
- Amortized Reparametrization: Efficient and Scalable Variational Inference for Latent SDEs [3.2634122554914002]
We consider the problem of inferring latent stochastic differential equations with a time and memory cost that scales independently of the amount of data, the total length of the time series, and the stiffness of the approximate differential equations.
This is in stark contrast to typical methods for inferring latent differential equations which, despite their constant memory cost, have a time complexity that is heavily dependent on the stiffness of the approximate differential equation.
arXiv Detail & Related papers (2023-12-16T22:27:36Z)
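A minimal sketch of the amortization idea behind this entry (generic reparameterized variational inference; the paper's latent-SDE machinery is substantially richer): an encoder maps each observed series to approximate-posterior parameters, so per-series inference requires no separate optimization. Series length 50 and latent dimension 4 are arbitrary placeholders:

```python
import torch

# Amortized inference: one shared encoder produces the mean and log-variance
# of the approximate posterior over the latent initial state of each series.
encoder = torch.nn.Sequential(
    torch.nn.Linear(50, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2 * 4)
)

def posterior_sample(series):                        # series: (batch, 50)
    mu, logvar = encoder(series).chunk(2, dim=1)     # latent dimension 4
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
```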
- From continuous-time formulations to discretization schemes: tensor trains and robust regression for BSDEs and parabolic PDEs [3.785123406103385]
We argue that tensor trains provide an appealing framework for parabolic PDEs.
We develop iterative schemes, which differ in terms of computational efficiency and robustness.
We demonstrate both theoretically and numerically that our methods can achieve a favorable trade-off between accuracy and computational efficiency.
arXiv Detail & Related papers (2023-07-28T11:44:06Z)
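For readers unfamiliar with the tensor-train format referenced above, the standard TT-SVD construction is sketched below (generic; the paper builds iterative solvers on top of such low-rank formats rather than decomposing full tensors):

```python
import torch

# TT-SVD: split a d-way tensor into a chain of 3-way cores by sequential
# truncated SVDs; max_rank bounds the TT ranks.
def tt_svd(tensor, max_rank=4):
    dims, cores, r = tensor.shape, [], 1
    t = tensor.reshape(1, -1)
    for n in dims[:-1]:
        t = t.reshape(r * n, -1)
        U, S, Vh = torch.linalg.svd(t, full_matrices=False)
        k = min(max_rank, S.numel())
        cores.append(U[:, :k].reshape(r, n, k))      # current TT core
        t = S[:k, None] * Vh[:k]                     # carry the remainder forward
        r = k
    cores.append(t.reshape(r, dims[-1], 1))          # last core closes the train
    return cores

cores = tt_svd(torch.randn(4, 4, 4, 4))              # usage on a toy 4-way tensor
```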
- Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear stochastic differential equations.
The key idea essentially consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to such observations.
arXiv Detail & Related papers (2023-05-24T20:43:47Z)
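As a simple contrast to the entry above, drift identification from a discretely observed path can be sketched with plain kernel smoothing; the paper's RKHS/Fokker-Planck approach is different and comes with non-asymptotic guarantees that this naive estimator lacks:

```python
import torch

# Nadaraya-Watson drift estimate: the drift at a query point is a kernel-weighted
# average of the observed increments, rescaled by the step size dt.
def drift_estimate(path, dt, query, ell=0.3):        # path: (N, d), query: (m, d)
    x, dx = path[:-1], (path[1:] - path[:-1]) / dt   # states and scaled increments
    w = torch.exp(-torch.cdist(query, x) ** 2 / (2 * ell ** 2))
    return (w @ dx) / w.sum(dim=1, keepdim=True)     # weighted average per query
```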
- Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
arXiv Detail & Related papers (2022-07-18T15:11:43Z)
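One classic way to hard-wire a constraint into a network, in the spirit of the entry above (though not necessarily the paper's differentiable layer), is to build the boundary data into the ansatz so that no penalty term is ever needed:

```python
import torch

# On [0, 1], force u(0) = a and u(1) = b by construction: the lift matches the
# boundary data and the bubble term x * (1 - x) vanishes at both endpoints.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def u(x, a=0.0, b=1.0):                              # x: (batch, 1)
    lift = a * (1 - x) + b * x
    return lift + x * (1 - x) * net(x)
```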
- Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution Differential Equations [83.3201889218775]
Several widely-used first-order saddle-point optimization methods yield an identical continuous-time ordinary differential equation (ODE) when derived naively.
However, the convergence properties of these methods are qualitatively different, even on simple bilinear games.
We adopt a framework studied in fluid dynamics to design differential equation models for several saddle-point optimization methods.
arXiv Detail & Related papers (2021-12-27T18:31:34Z)
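The qualitative phenomenon behind this entry is easy to reproduce: on the bilinear game f(x, y) = x * y, simultaneous gradient descent-ascent spirals outward even though its naive continuous-time limit is a stable rotation:

```python
# Simultaneous gradient descent-ascent on f(x, y) = x * y:
# x descends on grad_x f = y, y ascends on grad_y f = x.
x, y, lr = 1.0, 1.0, 0.1
for _ in range(100):
    x, y = x - lr * y, y + lr * x                    # simultaneous update
print(x * x + y * y)                                 # squared radius grows: divergence
```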
- Adversarial Multi-task Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations [9.823102211212582]
We introduce the novel approach of employing multi-task learning techniques, namely the uncertainty-weighting loss and gradient surgery, in the context of learning PDE solutions.
In the experiments, our proposed methods are found to be effective and to reduce the error on unseen data points compared to previous approaches.
arXiv Detail & Related papers (2021-04-29T13:17:46Z)
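The uncertainty-weighting loss mentioned above follows a standard multi-task template (in the style of Kendall et al.; the paper's exact variant may differ), where each task loss is scaled by a learned precision:

```python
import torch

# Each task loss L_i is weighted by exp(-s_i) with a learnable log-variance s_i;
# the additive s_i term keeps the weights from collapsing to zero.
s = torch.zeros(2, requires_grad=True)               # one log-variance per task

def weighted_loss(task_losses):
    return sum(torch.exp(-s[i]) * L + s[i] for i, L in enumerate(task_losses))

# usage: weighted_loss([pde_residual_loss, boundary_loss]).backward()
```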
- Efficient Learning of Generative Models via Finite-Difference Score Matching [111.55998083406134]
We present a generic strategy to efficiently approximate any-order directional derivative with finite difference.
Our approximation only involves function evaluations, which can be executed in parallel, and no gradient computations.
arXiv Detail & Related papers (2020-07-07T10:05:01Z)
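The basic primitive here is a finite-difference directional derivative, which needs only two function evaluations (parallelizable) and no backpropagation; a central-difference sketch:

```python
import torch

# Central-difference approximation of the derivative of f at x along direction v;
# accuracy is O(eps^2) and only forward evaluations of f are required.
def directional_derivative(f, x, v, eps=1e-3):
    v = v / v.norm()                                 # normalize the direction
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)
```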