Convergence analysis of a quasi-Monte Carlo-based deep learning
algorithm for solving partial differential equations
- URL: http://arxiv.org/abs/2210.16196v1
- Date: Fri, 28 Oct 2022 15:06:57 GMT
- Title: Convergence analysis of a quasi-Monte Carlo-based deep learning
algorithm for solving partial differential equations
- Authors: Fengjiang Fu and Xiaoqun Wang
- Abstract summary: We propose to apply quasi-Monte Carlo (QMC) methods to the Deep Ritz Method (DRM) for solving the Neumann problems for the Poisson equation and the static Schrödinger equation.
For error estimation, we decompose the error of using the deep learning algorithm to solve PDEs into the generalization error, the approximation error and the training error.
Numerical experiments show that the proposed method converges faster in all cases and the variances of the gradient estimators of randomized QMC-based DRM are much smaller than those of DRM.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning methods have achieved great success in solving partial
differential equations (PDEs), where the loss is often defined as an integral.
The accuracy and efficiency of these algorithms depend greatly on the
quadrature method. We propose to apply quasi-Monte Carlo (QMC) methods to the
Deep Ritz Method (DRM) for solving the Neumann problems for the Poisson
equation and the static Schrödinger equation. For error estimation, we
decompose the error of using the deep learning algorithm to solve PDEs into the
generalization error, the approximation error and the training error. We
establish the upper bounds and prove that QMC-based DRM achieves an
asymptotically smaller error bound than DRM. Numerical experiments show that
the proposed method converges faster in all cases and the variances of the
gradient estimators of randomized QMC-based DRM are much smaller than those of
DRM, which illustrates the superiority of QMC in deep learning over MC.
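Illustrative note (not from the paper): the core mechanism is that replacing i.i.d. Monte Carlo quadrature nodes with a low-discrepancy point set shrinks the quadrature error of the integral-type loss. A minimal Python sketch with a stand-in integrand, not the actual DRM loss:

```python
# A minimal sketch (not the paper's DRM loss): estimate a d=2 integral,
# the kind of quantity a Ritz-type loss integrates, with plain MC versus
# randomized QMC (scrambled Sobol'), and compare errors.
import numpy as np
from scipy.stats import qmc

def integrand(x):
    # stand-in for a loss density on [0,1]^2; exact integral is (2/pi)^2
    return np.sin(np.pi * x[:, 0]) * np.sin(np.pi * x[:, 1])

exact = (2.0 / np.pi) ** 2
n_log2 = 10                                    # 2**10 = 1024 quadrature nodes

rng = np.random.default_rng(0)
mc_est = integrand(rng.random((2 ** n_log2, 2))).mean()

sobol = qmc.Sobol(d=2, scramble=True, seed=0)  # randomized QMC point set
qmc_est = integrand(sobol.random_base2(m=n_log2)).mean()

print(f"MC  abs. error: {abs(mc_est - exact):.2e}")
print(f"QMC abs. error: {abs(qmc_est - exact):.2e}")
```

For smooth integrands, randomized QMC error typically decays at close to O(n^-1) versus O(n^-1/2) for plain MC; this gap is what drives the asymptotically smaller error bound claimed above.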
Related papers
- Randomized Physics-Informed Machine Learning for Uncertainty Quantification in High-Dimensional Inverse Problems [49.1574468325115]
We propose a physics-informed machine learning method for uncertainty quantification in high-dimensional inverse problems.
We show analytically and through comparison with Hamiltonian Monte Carlo that the rPICKLE posterior converges to the true posterior given by the Bayes rule.
arXiv Detail & Related papers (2023-12-11T07:33:16Z)
- Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
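Illustrative note: spectral normalization generically caps a layer's Lipschitz constant by rescaling its weight matrix by the top singular value, which is one way variance blow-up over long unrolls can be controlled. A minimal sketch, not necessarily the authors' exact formulation:

```python
# Hedged sketch of generic spectral normalization via power iteration;
# where this sits inside the model unroll is paper-specific.
import numpy as np

def spectral_normalize(W, n_iter=30, eps=1e-12):
    """Return W scaled so its largest singular value is ~1."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    sigma = u @ (W @ v)          # power-iteration estimate of sigma_max(W)
    return W / max(sigma, eps)
```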
arXiv Detail & Related papers (2023-10-30T18:43:21Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
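Illustrative note: the simplest instance of such a probabilistic representation is the Feynman-Kac formula. A minimal sketch for the 1-D heat equation, a stand-in for the paper's general setting:

```python
# Minimal Feynman-Kac sketch: u(t, x) = E[g(x + W_t)] solves
# u_t = 0.5 * u_xx with u(0, .) = g.  A toy version of the
# particle-based representation used to train neural solvers.
import numpy as np

def heat_mc(x, t, g, n_paths=200_000, seed=0):
    rng = np.random.default_rng(seed)
    endpoints = x + rng.normal(0.0, np.sqrt(t), size=n_paths)
    return g(endpoints).mean()

# Gaussian initial datum admits a closed form for checking:
# u(t, x) = exp(-x^2 / (1 + 2t)) / sqrt(1 + 2t)
x, t = 0.3, 0.5
est = heat_mc(x, t, lambda y: np.exp(-y ** 2))
ref = np.exp(-x ** 2 / (1 + 2 * t)) / np.sqrt(1 + 2 * t)
print(f"MC estimate {est:.4f} vs exact {ref:.4f}")
```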
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Deep learning numerical methods for high-dimensional fully nonlinear PIDEs and coupled FBSDEs with jumps [26.28912742740653]
We propose a deep learning algorithm for solving high-dimensional parabolic integro-differential equations (PIDEs).
The jump-diffusion process is driven by a Brownian motion and an independent compensated Poisson random measure.
To derive the error estimates for this deep learning algorithm, the convergence of the Markovian iteration, the error bound of the Euler time discretization, and the simulation error of the deep learning algorithm are investigated.
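Illustrative note: a generic Euler step for such a jump-diffusion adds a Brownian increment and compound-Poisson jumps, with the drift compensated by the expected jump contribution. A minimal sketch with placeholder coefficients, not the paper's scheme:

```python
# Euler discretization of dX = b(X) dt + s(X) dW + dJ, where J is a
# compound Poisson process compensated by its mean (toy coefficients).
import numpy as np

def jump_diffusion_path(x0, T, n_steps, b, s, lam, jump_mean, sample_jump, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(lam * dt)
        dj = sum(sample_jump(rng) for _ in range(n_jumps))
        # subtract the compensator lam * E[jump] * dt
        x[k + 1] = x[k] + (b(x[k]) - lam * jump_mean) * dt + s(x[k]) * dw + dj
    return x

# example: mean-reverting drift with N(0, 0.1) jumps arriving at rate 2
path = jump_diffusion_path(1.0, 1.0, 500, b=lambda y: -y, s=lambda y: 0.3,
                           lam=2.0, jump_mean=0.0,
                           sample_jump=lambda r: r.normal(0.0, 0.1))
```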
arXiv Detail & Related papers (2023-01-30T13:55:42Z)
- Multi-fidelity Monte Carlo: a pseudo-marginal approach [21.05263506153674]
A key challenge in applying Monte Carlo methods to scientific domains is their computational cost.
Multi-fidelity MCMC algorithms combine models of varying fidelities in order to obtain an approximate target density.
We take a pseudo-marginal MCMC approach for multi-fidelity inference that utilizes a cheaper, randomized-fidelity unbiased estimator.
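Illustrative note: the pseudo-marginal trick substitutes an unbiased likelihood estimate into the Metropolis-Hastings ratio while recycling the estimate attached to the current state, so the chain still targets the exact posterior. A generic minimal sketch, not the multi-fidelity estimator itself:

```python
# Hedged sketch of pseudo-marginal Metropolis-Hastings: `lik_hat` is any
# positive unbiased estimator of the likelihood (toy placeholder below).
import numpy as np

def pseudo_marginal_mh(lik_hat, log_prior, x0, n_iter=5000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, L = x0, lik_hat(x0, rng)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        xp = x + step * rng.standard_normal()
        Lp = lik_hat(xp, rng)                      # fresh estimate at proposal
        log_a = np.log(Lp) + log_prior(xp) - np.log(L) - log_prior(x)
        if np.log(rng.random()) < log_a:
            x, L = xp, Lp                          # recycle current estimate
        chain[i] = x
    return chain

# toy unbiased estimator: exact Gaussian likelihood times mean-one noise
toy = lambda x, rng: np.exp(-0.5 * x ** 2) * rng.lognormal(-0.125, 0.5)
chain = pseudo_marginal_mh(toy, lambda x: 0.0, x0=0.0, n_iter=2000)
print(chain.mean(), chain.std())
```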
arXiv Detail & Related papers (2022-10-04T11:27:40Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
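Illustrative note: a SAG-style estimator keeps a table with the most recent gradient of each component and refreshes exactly one entry per iteration. A minimal Frank-Wolfe-flavored sketch over the simplex; the paper's composite setting and guarantees are more general:

```python
# One fresh component gradient per iteration, SAG-style average, then a
# conditional-gradient (Frank-Wolfe) step; `lmo` is the linear
# minimization oracle over the feasible set (here the simplex).
import numpy as np

def one_sample_cgm(grad_i, n_funcs, lmo, x0, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    table = np.stack([grad_i(i, x) for i in range(n_funcs)])  # init once
    for t in range(1, n_iter + 1):
        i = rng.integers(n_funcs)
        table[i] = grad_i(i, x)               # the single sample this round
        g = table.mean(axis=0)                # SAG-style averaged gradient
        s = lmo(g)
        x += (2.0 / (t + 2.0)) * (s - x)      # classical FW step size
    return x

def simplex_lmo(g):
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0                     # vertex minimizing <g, s>
    return s

# toy problem: f_i(x) = 0.5 * (x_i - t_i)^2 with minimizer t on the simplex
t = np.array([0.7, 0.2, 0.1])
gi = lambda i, x: (x - t) * (np.arange(3) == i)
print(one_sample_cgm(gi, 3, simplex_lmo, np.full(3, 1 / 3)))
```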
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Fast Doubly-Adaptive MCMC to Estimate the Gibbs Partition Function with Weak Mixing Time Bounds [7.428782604099876]
A major obstacle to practical applications of Gibbs distributions is the need to estimate their partition functions.
We present a novel method for reducing the computational complexity of rigorously estimating the partition functions.
arXiv Detail & Related papers (2021-11-14T15:42:02Z)
- Machine Learning For Elliptic PDEs: Fast Rate Generalization Bound, Neural Scaling Law and Minimax Optimality [11.508011337440646]
We study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples.
To simplify the problem, we focus on a prototype elliptic PDE: the Schrödinger equation on a hypercube with zero Dirichlet boundary condition.
We establish upper and lower bounds for both methods, which improve upon concurrently developed upper bounds for this problem.
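Illustrative note: schematically, the prototype problem reads (notation assumed; the precise assumptions on the potential and the right-hand side are in the paper)

$$ -\Delta u(x) + V(x)\, u(x) = f(x) \quad \text{for } x \in (0,1)^d, \qquad u|_{\partial (0,1)^d} = 0. $$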
arXiv Detail & Related papers (2021-10-13T17:26:31Z)
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
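Illustrative note: with the entropy mirror map on the probability simplex, a mirror-descent step reduces to multiplicative weights. A minimal single-step sketch; the paper's setting and parallelization are more general:

```python
# One mirror-descent step with the entropic mirror map on the simplex:
# p_{t+1} proportional to p_t * exp(-eta * g_t)  (multiplicative weights).
import numpy as np

def md_simplex_step(p, g, eta):
    q = p * np.exp(-eta * (g - g.min()))   # shift exponents for stability
    return q / q.sum()

# example: mass moves away from coordinates with large gradient
p = np.full(4, 0.25)
print(md_simplex_step(p, np.array([1.0, 0.0, 0.0, 2.0]), eta=0.7))
```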
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
- The Seven-League Scheme: Deep learning for large time step Monte Carlo simulations of stochastic differential equations [0.0]
We propose an accurate data-driven numerical scheme to solve stochastic differential equations (SDEs).
The SDE discretization is built up by means of a chaos expansion method on the basis of accurately determined stochastic collocation (SC) points.
Using a compression-decompression collocation technique, we can drastically reduce the number of neural network functions that must be learned.
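Illustrative note: one way to read the collocation idea is that a few accurately computed inverse-CDF values (SC points) suffice to draw large-time-step samples by interpolation, instead of many small Euler steps. A toy 1-D sketch with made-up SC points:

```python
# Interpolate the inverse CDF through a few stochastic collocation (SC)
# points to sample a distribution in one shot (toy 1-D illustration;
# np.interp clamps the extreme tails beyond the outermost points).
import numpy as np

def sc_sample(sc_values, sc_probs, n, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    return np.interp(u, sc_probs, sc_values)   # piecewise-linear inverse CDF

# made-up SC points roughly matching a standard normal's inverse CDF
probs = np.array([0.02, 0.16, 0.50, 0.84, 0.98])
values = np.array([-2.05, -1.0, 0.0, 1.0, 2.05])
samples = sc_sample(values, probs, n=10_000)
print(samples.mean(), samples.std())
```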
arXiv Detail & Related papers (2020-09-07T16:06:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.