Deep learning numerical methods for high-dimensional fully nonlinear
PIDEs and coupled FBSDEs with jumps
- URL: http://arxiv.org/abs/2301.12895v1
- Date: Mon, 30 Jan 2023 13:55:42 GMT
- Title: Deep learning numerical methods for high-dimensional fully nonlinear
PIDEs and coupled FBSDEs with jumps
- Authors: Wansheng Wang, Jie Wang, Jinping Li, Feifei Gao, Yi Fu
- Abstract summary: We propose a deep learning algorithm for solving high-dimensional parabolic integro-differential equations (PIDEs)
The jump-diffusion process is driven by a Brownian motion and an independent compensated Poisson random measure.
To derive the error estimates for this deep learning algorithm, the convergence of the Markovian iteration, the error bound of the Euler time discretization, and the simulation error of the deep learning algorithm are investigated.
- Score: 26.28912742740653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a deep learning algorithm for solving high-dimensional parabolic
integro-differential equations (PIDEs) and high-dimensional forward-backward
stochastic differential equations with jumps (FBSDEJs), where the
jump-diffusion process is driven by a Brownian motion and an independent
compensated Poisson random measure. In this novel algorithm, a pair of deep
neural networks for the approximations of the gradient and the integral kernel
is introduced in a crucial way, based on the deep FBSDE method. To derive the error
estimates for this deep learning algorithm, the convergence of Markovian
iteration, the error bound of Euler time discretization, and the simulation
error of the deep learning algorithm are investigated. Two numerical examples are
provided to show the efficiency of the proposed algorithm.
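The forward jump-diffusion described in the abstract, driven by a Brownian motion plus an independent compensated Poisson random measure, can be simulated with a plain Euler scheme. The sketch below is illustrative only: the coefficients `mu`, `sigma`, `jump_size`, and the intensity `lam` are hypothetical placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_jump_step(x, dt, mu, sigma, jump_size, lam, rng):
    """One Euler step of a jump-diffusion driven by a Brownian motion and
    an independent compensated Poisson random measure."""
    dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)   # Brownian increment
    dN = rng.poisson(lam * dt, size=x.shape)          # Poisson jump counts
    comp_jump = jump_size * (dN - lam * dt)           # compensated jump part
    return x + mu(x) * dt + sigma(x) * dW + comp_jump

# toy 100-dimensional path with placeholder coefficients
x = np.zeros(100)
for _ in range(50):
    x = euler_jump_step(x, dt=0.02, mu=lambda y: -0.5 * y,
                        sigma=lambda y: 1.0, jump_size=0.1, lam=0.3, rng=rng)
```

In the paper's setting, a pair of deep networks would additionally be trained along such simulated paths to approximate the gradient and the integral kernel; the simulation above covers only the forward process.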
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- A forward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations [0.6040014326756179]
We present a novel forward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations (BSDEs).
Motivated by the fact that differential deep learning can efficiently approximate the labels and their derivatives with respect to inputs, we transform the BSDE problem into a differential deep learning problem.
The main idea of our algorithm is to discretize the integrals using the Euler-Maruyama method and approximate the unknown discrete solution triple using three deep neural networks.
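As a rough illustration of the Euler-Maruyama discretization described above, the sketch below rolls a time-discretized BSDE forward with a placeholder callable standing in for one of the trained networks; `z_net` and the driver `f` are hypothetical toy choices, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def bsde_euler_rollout(y0, z_net, n_steps, dt, f, rng, dim=10):
    """Forward rollout of a time-discretized BSDE:
        Y_{n+1} = Y_n - f(t_n, X_n, Y_n, Z_n) * dt + Z_n . dW_n
    z_net stands in for a trained network approximating Z."""
    y, x = y0, np.zeros(dim)
    for n in range(n_steps):
        z = z_net(x)                                 # Z_n, gradient-like term
        dw = rng.normal(0.0, np.sqrt(dt), size=dim)  # Brownian increment
        y = y - f(n * dt, x, y, z) * dt + z @ dw
        x = x + dw                                   # driftless toy forward process
    return y

y_T = bsde_euler_rollout(
    y0=1.0,
    z_net=lambda x: 0.1 * np.ones_like(x),   # placeholder "network"
    n_steps=20, dt=0.05,
    f=lambda t, x, y, z: -0.5 * y,           # toy driver
    rng=rng)
```

In the cited paper three networks are trained jointly so that the rollout matches the terminal condition and its derivatives; here the rollout alone is shown.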
arXiv Detail & Related papers (2024-08-10T19:34:03Z)
- A Deep-Genetic Algorithm (Deep-GA) Approach for High-Dimensional Nonlinear Parabolic Partial Differential Equations [0.0]
We propose a new method, called a deep-genetic algorithm (deep-GA) to accelerate the performance of the so-called deep-BSDE method.
Recognizing the sensitivity of the solver to the initial guess selection, we embed a genetic algorithm (GA) into the solver to optimize the selection.
We show that our method provides comparable accuracy with significantly improved computational efficiency.
arXiv Detail & Related papers (2023-11-20T06:35:23Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
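The POD-plus-branch-network construction above can be illustrated with plain linear algebra: the POD basis consists of the leading left singular vectors of a snapshot matrix, and the branch network would map PDE parameters to coefficients in that basis. In this sketch the snapshot data is synthetic and a direct projection stands in for the learned branch network.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic snapshot matrix: columns are toy "PDE solution" snapshots
# built from 2 latent modes, so its column space is exactly rank 2
snapshots = rng.normal(size=(64, 2)) @ rng.normal(size=(2, 30))

# POD basis = leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]                         # reduced-order basis

# a branch network would map PDE parameters to basis coefficients;
# here a direct projection stands in for that learned map
coeffs = basis.T @ snapshots[:, 0]
reconstruction = basis @ coeffs          # reduced-order approximation
```

Because the toy snapshots have rank 2, the rank-2 POD basis reconstructs them exactly; for real PDE data the truncation rank trades accuracy against speed.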
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- A deep branching solver for fully nonlinear partial differential equations [0.1474723404975345]
We present a multidimensional deep learning implementation of a branching algorithm for the numerical solution of fully nonlinear PDEs.
This approach is designed to tackle functional nonlinearities involving gradient terms of any order.
arXiv Detail & Related papers (2022-03-07T09:46:46Z)
- Fast Projected Newton-like Method for Precision Matrix Estimation under Total Positivity [15.023842222803058]
Current algorithms are designed using the block coordinate descent method or the proximal point algorithm.
We propose a novel algorithm based on the two-metric projection method, incorporating a carefully designed search direction and variable partitioning scheme.
Experimental results on synthetic and real-world datasets demonstrate that our proposed algorithm provides a significant improvement in computational efficiency compared to the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-03T14:39:10Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Random-reshuffled SARAH does not need full gradient computations [61.85897464405715]
The StochAstic Recursive grAdient algoRitHm (SARAH) is a variance-reduced variant of the Stochastic Gradient Descent (SGD) algorithm.
In this paper, we remove the necessity of a full gradient.
The aggregated gradients serve as an estimate of a full gradient in the SARAH algorithm.
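The recursive SARAH estimator can be sketched on a toy least-squares problem. Note that this sketch uses the classic full-gradient anchor at the start of each epoch, whereas the cited paper replaces that anchor with aggregated stochastic gradients; all problem data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy least-squares problem: f_i(w) = 0.5 * (a_i . w - b_i)^2
A = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
b = A @ w_true

def grad_i(w, i):
    return A[i] * (A[i] @ w - b[i])      # stochastic gradient of f_i

def sarah(w, eta=0.01, epochs=5, inner=200):
    """SARAH-style recursive gradient updates (illustrative sketch)."""
    for _ in range(epochs):
        v = A.T @ (A @ w - b) / len(b)   # epoch anchor (full gradient here)
        w_prev, w = w, w - eta * v
        for _ in range(inner):
            i = rng.integers(len(b))
            # recursive estimator: v_t = g_i(w_t) - g_i(w_{t-1}) + v_{t-1}
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev, w = w, w - eta * v
    return w

w_hat = sarah(np.zeros(5))
```

The recursive update reuses the previous estimate `v` instead of recomputing a fresh gradient, which is what gives SARAH its variance reduction; the paper's contribution is removing the full-gradient anchor shown in the outer loop.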
arXiv Detail & Related papers (2021-11-26T06:00:44Z)
- Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent developments in deep learning-based approaches have achieved sub-second runtimes for diffeomorphic image registration (DiffIR).
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-09-26T19:56:45Z)
- Deep learning based numerical approximation algorithms for stochastic partial differential equations and high-dimensional nonlinear filtering problems [4.164845768197489]
In this article we introduce and study a deep learning based approximation algorithm for solutions of stochastic partial differential equations (SPDEs).
We employ a deep neural network for every realization of the driving noise process of the SPDE to approximate the solution process of the SPDE under consideration.
In each of these SPDEs the proposed approximation algorithm produces accurate results with short run times in up to 50 space dimensions.
arXiv Detail & Related papers (2020-12-02T13:25:35Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.