A Deep-Genetic Algorithm (Deep-GA) Approach for High-Dimensional
Nonlinear Parabolic Partial Differential Equations
- URL: http://arxiv.org/abs/2311.11558v1
- Date: Mon, 20 Nov 2023 06:35:23 GMT
- Title: A Deep-Genetic Algorithm (Deep-GA) Approach for High-Dimensional
Nonlinear Parabolic Partial Differential Equations
- Authors: Endah Rokhmati Merdika Putri, Muhammad Luthfi Shahab, Mohammad Iqbal,
Imam Mukhlash, Amirul Hakam, Lutfi Mardianto, Hadi Susanto
- Abstract summary: We propose a new method, called a deep-genetic algorithm (deep-GA) to accelerate the performance of the so-called deep-BSDE method.
Recognizing the sensitivity of the solver to the initial guess selection, we embed a genetic algorithm (GA) into the solver to optimize the selection.
We show that our method provides comparable accuracy with significantly improved computational efficiency.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new method, called a deep-genetic algorithm (deep-GA), to
accelerate the performance of the so-called deep-BSDE method, which is a deep
learning algorithm for solving high-dimensional partial differential equations
through their corresponding backward stochastic differential equations (BSDEs).
Recognizing the sensitivity of the solver to the initial guess selection, we
embed a genetic algorithm (GA) into the solver to optimize the selection. We
aim to achieve faster convergence for nonlinear PDEs on a broader interval
than the deep-BSDE method. Our proposed method is applied to two nonlinear
parabolic PDEs,
i.e., the Black-Scholes (BS) equation with default risk and the
Hamilton-Jacobi-Bellman (HJB) equation. We compare the results of our method
with those of the deep-BSDE and show that our method provides comparable
accuracy with significantly improved computational efficiency.
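The abstract describes the key mechanism: a genetic algorithm searches over initial guesses before the deep-BSDE solver takes over. Below is a minimal, hypothetical sketch of that idea in Python. `bsde_loss_after_k_steps` is a stand-in (here a noisy toy quadratic) for briefly training the deep-BSDE networks from a candidate initial value Y0 and reporting the terminal-condition loss; all GA hyperparameters are illustrative, not the authors' settings.

```python
# Minimal sketch of the deep-GA idea: a genetic algorithm searches over
# initial guesses Y0 for the deep-BSDE solver, which is sensitive to that
# choice. `bsde_loss_after_k_steps` is a HYPOTHETICAL stand-in: in the real
# method it would train the deep-BSDE networks briefly from y0 and return
# the terminal-condition loss E|g(X_T) - Y_T|^2; here a noisy quadratic
# keeps the script self-contained.
import numpy as np

rng = np.random.default_rng(0)

def bsde_loss_after_k_steps(y0: float) -> float:
    # Toy proxy: pretend the "true" solution value is 4.59.
    return (y0 - 4.59) ** 2 + 0.01 * rng.standard_normal() ** 2

def deep_ga(pop_size=20, n_gens=30, lo=0.0, hi=10.0,
            elite_frac=0.2, mut_scale=0.5):
    # Candidate initial values Y0 sampled on a broad interval [lo, hi].
    pop = rng.uniform(lo, hi, size=pop_size)
    for _ in range(n_gens):
        fitness = np.array([bsde_loss_after_k_steps(y) for y in pop])
        order = np.argsort(fitness)                      # lower loss = fitter
        elites = pop[order[: max(1, int(elite_frac * pop_size))]]
        # Crossover: average two random elite parents, then mutate.
        parents = rng.choice(elites, size=(pop_size, 2))
        children = parents.mean(axis=1) + mut_scale * rng.standard_normal(pop_size)
        children = np.clip(children, lo, hi)
        children[0] = elites[0]                          # elitism: keep the best
        pop = children
    return pop[np.argmin([bsde_loss_after_k_steps(y) for y in pop])]

print("GA-selected initial guess Y0 ~", deep_ga())
```

In the paper's setting, the GA-selected Y0 would then seed a full deep-BSDE training run; the reported efficiency gain comes from avoiding long training from poor initial guesses.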
Related papers
- A forward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations [0.6040014326756179]
We present a novel forward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations (BSDEs).
Motivated by the fact that differential deep learning can efficiently approximate the labels and their derivatives with respect to inputs, we transform the BSDE problem into a differential deep learning problem.
The main idea of our algorithm is to discretize the integrals using the Euler-Maruyama method and approximate the unknown discrete solution triple using three deep neural networks.
arXiv Detail & Related papers (2024-08-10T19:34:03Z)
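The entry above discretizes the BSDE integrals with the Euler-Maruyama method. As a point of reference, here is a minimal Euler-Maruyama sketch for a forward SDE; the drift and diffusion coefficients are illustrative choices, not taken from the paper.

```python
# Euler-Maruyama discretization of a forward SDE
#   dX_t = mu(X_t) dt + sigma(X_t) dW_t,
# the time-stepping scheme the entry refers to. Coefficients are illustrative.
import numpy as np

def euler_maruyama(x0, mu, sigma, T=1.0, n_steps=50, n_paths=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)  # Brownian increments
        x = x + mu(x) * dt + sigma(x) * dw
    return x

# Example: geometric Brownian motion; E[X_T] = x0 * exp(0.05 * T) ~ 1.051.
xT = euler_maruyama(1.0, mu=lambda x: 0.05 * x, sigma=lambda x: 0.2 * x)
print(xT.mean())
```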
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
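As a rough illustration of what stochastic dual descent looks like for kernel/GP regression, the sketch below runs stochastic gradient steps on the dual objective 0.5*a^T(K + lam*I)a - a^T y over random coordinate blocks. The data, step size, and batch size are illustrative, and the paper's actual design choices (e.g., momentum and iterate averaging) are omitted.

```python
# Sketch of stochastic dual descent for kernel ridge / GP regression:
# stochastic gradient steps on the dual objective
#   0.5 * a^T (K + lam*I) a - a^T y
# over random coordinate blocks. Data and hyperparameters are illustrative;
# the paper's momentum and iterate averaging are omitted.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
K = np.exp(-0.5 * (X - X.T) ** 2)          # RBF kernel matrix
lam, n = 1e-2, len(y)

a = np.zeros(n)                            # dual coefficients
step, batch = 0.02, 32
for _ in range(2000):
    idx = rng.choice(n, size=batch, replace=False)    # random block
    grad = K[idx] @ a + lam * a[idx] - y[idx]         # block of (K+lam*I)a - y
    a[idx] -= step * grad
print("train MSE:", np.mean((K @ a - y) ** 2))
```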
- Sparse Cholesky Factorization for Solving Nonlinear PDEs via Gaussian Processes [3.750429354590631]
We present a sparse Cholesky factorization algorithm for dense kernel matrices.
We numerically illustrate our algorithm's near-linear space/time complexity for a broad class of nonlinear PDEs.
arXiv Detail & Related papers (2023-04-03T18:35:28Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions is an algorithmic primitive that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
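The paper's method is an extragradient scheme; as a familiar baseline for the same entropy-regularized objective, here is the standard Sinkhorn iteration (NOT the paper's algorithm). The regularization strength and cost matrix are illustrative.

```python
# Standard Sinkhorn iteration for entropy-regularized optimal transport
# (a baseline for the same objective; NOT the paper's extragradient method).
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iters=500):
    K = np.exp(-C / eps)                   # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)                 # alternating matrix scaling
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]        # transport plan
    return (P * C).sum()                   # transport cost under P

n = 50
x = np.linspace(0, 1, n)
C = (x[:, None] - x[None, :]) ** 2         # squared-distance cost
mu = np.ones(n) / n
nu = np.exp(-(x - 0.7) ** 2 / 0.01); nu /= nu.sum()
print(sinkhorn(mu, nu, C))
```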
- Deep learning numerical methods for high-dimensional fully nonlinear PIDEs and coupled FBSDEs with jumps [26.28912742740653]
We propose a deep learning algorithm for solving high-dimensional parabolic integro-differential equations (PIDEs).
The jump-diffusion process is driven by a Brownian motion and an independent compensated Poisson random measure.
To derive the error estimates for this deep learning algorithm, the convergence of the Markovian iteration, the error bound of the Euler time discretization, and the simulation error of the deep learning algorithm are investigated.
arXiv Detail & Related papers (2023-01-30T13:55:42Z)
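A minimal sketch of simulating the forward jump-diffusion the entry describes, with Brownian increments plus a compensated Poisson jump part. The intensity, jump-size law, and coefficients are illustrative assumptions, and for small time steps the Poisson counter is approximated by a Bernoulli indicator.

```python
# Sketch of simulating a forward jump-diffusion: Brownian increments plus
# a compensated Poisson jump part. For small dt the Poisson counter is
# approximated by a Bernoulli indicator (at most one jump per step). All
# coefficients and the jump-size law are illustrative assumptions.
import numpy as np

def jump_diffusion(x0=1.0, mu=0.05, sigma=0.2, lam=0.3,
                   jump_mean=-0.1, jump_std=0.15,
                   T=1.0, n_steps=200, n_paths=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        jump = (rng.random(n_paths) < lam * dt)            # did a jump occur?
        dj = jump * rng.normal(jump_mean, jump_std, n_paths)
        # Compensation -lam*E[jump size]*dt makes the jump part a martingale.
        x = x * (1 + mu * dt + sigma * dw + dj - lam * jump_mean * dt)
    return x

print(jump_diffusion().mean())   # ~ x0 * exp(mu * T) up to Monte Carlo error
```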
- A Forward Propagation Algorithm for Online Optimization of Nonlinear Stochastic Differential Equations [1.116812194101501]
We study the convergence of the forward propagation algorithm for nonlinear dissipative SDEs.
We prove bounds on the solution of a partial differential equation (PDE) for the expected time integral of the algorithm's fluctuations around the direction of steepest descent.
Our main result is a convergence theorem for the forward propagation algorithm for nonlinear dissipative SDEs.
arXiv Detail & Related papers (2022-07-10T16:06:42Z)
- Actor-Critic Algorithm for High-dimensional Partial Differential Equations [1.5644600570264835]
We develop a deep learning model to solve high-dimensional nonlinear parabolic partial differential equations.
The Markovian property of the BSDE is utilized in designing our neural network architecture.
We demonstrate the resulting improvements by solving several well-known classes of PDEs.
arXiv Detail & Related papers (2020-10-07T20:53:24Z)
- IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
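A centralized, single-machine sketch of the inexact augmented-Lagrangian idea: each outer iteration only approximately minimizes the augmented Lagrangian (a few gradient steps instead of an exact solve) before the multiplier update. The problem data and iteration counts are illustrative; the paper's decentralization and acceleration are omitted.

```python
# Centralized sketch of an inexact augmented-Lagrangian loop for
#   min 0.5*||x - c||^2  subject to  A x = b:
# each outer iteration runs a few gradient steps on the augmented Lagrangian
# (an approximate subproblem solve), then updates the multipliers. Problem
# data and iteration counts are illustrative; the paper's decentralization
# and acceleration are omitted.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5
c = rng.standard_normal(n)
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

rho = 1.0
L = 1 + rho * np.linalg.norm(A, 2) ** 2        # gradient Lipschitz constant
x, lam_ = np.zeros(n), np.zeros(m)
for _ in range(300):                           # outer AL iterations
    for _ in range(10):                        # inexact inner solve
        grad = (x - c) + A.T @ (lam_ + rho * (A @ x - b))
        x -= grad / L
    lam_ += rho * (A @ x - b)                  # multiplier (dual) update
print("constraint violation:", np.linalg.norm(A @ x - b))
```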
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods that are among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
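To illustrate the randomized-embedding ingredient mentioned above, here is a minimal sketch-and-precondition least-squares solve with a Gaussian projection; the subsampled Hadamard variant and the paper's optimal iteration are omitted, and sizes and conditioning are illustrative.

```python
# Sketch-and-precondition least squares with a Gaussian projection.
# The sketch S A is QR-factorized and R is used as a right preconditioner,
# making A R^{-1} nearly orthonormal so iterative solvers converge fast.
# Sizes are illustrative; the subsampled Hadamard variant is omitted.
import numpy as np

rng = np.random.default_rng(0)
n, d = 4000, 50
A = rng.standard_normal((n, d)) * np.logspace(0, 3, d)   # badly scaled columns
b = rng.standard_normal(n)

s = 4 * d                                        # sketch size O(d)
S = rng.standard_normal((s, n)) / np.sqrt(s)     # Gaussian projection
_, R = np.linalg.qr(S @ A)                       # QR of the small sketch
AR = A @ np.linalg.inv(R)                        # right-preconditioned matrix
print("cond(A)      =", np.linalg.cond(A))
print("cond(A R^-1) =", np.linalg.cond(AR))      # near 1 after preconditioning
y, *_ = np.linalg.lstsq(AR, b, rcond=None)       # stand-in for a few LSQR steps
x = np.linalg.solve(R, y)                        # map back to original variables
print("normal-eq residual:", np.linalg.norm(A.T @ (A @ x - b)))
```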
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for objectives without the Polyak-Łojasiewicz (PL) condition while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.