Learning High-Dimensional McKean-Vlasov Forward-Backward Stochastic
Differential Equations with General Distribution Dependence
- URL: http://arxiv.org/abs/2204.11924v3
- Date: Mon, 18 Sep 2023 18:25:23 GMT
- Title: Learning High-Dimensional McKean-Vlasov Forward-Backward Stochastic
Differential Equations with General Distribution Dependence
- Authors: Jiequn Han, Ruimeng Hu, Jihao Long
- Abstract summary: We propose a novel deep learning method for computing MV-FBSDEs with a general form of mean-field interactions.
We use deep neural networks to solve standard BSDEs and approximate coefficient functions in order to solve high-dimensional MV-FBSDEs.
- Score: 6.253771639590562
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the core problems in mean-field control and mean-field games is to
solve the corresponding McKean-Vlasov forward-backward stochastic differential
equations (MV-FBSDEs). Most existing methods are tailored to special cases in
which the mean-field interaction only depends on expectation or other moments
and are thus inadequate for problems in which the mean-field interaction has full
distribution dependence.
In this paper, we propose a novel deep learning method for computing
MV-FBSDEs with a general form of mean-field interactions. Specifically, building
on fictitious play, we recast the problem as repeatedly solving standard
FBSDEs with explicit coefficient functions. These coefficient functions are
used to approximate the MV-FBSDEs' model coefficients with full distribution
dependence, and are updated by solving another supervised learning problem
using training data simulated from the last iteration's FBSDE solutions. We use
deep neural networks to solve standard BSDEs and approximate coefficient
functions in order to solve high-dimensional MV-FBSDEs. Under proper
assumptions on the learned functions, we prove that the proposed method
converges free of the curse of dimensionality (CoD) by using a class
of integral probability metrics previously developed in [Han, Hu and Long,
arXiv:2104.12036]. The proved theorem shows the advantage of the method in high
dimensions. We present the numerical performance in high-dimensional MV-FBSDE
problems, including a mean-field game example of the well-known Cucker-Smale
model whose cost depends on the full distribution of the forward process.
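The fictitious-play idea above can be illustrated with a toy sketch. This is not the paper's deep BSDE solver; it is a hypothetical one-dimensional linear mean-field SDE dX = (a X + b E[X]) dt + sigma dW, where the unknown mean trajectory plays the role of the frozen distribution: freeze it, simulate the resulting standard SDE, then refit the mean from the simulated paths.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma, x0 = -1.0, 0.5, 0.3, 1.0   # made-up model coefficients
T, n_steps, n_paths = 1.0, 100, 5000
dt = T / n_steps

# Fictitious play: freeze the mean trajectory m_k, simulate the now-standard
# SDE dX = (a X + b m_k(t)) dt + sigma dW, then refit m_{k+1} from the paths.
m = np.full(n_steps + 1, x0)            # initial guess for t -> E[X_t]
for _ in range(8):
    X = np.full(n_paths, x0)
    new_m = [x0]
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + (a * X + b * m[i]) * dt + sigma * dW
        new_m.append(X.mean())          # "training data" from this iteration
    m = np.array(new_m)

# At the fixed point, dm/dt = (a + b) m, so m(T) = x0 * exp((a + b) T).
print(m[-1], x0 * np.exp((a + b) * T))
```

In the paper's setting the frozen object is the full law of the forward process (not just its mean) and both the decoupled FBSDE and the coefficient functions are approximated by neural networks, but the iterate-freeze-refit loop has the same shape.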
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for (L2), (Linfty), and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
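The classical idea underlying such coefficient estimation (not this paper's neural estimator) can be sketched for a 1D Ornstein-Uhlenbeck process with made-up parameters: regress Euler increments against the state to recover a linear drift, and use realized quadratic variation for the diffusion.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma = 2.0, 0.5                 # made-up ground-truth parameters
dt, n = 0.001, 200_000

# Simulate an OU path dX = -theta X dt + sigma dW via Euler-Maruyama.
x = np.empty(n + 1)
x[0] = 1.0
noise = rng.normal(0.0, np.sqrt(dt), n)
for i in range(n):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]

dx = np.diff(x)
# Drift: least-squares fit of increments against -x dt (linear drift assumed).
theta_hat = -np.sum(x[:-1] * dx) / (dt * np.sum(x[:-1] ** 2))
# Diffusion: realized quadratic variation estimates sigma^2.
sigma_hat = np.sqrt(np.sum(dx ** 2) / (n * dt))
print(theta_hat, sigma_hat)
```

The cited work replaces the linear least-squares fit with nonparametric estimators for multidimensional, controlled, non-uniformly diffusing dynamics, which is where its finite-sample bounds come in.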
arXiv Detail & Related papers (2024-11-04T11:09:58Z)
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning parameters.
We propose a new framework that leverages the intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Learning-based Multi-continuum Model for Multiscale Flow Problems [24.93423649301792]
We propose a learning-based multi-continuum model to enrich the homogenized equation and improve the accuracy of the single model for multiscale problems.
Our proposed learning-based multi-continuum model can resolve multiple interacting media within each coarse grid block and describe the mass transfer among them.
arXiv Detail & Related papers (2024-03-21T02:30:56Z)
- Deep Reinforcement Learning for Adaptive Mesh Refinement [0.9281671380673306]
We train policy networks for AMR strategy directly from numerical simulation.
The training process does not require an exact solution or a high-fidelity ground truth to the partial differential equation.
We show that the deep reinforcement learning policies are competitive with common AMR strategies, generalize well across problem classes, and strike a favorable balance between accuracy and cost.
arXiv Detail & Related papers (2022-09-25T23:45:34Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
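For contrast with the paper's approach, the baseline it improves on — estimating the mass a flow assigns to a closed region by sampling — can be sketched with a toy affine "flow" (diagonal scale, so the true answer factorizes; `mu`, `s`, and the box are made-up values).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "flow": an affine diffeomorphism x = mu + z * s with z ~ N(0, I).
mu = np.array([0.5, -0.2])
s = np.array([1.0, 0.8])

x = mu + rng.normal(size=(200_000, 2)) * s   # samples from the pushforward

# Naive Monte Carlo estimate of the probability mass inside a closed box.
lo, hi = np.array([0.0, -1.0]), np.array([1.0, 1.0])
p_hat = np.all((x >= lo) & (x <= hi), axis=1).mean()
print(round(p_hat, 3))
```

The cited paper instead exploits the diffeomorphic structure of the flow to evaluate such region probabilities more sample-efficiently than this brute-force estimator.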
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- A Bayesian Multiscale Deep Learning Framework for Flows in Random Media [0.0]
Fine-scale simulation of complex systems governed by multiscale partial differential equations (PDEs) is computationally expensive and various multiscale methods have been developed for addressing such problems.
In this work, we introduce a novel hybrid deep-learning and multiscale approach for multiscale PDEs with limited training data.
For demonstration purposes, we focus on a porous media flow problem. We use an image-to-image supervised deep learning model to learn the mapping between the input permeability field and the multiscale basis functions.
arXiv Detail & Related papers (2021-03-08T23:11:46Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same specialized solver.
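The flavor of such low-rank semidefinite relaxations can be sketched for the binary case, where MAP inference amounts to maximizing x^T A x over x in {-1,1}^n. This is a generic coordinate-ascent-plus-random-hyperplane-rounding sketch, not the paper's solver; `A`, `n`, and `k` are made-up.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 12, 6                      # variables, embedding dimension
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                 # symmetric pairwise coupling matrix
np.fill_diagonal(A, 0.0)

# Low-rank SDP relaxation of max_{x in {-1,1}^n} x^T A x: replace each
# binary x_i by a unit vector v_i and run block coordinate ascent.
V = rng.normal(size=(n, k))
V /= np.linalg.norm(V, axis=1, keepdims=True)
for _ in range(100):
    for i in range(n):
        g = A[i] @ V              # best unit v_i points along sum_j A_ij v_j
        nrm = np.linalg.norm(g)
        if nrm > 1e-12:
            V[i] = g / nrm

# Random-hyperplane rounding back to feasible binary assignments.
best = -np.inf
for _ in range(50):
    x = np.sign(V @ rng.normal(size=k))
    best = max(best, x @ A @ x)
print(best)
```

The multi-class extension in the paper generalizes this vector lifting beyond the binary case while keeping a relaxation compact enough for a specialized low-rank solver.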
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Multi-Fidelity High-Order Gaussian Processes for Physical Simulation [24.033468062984458]
High-fidelity partial differential equations (PDEs) are more expensive than low-fidelity ones.
We propose Multi-Fidelity High-Order Gaussian Process (MFHoGP) that can capture complex correlations.
MFHoGP propagates bases throughout fidelities to fuse information, and places a deep matrix GP prior over the basis weights.
arXiv Detail & Related papers (2020-06-08T22:31:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.