Deep Backward and Galerkin Methods for the Finite State Master Equation
- URL: http://arxiv.org/abs/2403.04975v1
- Date: Fri, 8 Mar 2024 01:12:11 GMT
- Title: Deep Backward and Galerkin Methods for the Finite State Master Equation
- Authors: Asaf Cohen, Mathieu Laurière, Ethan Zell
- Abstract summary: This paper proposes and analyzes two neural network methods to solve the master equation for finite-state mean field games.
We prove two types of results: there exist neural networks that make the algorithms' loss functions arbitrarily small, and conversely, if the losses are small, then the neural networks are good approximations of the master equation's solution.
- Score: 12.570464662548787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes and analyzes two neural network methods to solve the
master equation for finite-state mean field games (MFGs). Solving MFGs provides
approximate Nash equilibria for stochastic, differential games with finite but
large populations of agents. The master equation is a partial differential
equation (PDE) whose solution characterizes MFG equilibria for any possible
initial distribution. The first method we propose relies on backward induction
in a time component while the second method directly tackles the PDE without
discretizing time. For both approaches, we prove two types of results: there
exist neural networks that make the algorithms' loss functions arbitrarily
small, and conversely, if the losses are small, then the neural networks are
good approximations of the master equation's solution. We conclude the paper
with numerical experiments on benchmark problems from the literature up to
dimension 15, and a comparison with solutions computed by a classical method
for fixed initial distributions.
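To make the two approaches concrete, below is a minimal, illustrative sketch of the loss structures the abstract describes: a backward-in-time matching loss and a Galerkin-style residual loss for a value function U(t, x, m) on a finite state space. This is not the paper's implementation; the network architecture, the state count `d`, and the placeholders `one_step_target`, `master_residual`, and `terminal_cost` (which would encode the actual master-equation dynamics) are assumptions made purely for illustration.

```python
# Illustrative sketch only: structure of a deep-backward loss and a deep-Galerkin
# loss for a finite-state master equation. The operators passed in as arguments
# are hypothetical placeholders, not the paper's definitions.
import torch
import torch.nn as nn

d = 3  # number of finite states (illustrative choice, not from the paper)

class MasterNet(nn.Module):
    """Approximates U_theta(t, m) in R^d: one value per finite state x."""
    def __init__(self, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + d, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, d),
        )

    def forward(self, t: torch.Tensor, m: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([t, m], dim=-1))

def sample_simplex(n: int) -> torch.Tensor:
    """Sample n distributions m from the d-simplex (Dirichlet(1,...,1))."""
    e = -torch.log(torch.rand(n, d))
    return e / e.sum(dim=-1, keepdim=True)

def backward_step_loss(U_n, U_next, t_n, dt, one_step_target, batch=256):
    """Backward-induction flavour: at grid time t_n, fit U_n to a one-step
    target built from the already-trained U_{n+1}. The hypothetical
    `one_step_target` stands in for the master-equation time step."""
    m = sample_simplex(batch)
    t = torch.full((batch, 1), t_n)
    target = one_step_target(U_next, t, m, dt).detach()
    return torch.mean((U_n(t, m) - target) ** 2)

def galerkin_loss(U, T, master_residual, terminal_cost, batch=256):
    """Galerkin flavour: penalise the PDE residual at random (t, m) plus the
    terminal condition at time T, with no time discretization.
    `master_residual` and `terminal_cost` are user-supplied placeholders."""
    t = T * torch.rand(batch, 1)
    m = sample_simplex(batch)
    residual = master_residual(U, t, m)          # shape (batch, d)
    t_T = torch.full((batch, 1), T)
    terminal_gap = U(t_T, m) - terminal_cost(m)  # shape (batch, d)
    return torch.mean(residual ** 2) + torch.mean(terminal_gap ** 2)
```

In this reading, the backward method propagates information from the terminal condition through a time grid, while the Galerkin method trains a single network over continuous time against the full residual; the paper's theoretical results relate small values of such losses to the approximation quality of the master equation's solution.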
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z) - Solving partial differential equations with sampled neural networks [1.8590821261905535]
Approximation of solutions to partial differential equations (PDE) is an important problem in computational science and engineering.
We discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to progress on both challenges.
arXiv Detail & Related papers (2024-05-31T14:24:39Z) - Deep Learning for Mean Field Games with non-separable Hamiltonians [0.0]
This paper introduces a new method for solving high-dimensional Mean Field Games (MFGs).
We achieve this by using two neural networks to approximate the unknown solutions of the MFG system and forward-backward conditions.
Our method is efficient, even with a small number of iterations, and is capable of handling up to 300 dimensions with a single layer.
arXiv Detail & Related papers (2023-01-07T15:39:48Z) - Physics-Informed Neural Network Method for Parabolic Differential
Equations with Sharply Perturbed Initial Conditions [68.8204255655161]
We develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition.
Localized large gradients in the advection-dispersion equation (ADE) solution make the Latin hypercube sampling of the equation's residual, commonly used in PINN training, highly inefficient.
We propose criteria for the weights in the loss function that produce a more accurate PINN solution than weights selected via other methods.
arXiv Detail & Related papers (2022-08-18T05:00:24Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - An application of the splitting-up method for the computation of a
neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z) - Solving Partial Differential Equations with Point Source Based on
Physics-Informed Neural Networks [33.18757454787517]
In recent years, deep learning technology has been used to solve partial differential equations (PDEs).
However, existing methods struggle when the PDE contains a point source expressed as a Dirac delta function; we propose a universal solution to this problem with three novel techniques.
We evaluate the proposed method with three representative PDEs, and the experimental results show that our method outperforms existing deep learning-based methods with respect to the accuracy, the efficiency and the versatility.
arXiv Detail & Related papers (2021-11-02T06:39:54Z) - Solving PDEs on Unknown Manifolds with Machine Learning [8.220217498103315]
This paper presents a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds.
We show that the proposed NN solver can robustly generalize the PDE solution to new data points, with generalization errors almost identical to the training errors.
arXiv Detail & Related papers (2021-06-12T03:55:15Z) - A semigroup method for high dimensional elliptic PDEs and eigenvalue
problems based on neural networks [1.52292571922932]
We propose a semigroup computation method for solving high-dimensional elliptic partial differential equations (PDEs) and the associated eigenvalue problems based on neural networks.
For the PDE problems, we reformulate the original equations as variational problems with the help of semigroup operators and then solve the variational problems with neural network (NN) parameterization.
For eigenvalue problems, a primal-dual method is proposed, resolving the constraint with a scalar dual variable.
arXiv Detail & Related papers (2021-05-07T19:49:06Z)