Deep Learning for Mean Field Games with non-separable Hamiltonians
- URL: http://arxiv.org/abs/2301.02877v2
- Date: Tue, 18 Jul 2023 01:34:22 GMT
- Title: Deep Learning for Mean Field Games with non-separable Hamiltonians
- Authors: Mouhcine Assouli and Badr Missaoui
- Abstract summary: This paper introduces a new method for solving high-dimensional Mean Field Games (MFGs).
We achieve this by using two neural networks to approximate the unknown solutions of the MFG system and its forward-backward conditions.
Our method is efficient even with a small number of iterations, and can handle up to 300 dimensions with a single hidden layer.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a new method based on Deep Galerkin Methods (DGMs) for
solving high-dimensional stochastic Mean Field Games (MFGs). We achieve this by
using two neural networks to approximate the unknown solutions of the MFG
system and its forward-backward conditions. Our method is efficient, even with a
small number of iterations, and is capable of handling up to 300 dimensions
with a single layer, which makes it faster than other approaches. In contrast,
methods based on Generative Adversarial Networks (GANs) cannot solve MFGs with
non-separable Hamiltonians. We demonstrate the effectiveness of our approach by
applying it to a traffic flow problem, which was previously solved using the
Newton iteration method only in the deterministic case. We compare the results
of our method to analytical solutions and previous approaches, showing its
efficiency. We also prove the convergence of our neural network approximation
with a single hidden layer using the universal approximation theorem.
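As a rough illustration of the approach described in the abstract, the sketch below sets up a Deep-Galerkin-style residual loss with two networks, one for the value function u and one for the density m, for a generic stochastic MFG system with a non-separable Hamiltonian. This is a minimal sketch under our own assumptions, not the authors' code: the architecture, the toy Hamiltonian H(x, p, m) = |p|^2 / (2(1 + m)), the diffusion coefficient nu, and all names are illustrative.

```python
# Minimal sketch (not the authors' implementation): a DGM-style residual loss
# for a stochastic MFG system with a non-separable Hamiltonian H(x, p, m),
# using two networks u_theta(t, x) and m_phi(t, x), as the abstract describes.
# The architecture, the toy Hamiltonian, and nu are illustrative assumptions.
import torch
import torch.nn as nn

class Net(nn.Module):
    """Single-hidden-layer network taking (t, x) and returning a scalar per sample."""
    def __init__(self, dim, width=128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim + 1, width), nn.Tanh(),
                               nn.Linear(width, 1))

    def forward(self, t, x):
        return self.f(torch.cat([t, x], dim=-1))

def grad(y, x):
    """Gradient of a scalar-per-sample output y with respect to x."""
    return torch.autograd.grad(y.sum(), x, create_graph=True)[0]

def mfg_residual_loss(u_net, m_net, t, x, nu=0.1):
    """PDE residuals of the HJB / Fokker-Planck system at sampled leaf tensors t (N,1), x (N,d)."""
    t, x = t.requires_grad_(True), x.requires_grad_(True)
    d = x.shape[-1]
    u, m = u_net(t, x), m_net(t, x)
    u_t, u_x = grad(u, t), grad(u, x)
    m_t, m_x = grad(m, t), grad(m, x)

    # Toy non-separable Hamiltonian: H(x, p, m) = |p|^2 / (2 (1 + m)).
    H = (u_x ** 2).sum(-1, keepdim=True) / (2.0 * (1.0 + m))
    H_p = u_x / (1.0 + m)  # dH/dp drives the drift in the Fokker-Planck equation

    # Laplacians and divergence via a second autograd pass (diagonal Hessian terms).
    lap_u = sum(grad(u_x[:, i:i + 1], x)[:, i:i + 1] for i in range(d))
    lap_m = sum(grad(m_x[:, i:i + 1], x)[:, i:i + 1] for i in range(d))
    div_mHp = sum(grad((m * H_p)[:, i:i + 1], x)[:, i:i + 1] for i in range(d))

    hjb = -u_t - nu * lap_u + H          # backward HJB residual
    fp = m_t - nu * lap_m - div_mHp      # forward Fokker-Planck residual
    return (hjb ** 2).mean() + (fp ** 2).mean()
```

In the paper, the full loss would also include terms enforcing the terminal condition on u and the initial density on m (the forward-backward conditions), with both networks trained jointly over sampled space-time points; those terms and the sampling loop are omitted here.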
Related papers
- Deep Backward and Galerkin Methods for the Finite State Master Equation [12.570464662548787]
This paper proposes and analyzes two neural network methods to solve the master equation for finite-state mean field games.
We prove two types of results: there exist neural networks that make the algorithms' loss functions arbitrarily small, and conversely, if the losses are small, then the neural networks are good approximations of the master equation's solution.
arXiv Detail & Related papers (2024-03-08T01:12:11Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - $r$-Adaptive Deep Learning Method for Solving Partial Differential
Equations [0.685316573653194]
We introduce an $r$-adaptive algorithm to solve Partial Differential Equations using a Deep Neural Network.
The proposed method is restricted to tensor product meshes and optimizes the boundary node locations in one dimension, from which we build two- or three-dimensional meshes.
arXiv Detail & Related papers (2022-10-19T21:38:46Z) - Conjugate Gradient Method for Generative Adversarial Networks [0.0]
It is not feasible to directly compute the Jensen-Shannon divergence between the density function of the data and the density function of a deep neural network model.
Generative adversarial networks (GANs) can be used to formulate this problem as a discriminative problem with two models, a generator and a discriminator.
We propose to apply the conjugate gradient method to solve the local Nash equilibrium problem in GANs.
arXiv Detail & Related papers (2022-03-28T04:44:45Z) - Scalable Deep Reinforcement Learning Algorithms for Mean Field Games [60.550128966505625]
Mean Field Games (MFGs) have been introduced to efficiently approximate games with very large populations of strategic agents.
Recently, the question of learning equilibria in MFGs has gained momentum, particularly using model-free reinforcement learning (RL) methods.
Existing algorithms to solve MFGs require the mixing of approximated quantities such as strategies or $q$-values.
We propose two methods to address this shortcoming. The first one learns a mixed strategy from distillation of historical data into a neural network and is applied to the Fictitious Play algorithm.
The second one is an online mixing method.
arXiv Detail & Related papers (2022-03-22T18:10:32Z) - A Mini-Block Natural Gradient Method for Deep Neural Networks [12.48022619079224]
We propose and analyze the convergence of an approximate natural gradient method, mini-block Fisher (MBF).
Our approach exploits parallelism to efficiently perform computations on the large number of matrices in each layer.
arXiv Detail & Related papers (2022-02-08T20:01:48Z) - An application of the splitting-up method for the computation of a
neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximating the solution of the filtering equations is to use a PDE-inspired method called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z) - Scaling Up Bayesian Uncertainty Quantification for Inverse Problems
using Deep Neural Networks [2.455468619225742]
We propose a novel CES approach for Bayesian inference based on deep neural network (DNN) models for the emulation phase.
The resulting algorithm is not only computationally more efficient, but also less sensitive to the training set.
Overall, our method, henceforth called the Dimension-Reduced Emulative Autoencoder Monte Carlo (DREAM) algorithm, is able to scale Bayesian UQ up to thousands of dimensions in physics-constrained inverse problems.
arXiv Detail & Related papers (2021-01-11T14:18:38Z) - Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z) - Disentangling the Gauss-Newton Method and Approximate Inference for
Neural Networks [96.87076679064499]
We disentangle the generalized Gauss-Newton and approximate inference for Bayesian deep learning.
We find that the Gauss-Newton method simplifies the underlying probabilistic model significantly.
The connection to Gaussian processes enables new function-space inference algorithms.
arXiv Detail & Related papers (2020-07-21T17:42:58Z) - Effective Version Space Reduction for Convolutional Neural Networks [61.84773892603885]
In active learning, sampling bias could pose a serious inconsistency problem and hinder the algorithm from finding the optimal hypothesis.
We examine active learning with convolutional neural networks through the principled lens of version space reduction.
arXiv Detail & Related papers (2020-06-22T17:40:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.