Alternating the Population and Control Neural Networks to Solve
High-Dimensional Stochastic Mean-Field Games
- URL: http://arxiv.org/abs/2002.10113v4
- Date: Fri, 14 Jul 2023 05:35:11 GMT
- Title: Alternating the Population and Control Neural Networks to Solve
High-Dimensional Stochastic Mean-Field Games
- Authors: Alex Tong Lin, Samy Wu Fung, Wuchen Li, Levon Nurbekyan, Stanley J.
Osher
- Abstract summary: We present an alternating population and agent control neural network for solving mean field games (MFGs).
Our algorithm is geared toward high-dimensional instances of MFGs that are beyond reach with existing solution methods.
We show the potential of our method on up to 100-dimensional MFG problems.
- Score: 9.909883019034613
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present APAC-Net, an alternating population and agent control neural
network for solving stochastic mean field games (MFGs). Our algorithm is geared
toward high-dimensional instances of MFGs that are beyond reach with existing
solution methods. We achieve this in two steps. First, we take advantage of the
underlying variational primal-dual structure that MFGs exhibit and phrase it as
a convex-concave saddle point problem. Second, we parameterize the value and
density functions by two neural networks, respectively. By phrasing the problem
in this manner, solving the MFG can be interpreted as a special case of
training a generative adversarial network (GAN). We show the potential of our
method on up to 100-dimensional MFG problems.
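
To make the GAN analogy concrete, here is a minimal PyTorch sketch of the alternating training loop the abstract describes: one network plays the value function and the other generates population samples, and the two are updated in turn on a saddle-point objective. The objective below is a simple stand-in for illustration only, not the paper's MFG Lagrangian; all dimensions, architectures, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

dim = 100                                                                      # assumed state dimension
phi = nn.Sequential(nn.Linear(dim + 1, 128), nn.Tanh(), nn.Linear(128, 1))    # value network phi(t, x)
gen = nn.Sequential(nn.Linear(dim + 1, 128), nn.Tanh(), nn.Linear(128, dim))  # population sampler
opt_phi = torch.optim.Adam(phi.parameters(), lr=1e-4)
opt_gen = torch.optim.Adam(gen.parameters(), lr=1e-4)

def objective(batch=256):
    # Stand-in saddle-point objective: value on generated agents vs. value on
    # samples from an assumed Gaussian initial density, plus a quadratic running cost.
    t = torch.rand(batch, 1)                          # random times in [0, 1]
    z = torch.randn(batch, dim)                       # latent noise
    x = gen(torch.cat([t, z], dim=1))                 # agents drawn from the generated density
    x0 = torch.randn(batch, dim)                      # samples from the assumed initial density
    return (phi(torch.cat([t, x], dim=1)).mean()
            - phi(torch.cat([torch.zeros(batch, 1), x0], dim=1)).mean()
            - 0.5 * (x ** 2).mean())

for step in range(1000):
    # ascent step for the population network (descent on the negated objective) ...
    loss_gen = -objective()
    opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()
    # ... followed by a descent step for the value network, exactly as in GAN training
    loss_phi = objective()
    opt_phi.zero_grad(); loss_phi.backward(); opt_phi.step()
```

In the paper itself the objective comes from the variational primal-dual formulation of the MFG; only the alternating loop structure is illustrated here.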
Related papers
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., conditions under which near-zero training loss is achieved as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the increase in the network width.
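
As a hedged illustration of the unfolding idea (a generic LISTA-style sketch, not the paper's exact architecture), each layer below performs one ISTA step with learnable step sizes and a smooth, softplus-based surrogate of soft-thresholding:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def smooth_soft_threshold(x, lam, beta=10.0):
    # softplus-based smooth surrogate of soft(x) = sign(x) * max(|x| - lam, 0)
    return F.softplus(x - lam, beta=beta) - F.softplus(-x - lam, beta=beta)

class UnfoldedISTA(nn.Module):
    """K ISTA iterations for min_x 0.5*||Ax - y||^2 + lam*||x||_1, unrolled as K layers."""
    def __init__(self, A, n_layers=10, lam=0.1):
        super().__init__()
        self.register_buffer("A", A)
        L = torch.linalg.matrix_norm(A, ord=2).item() ** 2              # Lipschitz constant of the smooth part
        self.step = nn.Parameter(torch.full((n_layers,), 1.0 / L))      # learnable step sizes
        self.thresh = nn.Parameter(torch.full((n_layers,), lam / L))    # learnable thresholds
        self.n_layers = n_layers

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1])
        for k in range(self.n_layers):
            grad = (x @ self.A.T - y) @ self.A                          # gradient of 0.5*||Ax - y||^2
            x = smooth_soft_threshold(x - self.step[k] * grad, self.thresh[k])
        return x

# usage with made-up sizes: 20 measurements, 50 unknowns, batch of 8
A = torch.randn(20, 50) / 20 ** 0.5
x_hat = UnfoldedISTA(A)(torch.randn(8, 20))
```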
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- Deep Learning for Mean Field Games with non-separable Hamiltonians [0.0]
This paper introduces a new method for solving high-dimensional Mean Field Games (MFGs).
We achieve this by using two neural networks to approximate the unknown solutions of the MFG system and forward-backward conditions.
Our method is efficient, even with a small number of iterations, and is capable of handling up to 300 dimensions with a single layer.
arXiv Detail & Related papers (2023-01-07T15:39:48Z)
- Bridging Mean-Field Games and Normalizing Flows with Trajectory Regularization [11.517089115158225]
Mean-field games (MFGs) are a modeling framework for systems with a large number of interacting agents.
Normalizing flows (NFs) are a family of deep generative models that compute data likelihoods by using an invertible mapping.
In this work, we unravel the connections between MFGs and NFs by contextualizing the training of an NF as solving the MFG.
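
The invertible-mapping mechanism can be made concrete with a single RealNVP-style coupling layer: the data log-likelihood is the base Gaussian log-density at the mapped point plus the log-determinant of the Jacobian. This is a generic sketch, not the paper's trajectory-regularized flow.

```python
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer: invertible with a tractable log-det Jacobian."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(nn.Linear(self.d, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * (dim - self.d)))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(x1).chunk(2, dim=1)              # scale and shift depend only on x1
        z2 = x2 * torch.exp(s) + t                       # invertible: x2 = (z2 - t) * exp(-s)
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)  # log|det dz/dx| = sum(s)

def log_likelihood(flow, x):
    # change of variables: log p(x) = log N(z; 0, I) + log|det dz/dx|
    z, log_det = flow(x)
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    return log_pz + log_det

ll = log_likelihood(AffineCoupling(4), torch.randn(16, 4))  # made-up data
```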
arXiv Detail & Related papers (2022-06-30T02:44:39Z)
- On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- Sharp asymptotics on the compression of two-layer neural networks [19.683271092724937]
We study the compression of a target two-layer neural network with N nodes into a compressed network with M < N nodes.
We conjecture that the optimum of the simplified optimization problem is achieved by taking weights on the Equiangular Tight Frame (ETF).
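
For reference (background, not code from the paper): a simplex equiangular tight frame consists of M unit vectors whose pairwise inner products all equal -1/(M-1), and it can be constructed in a few lines:

```python
import torch

def simplex_etf(m):
    # m unit-norm columns with identical pairwise inner products of -1/(m-1):
    # center the columns of the identity, then normalize
    v = torch.eye(m) - torch.full((m, m), 1.0 / m)
    return v / v.norm(dim=0, keepdim=True)

W = simplex_etf(5)
print(W.T @ W)  # ones on the diagonal, -0.25 everywhere else
```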
arXiv Detail & Related papers (2022-05-17T09:45:23Z)
- Scalable Deep Reinforcement Learning Algorithms for Mean Field Games [60.550128966505625]
Mean Field Games (MFGs) have been introduced to efficiently approximate games with very large populations of strategic agents.
Recently, the question of learning equilibria in MFGs has gained momentum, particularly using model-free reinforcement learning (RL) methods.
Existing algorithms to solve MFGs require the mixing of approximated quantities such as strategies or $q$-values.
We propose two methods to address this shortcoming. The first one learns a mixed strategy from distillation of historical data into a neural network and is applied to the Fictitious Play algorithm.
The second one is an online mixing method based on regularization.
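
A sketch of the distillation idea behind the first method, under the assumption of a uniform mixture over past iterates; the network, sizes, and training details are illustrative, not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Maps a state feature vector to log-probabilities over actions."""
    def __init__(self, state_dim=10, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, s):
        return torch.log_softmax(self.net(s), dim=-1)

def distill_average_policy(avg_policy, past_policies, states, epochs=200, lr=1e-3):
    """Fit avg_policy to the uniform mixture of past_policies on the sampled states."""
    opt = torch.optim.Adam(avg_policy.parameters(), lr=lr)
    with torch.no_grad():
        # target: uniform mixture of the historical policies' action distributions
        target = torch.stack([p(states).exp() for p in past_policies]).mean(dim=0)
    for _ in range(epochs):
        loss = F.kl_div(avg_policy(states), target, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()
    return avg_policy

# usage with made-up data: distill three past iterates into a single network
states = torch.randn(256, 10)
avg = distill_average_policy(PolicyNet(), [PolicyNet() for _ in range(3)], states)
```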
arXiv Detail & Related papers (2022-03-22T18:10:32Z)
- Concave Utility Reinforcement Learning: the Mean-field Game viewpoint [42.403650997341806]
Concave Utility Reinforcement Learning (CURL) extends RL from linear to concave utilities in the occupancy measure induced by the agent's policy.
This more general paradigm invalidates the classical Bellman equations, and calls for new algorithms.
We show that CURL is a subclass of Mean-field Games (MFGs).
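
A minimal numerical contrast between the classical linear objective and a concave utility of the occupancy measure (all numbers made up):

```python
import torch

d = torch.tensor([0.5, 0.3, 0.2])   # occupancy measure induced by a policy (made up)
r = torch.tensor([1.0, 0.2, 0.0])   # per-state reward

linear_objective = d @ r                    # classical RL objective: <d, r>
concave_objective = -(d * d.log()).sum()    # CURL-style utility: entropy of the occupancy
```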
arXiv Detail & Related papers (2021-06-07T16:51:07Z)
- Mean Field Game GAN [55.445402222849474]
We propose a novel GAN (generative adversarial network) framework based on mean field games (MFGs).
We utilize the Hopf formula in density space to rewrite MFGs as a primal-dual problem so that we are able to train the model via neural networks and samples.
arXiv Detail & Related papers (2021-03-14T06:34:38Z)
- Regressive Domain Adaptation for Unsupervised Keypoint Detection [67.2950306888855]
Domain adaptation (DA) aims at transferring knowledge from a labeled source domain to an unlabeled target domain.
We present a method of regressive domain adaptation (RegDA) for unsupervised keypoint detection.
Our method brings large improvements of 8% to 11% in terms of PCK on different datasets.
arXiv Detail & Related papers (2021-03-10T16:45:22Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques, GNNs over dense fixed graphs for low-dimensional problems and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum and a six DoF robot arm show GNNs improve on traditional analytic methods as well as learning approaches using fully-connected or convolutional neural networks.
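
As a rough illustration of the first technique (a GNN over a dense fixed graph), the sketch below scores every node of a small graph with normalized neighborhood aggregation; the architecture, features, and the "criticality" interpretation are assumptions, not the paper's models.

```python
import torch
import torch.nn as nn

class NodeScorer(nn.Module):
    """Scores each graph node (e.g. how 'critical' it is for planning) in (0, 1)."""
    def __init__(self, in_dim=2, hidden=32):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, 1)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.shape[0])                  # add self-loops
        d = a.sum(dim=1)
        a_norm = a / torch.sqrt(d[:, None] * d[None, :])   # symmetric normalization
        h = torch.relu(a_norm @ self.lin1(x))              # one round of neighborhood aggregation
        return torch.sigmoid(self.lin2(a_norm @ h)).squeeze(-1)

# usage: five 2-D waypoints connected in a ring
xy = torch.rand(5, 2)
adj = torch.tensor([[0, 1, 0, 0, 1],
                    [1, 0, 1, 0, 0],
                    [0, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [1, 0, 0, 1, 0]], dtype=torch.float32)
scores = NodeScorer()(xy, adj)
```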
arXiv Detail & Related papers (2020-06-11T08:19:06Z)
- Connecting GANs, MFGs, and OT [4.530876736231948]
Generative adversarial networks (GANs) have enjoyed tremendous success in image generation and processing.
This paper analyzes GANs from the perspectives of mean-field games (MFGs) and optimal transport.
arXiv Detail & Related papers (2020-02-10T22:14:30Z)