Mean Field Games on Weighted and Directed Graphs via Colored Digraphons
- URL: http://arxiv.org/abs/2209.03887v1
- Date: Thu, 8 Sep 2022 15:45:20 GMT
- Title: Mean Field Games on Weighted and Directed Graphs via Colored Digraphons
- Authors: Christian Fabian, Kai Cui, Heinz Koeppl
- Abstract summary: Graphon mean field games (GMFGs) provide a scalable and mathematically well-founded approach to learning problems.
Our paper introduces colored digraphon mean field games (CDMFGs) which allow for weighted and directed links between agents.
- Score: 26.405495663998828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of multi-agent reinforcement learning (MARL) has made considerable
progress towards controlling challenging multi-agent systems by employing
various learning methods. Many of these approaches focus on empirical and
algorithmic aspects of the MARL problems and lack a rigorous theoretical
foundation. Graphon mean field games (GMFGs) on the other hand provide a
scalable and mathematically well-founded approach to learning problems that
involve a large number of connected agents. In standard GMFGs, the connections
between agents are undirected, unweighted and invariant over time. Our paper
introduces colored digraphon mean field games (CDMFGs) which allow for weighted
and directed links between agents that are also adaptive over time. Thus,
CDMFGs are able to model more complex connections than standard GMFGs. Besides
a rigorous theoretical analysis including both existence and convergence
guarantees, we provide a learning scheme and illustrate our findings with an
epidemic model and a model of systemic risk in financial markets.
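As a rough illustration of the graphon setting described above (not the paper's learning scheme), a graphon is a kernel $W: [0,1]^2 \to [0,1]$ from which finite random graphs can be sampled; a digraphon drops the symmetry requirement on $W$, so sampled edges are directed. A minimal sketch, assuming a hypothetical asymmetric kernel `W`:

```python
import random

def sample_digraph(W, n, seed=0):
    """Sample an n-node directed graph from a digraphon W: [0,1]^2 -> [0,1].

    Each node i gets a latent position u_i ~ Uniform[0,1]; a directed edge
    i -> j is drawn independently with probability W(u_i, u_j). Because W
    need not be symmetric, the adjacency matrix is generally asymmetric.
    """
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    return [[1 if (i != j and rng.random() < W(u[i], u[j])) else 0
             for j in range(n)]
            for i in range(n)]

# Hypothetical asymmetric kernel: edge probability depends on direction.
W = lambda x, y: 0.8 * x * (1 - y)
A = sample_digraph(W, n=5)
```

A weighted variant would return `W(u[i], u[j])` itself as the edge weight instead of thresholding against a uniform draw.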
Related papers
- Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL [57.745700271150454]
We study the sample complexity of reinforcement learning in Mean-Field Games (MFGs) with model-based function approximation.
We introduce the Partial Model-Based Eluder Dimension (P-MBED), a more effective notion to characterize the model class complexity.
arXiv Detail & Related papers (2024-02-08T14:54:47Z) - Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach [31.82185019324094]
Mean Field Games (MFGs) can be extended to Graphon MFGs (GMFGs) to include network structures between agents.
We introduce the novel concept of Graphex MFGs which builds on the graph theoretical concept of graphexes.
This hybrid graphex learning approach leverages that the system mainly consists of a highly connected core and a sparse periphery.
arXiv Detail & Related papers (2024-01-23T11:52:00Z) - Learning Discrete-Time Major-Minor Mean Field Games [61.09249862334384]
We propose a novel discrete time version of major-minor MFGs (M3FGs) and a learning algorithm based on fictitious play and partitioning the probability simplex.
M3FGs generalize MFGs with common noise and can handle not only random exogenous environment states but also major players.
arXiv Detail & Related papers (2023-12-17T18:22:08Z) - Learning Sparse Graphon Mean Field Games [26.405495663998828]
Graphon mean field games (GMFGs) enable the scalable analysis of MARL problems that are otherwise intractable.
Our paper introduces a novel formulation of GMFGs, called LPGMFGs, which leverages the graph theoretical concept of $L^p$ graphons.
This especially includes power law networks which are empirically observed in various application areas and cannot be captured by standard graphons.
arXiv Detail & Related papers (2022-09-08T15:35:42Z) - Bridging Mean-Field Games and Normalizing Flows with Trajectory Regularization [11.517089115158225]
Mean-field games (MFGs) are a modeling framework for systems with a large number of interacting agents.
Normalizing flows (NFs) are a family of deep generative models that compute data likelihoods by using an invertible mapping.
In this work, we unravel the connections between MFGs and NFs by contextualizing the training of an NF as solving the MFG.
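The likelihood computation mentioned for normalizing flows is the standard change-of-variables formula (not specific to this paper): for an invertible map $f$ with $z = f(x)$, $\log p_X(x) = \log p_Z(f(x)) + \log \lvert \det J_f(x) \rvert$. A minimal one-dimensional sketch with a hypothetical affine flow and a standard-normal base density:

```python
import math

def affine_flow_logpdf(x, scale=2.0, shift=1.0):
    """Log-likelihood of x under the 1-D affine flow z = (x - shift) / scale.

    Change of variables adds the log-Jacobian log|dz/dx| = -log(scale)
    to the standard-normal base log-density evaluated at z.
    """
    z = (x - shift) / scale
    log_base = -0.5 * (z * z + math.log(2 * math.pi))  # log N(z; 0, 1)
    log_det = -math.log(scale)
    return log_base + log_det
```

For this affine case the result coincides with the log-density of a Normal(shift, scale^2) distribution, which makes the formula easy to sanity-check.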
arXiv Detail & Related papers (2022-06-30T02:44:39Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Reinforcement Learning for Mean Field Games, with Applications to Economics [0.0]
Mean field games (MFG) and mean field control problems (MFC) are frameworks to study Nash equilibria or social optima in games with a continuum of agents.
We present a two timescale approach with RL for MFG and MFC, which relies on a unified Q-learning algorithm.
arXiv Detail & Related papers (2021-06-25T16:45:04Z) - Learning Gaussian Graphical Models with Latent Confounders [74.72998362041088]
We compare and contrast two strategies for inference in graphical models with latent confounders.
While these two approaches have similar goals, they are motivated by different assumptions about confounding.
We propose a new method, which combines the strengths of these two approaches.
arXiv Detail & Related papers (2021-05-14T00:53:03Z) - Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice [62.58588499193303]
Multi-agent influence diagrams (MAIDs) are a popular form of graphical model that, for certain classes of games, have been shown to offer key complexity and explainability advantages over traditional extensive form game (EFG) representations.
We extend previous work on MAIDs by introducing the concept of a MAID subgame, as well as subgame perfect and hand perfect equilibrium refinements.
arXiv Detail & Related papers (2021-02-09T18:20:50Z) - Global Convergence of Policy Gradient for Linear-Quadratic Mean-Field Control/Game in Continuous Time [109.06623773924737]
We study the policy gradient method for the linear-quadratic mean-field control and game.
We show that it converges to the optimal solution at a linear rate, which is verified by a synthetic simulation.
arXiv Detail & Related papers (2020-08-16T06:34:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.