Learning Sparse Graphon Mean Field Games
- URL: http://arxiv.org/abs/2209.03880v1
- Date: Thu, 8 Sep 2022 15:35:42 GMT
- Title: Learning Sparse Graphon Mean Field Games
- Authors: Christian Fabian, Kai Cui, Heinz Koeppl
- Abstract summary: Graphon mean field games (GMFGs) enable the scalable analysis of MARL problems that are otherwise intractable.
Our paper introduces a novel formulation of GMFGs, called LPGMFGs, which leverages the graph theoretical concept of $L^p$ graphons.
This especially includes power law networks which are empirically observed in various application areas and cannot be captured by standard graphons.
- Score: 26.405495663998828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although the field of multi-agent reinforcement learning (MARL) has made
considerable progress in recent years, solving systems with a large number of
agents remains a hard challenge. Graphon mean field games (GMFGs) enable the
scalable analysis of MARL problems that are otherwise intractable. Due to the
mathematical structure of graphons, this approach is limited to dense graphs,
which are insufficient to describe many real-world networks such as power law
graphs. Our paper introduces a novel formulation of GMFGs, called LPGMFGs,
which leverages the graph theoretical concept of $L^p$ graphons and provides a
machine learning tool to efficiently and accurately approximate solutions for
sparse network problems. This especially includes power law networks which are
empirically observed in various application areas and cannot be captured by
standard graphons. We derive theoretical existence and convergence guarantees
and give empirical examples that demonstrate the accuracy of our learning
approach for systems with many agents. Furthermore, we rigorously extend the
Online Mirror Descent (OMD) learning algorithm to our setup to accelerate
learning speed, allow for agent interaction through the mean field in the
transition kernel, and empirically show its capabilities. In general, we
provide a scalable, mathematically well-founded machine learning approach to a
large class of otherwise intractable problems of great relevance in numerous
research fields.
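The abstract's core loop, alternating forward propagation of the mean field with Online Mirror Descent (OMD) policy updates, can be illustrated on a toy problem. The sketch below is not the paper's LPGMFG algorithm: it is a minimal OMD loop for a plain finite mean field game, with made-up deterministic dynamics and an illustrative crowd-aversion reward.

```python
import numpy as np

# Hypothetical toy setup: 2 states, 2 actions, finite horizon. The dynamics
# and reward are illustrative assumptions, not from the paper.
n_states, n_actions, horizon = 2, 2, 10

def transition(s, a):
    # toy deterministic dynamics: the chosen action is the next state
    return a

def reward(s, a, mu):
    # crowd-aversion: occupying a crowded state is penalized
    return -mu[s]

def mean_field(policy, mu0):
    # forward propagation of the state distribution under the policy
    mus, mu = [mu0], mu0
    for t in range(horizon):
        new_mu = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                new_mu[transition(s, a)] += mu[s] * policy[t, s, a]
        mu = new_mu
        mus.append(mu)
    return mus

def q_values(policy, mus):
    # backward dynamic programming for the Q-function given the mean field
    Q = np.zeros((horizon, n_states, n_actions))
    V = np.zeros(n_states)
    for t in reversed(range(horizon)):
        for s in range(n_states):
            for a in range(n_actions):
                Q[t, s, a] = reward(s, a, mus[t]) + V[transition(s, a)]
        V = np.einsum('sa,sa->s', policy[t], Q[t])
    return Q

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# OMD: accumulate Q-values over iterations and play their softmax
mu0 = np.array([1.0, 0.0])
cum_Q = np.zeros((horizon, n_states, n_actions))
policy = np.full((horizon, n_states, n_actions), 1.0 / n_actions)
lr = 0.5
for it in range(200):
    mus = mean_field(policy, mu0)
    cum_Q += lr * q_values(policy, mus)
    policy = softmax(cum_Q)

# with crowd-averse rewards, agents spread out toward a uniform distribution
print(np.round(mean_field(policy, mu0)[-1], 2))
```

Accumulating Q-values before the softmax (rather than using only the latest Q) is what distinguishes OMD from plain fixed-point iteration and is what typically stabilizes learning in mean field games.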
Related papers
- Scalable and Accurate Graph Reasoning with LLM-based Multi-Agents [27.4884498301785]
We introduce GraphAgent-Reasoner, a fine-tuning-free framework for explicit and precise graph reasoning.
Inspired by distributed graph computation theory, our framework decomposes graph problems into smaller, node-centric tasks that are distributed among multiple agents.
Our framework demonstrates the capability to handle real-world graph reasoning applications such as webpage importance analysis.
arXiv Detail & Related papers (2024-10-07T15:34:14Z) - A General Framework for Learning from Weak Supervision [93.89870459388185]
This paper introduces a general framework for learning from weak supervision (GLWS) with a novel algorithm.
Central to GLWS is an Expectation-Maximization (EM) formulation, adeptly accommodating various weak supervision sources.
We also present an advanced algorithm that significantly simplifies the EM computational demands.
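As a rough illustration of the EM idea behind such weak-supervision frameworks (not the GLWS algorithm itself), consider partial labels where each sample carries a candidate set containing its true class: the E-step infers a posterior restricted to the candidates, the M-step re-estimates parameters from the soft assignments. The Gaussian model and all names below are illustrative assumptions.

```python
import numpy as np

# Minimal EM sketch for weak supervision via candidate label sets.
rng = np.random.default_rng(0)

# Two 1-D Gaussian classes; each sample's candidate set contains the true
# label and, 30% of the time, a distractor label as well.
true_means = np.array([-2.0, 2.0])
labels = rng.integers(0, 2, size=200)
x = rng.normal(true_means[labels], 1.0)
candidates = [{y} | ({1 - y} if rng.random() < 0.3 else set()) for y in labels]

means = np.array([-0.5, 0.5])  # crude initialization
for _ in range(50):
    # E-step: posterior over labels, restricted to each candidate set
    resp = np.zeros((len(x), 2))
    for i, (xi, cand) in enumerate(zip(x, candidates)):
        for c in cand:
            resp[i, c] = np.exp(-0.5 * (xi - means[c]) ** 2)
        resp[i] /= resp[i].sum()
    # M-step: re-estimate class means from the soft assignments
    means = resp.T @ x / resp.sum(axis=0)

print(np.round(means, 1))
```

Samples with a singleton candidate set behave like fully labeled data; the ambiguous ones are resolved softly, which is the mechanism that lets EM accommodate heterogeneous supervision sources.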
arXiv Detail & Related papers (2024-02-02T21:48:50Z) - Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach [31.82185019324094]
Mean Field Games (MFGs) can be extended to Graphon MFGs (GMFGs) to include network structures between agents.
We introduce the novel concept of Graphex MFGs, which builds on the graph theoretical concept of graphexes.
This hybrid graphex learning approach leverages that the system mainly consists of a highly connected core and a sparse periphery.
arXiv Detail & Related papers (2024-01-23T11:52:00Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph.
arXiv Detail & Related papers (2022-10-02T13:19:48Z) - Mean Field Games on Weighted and Directed Graphs via Colored Digraphons [26.405495663998828]
Graphon mean field games (GMFGs) provide a scalable and mathematically well-founded approach to learning problems.
Our paper introduces colored digraphon mean field games (CDMFGs) which allow for weighted and directed links between agents.
arXiv Detail & Related papers (2022-09-08T15:45:20Z) - Learning Graphon Mean Field Games and Approximate Nash Equilibria [33.77849245250632]
We propose a novel discrete-time formulation for graphon mean field games with weak interaction.
On the theoretical side, we give extensive and rigorous existence and approximation properties of the graphon mean field solution.
We successfully obtain plausible approximate Nash equilibria in otherwise infeasible large dense graph games with many agents.
arXiv Detail & Related papers (2021-11-29T16:16:11Z) - Simulating Continuum Mechanics with Multi-Scale Graph Neural Networks [0.17205106391379021]
We introduce MultiScaleGNN, a multi-scale graph neural network model for learning to infer unsteady continuum mechanics.
We show that the proposed model can generalise from uniform advection fields to high-gradient fields on complex domains at test time and infer long-term Navier-Stokes solutions within a range of Reynolds numbers.
arXiv Detail & Related papers (2021-06-09T08:37:38Z) - CogDL: A Comprehensive Library for Graph Deep Learning [55.694091294633054]
We present CogDL, a library for graph deep learning that allows researchers and practitioners to conduct experiments, compare methods, and build applications with ease and efficiency.
In CogDL, we propose a unified design for the training and evaluation of GNN models for various graph tasks, making it unique among existing graph learning libraries.
We develop efficient sparse operators for CogDL, making it one of the most efficient graph learning libraries available.
arXiv Detail & Related papers (2021-03-01T12:35:16Z) - Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z) - Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, some kinds of graph applications, such as graph compression and edge partition, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.