Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
- URL: http://arxiv.org/abs/2401.12686v2
- Date: Fri, 23 Feb 2024 10:04:14 GMT
- Title: Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
- Authors: Christian Fabian, Kai Cui, Heinz Koeppl
- Abstract summary: Mean Field Games (MFGs) can be extended to Graphon MFGs (GMFGs) to include network structures between agents.
We introduce the novel concept of Graphex MFGs which builds on the graph theoretical concept of graphexes.
This hybrid graphex learning approach leverages that the system mainly consists of a highly connected core and a sparse periphery.
- Score: 31.82185019324094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning the behavior of large agent populations is an important task for
numerous research areas. Although the field of multi-agent reinforcement
learning (MARL) has made significant progress towards solving these systems,
solutions for many agents often remain computationally infeasible and lack
theoretical guarantees. Mean Field Games (MFGs) address both of these issues
and can be extended to Graphon MFGs (GMFGs) to include network structures
between agents. Despite their merits, the real-world applicability of GMFGs is
limited by the fact that graphons only capture dense graphs. Since most
empirically observed networks show some degree of sparsity, such as power law
graphs, the GMFG framework is insufficient for capturing these network
topologies. Thus, we introduce the novel concept of Graphex MFGs (GXMFGs) which
builds on the graph theoretical concept of graphexes. Graphexes are the
limiting objects of sparse graph sequences that also have other desirable
features such as the small world property. Learning equilibria in these games
is challenging due to the rich and sparse structure of the underlying graphs.
To tackle these challenges, we design a new learning algorithm tailored to the
GXMFG setup. This hybrid graphex learning approach leverages that the system
mainly consists of a highly connected core and a sparse periphery. After
defining the system and providing a theoretical analysis, we state our learning
approach and demonstrate its learning capabilities on both synthetic graphs and
real-world networks. This comparison shows that our GXMFG learning algorithm
successfully extends MFGs to a highly relevant class of hard, realistic
learning problems that are not accurately addressed by current MARL and MFG
methods.
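To make the core-periphery idea above concrete, here is a minimal Python sketch, not the authors' implementation: it samples a sparse, power-law-like graph as a stand-in for a graphex sample and splits its nodes into a highly connected core and a sparse periphery by a degree threshold. The graph model, the threshold, and the split rule are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the core-periphery split
# that the hybrid graphex learning approach exploits: high-degree nodes form the
# highly connected core, the rest the sparse periphery.
import networkx as nx

def core_periphery_split(G: nx.Graph, degree_threshold: int):
    """Partition nodes into a highly connected core and a sparse periphery."""
    core = {v for v, d in G.degree() if d >= degree_threshold}
    periphery = set(G.nodes()) - core
    return core, periphery

if __name__ == "__main__":
    # Power-law-like sparse graph as a stand-in for a graphex sample (assumption).
    G = nx.barabasi_albert_graph(n=5000, m=2, seed=0)
    core, periphery = core_periphery_split(G, degree_threshold=20)
    print(f"core: {len(core)} nodes, periphery: {len(periphery)} nodes")
    # In the GXMFG setting, the dense core would be approximated with a
    # graphon-style mean field and the sparse periphery handled separately.
```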
Related papers
- Learning Mean Field Control on Sparse Graphs [28.313779052437134]
We propose a novel mean field control model inspired by local weak convergence to include sparse graphs such as power law networks with coefficients above two.
Besides a theoretical analysis, we design scalable learning algorithms which apply to the challenging class of graph sequences with finite first moment.
Our approach outperforms existing methods in many examples and on various networks, owing to a design that targets an important but so far hard-to-solve class of MARL problems.
arXiv Detail & Related papers (2025-01-28T17:03:30Z) - Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs)
This framework provides a standardized setting to evaluate GNNs across diverse datasets.
We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z) - Multi-view Fuzzy Graph Attention Networks for Enhanced Graph Learning [0.0]
Fuzzy Graph Attention Network (FGAT) has shown promise in tasks requiring robust graph-based learning.
We propose the Multi-view Fuzzy Graph Attention Network (MFGAT), a novel framework that constructs and aggregates multi-view information.
arXiv Detail & Related papers (2024-12-23T04:39:08Z) - MGNet: Learning Correspondences via Multiple Graphs [78.0117352211091]
Learning correspondences aims to find correct correspondences in an initial set with an uneven distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph, or stack local graphs into a global one, to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
arXiv Detail & Related papers (2024-01-10T07:58:44Z) - Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and 16.8% performance gain on ogbn-products and snap-patents, while we also scale LargeGT on ogbn-papers100M with a 5.9% performance improvement.
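As a rough, generic illustration of the sampling-plus-local-attention pattern mentioned above (and explicitly not the paper's LargeGT architecture), the following sketch samples a fixed-size neighborhood for a node and restricts single-head attention to that sample; the sample size, feature dimensions, and single head are assumptions.

```python
# Generic sketch of neighborhood sampling followed by local attention.
# NOT the LargeGT implementation; sizes and the single head are assumptions.
import numpy as np

def sample_neighborhood(adj_list, node, k, rng):
    """Uniformly sample up to k neighbors of `node` (plus the node itself)."""
    neigh = adj_list[node]
    if len(neigh) > k:
        neigh = rng.choice(neigh, size=k, replace=False)
    return np.concatenate(([node], neigh)).astype(int)

def local_attention(X, W_q, W_k, W_v, idx):
    """Single-head attention of idx[0] over the sampled node set idx."""
    q = X[idx[0]] @ W_q                      # query for the centre node
    K = X[idx] @ W_k                         # keys for the sampled set
    V = X[idx] @ W_v                         # values for the sampled set
    scores = K @ q / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over the sample only
    return weights @ V                       # updated node representation

rng = np.random.default_rng(0)
adj_list = {0: np.array([1, 2, 3]), 1: np.array([0]), 2: np.array([0]), 3: np.array([0])}
X = rng.normal(size=(4, 8))                  # toy node features
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
idx = sample_neighborhood(adj_list, node=0, k=2, rng=rng)
print(local_attention(X, W_q, W_k, W_v, idx).shape)  # (8,)
```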
arXiv Detail & Related papers (2023-12-18T11:19:23Z) - GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research tackles this by advancing graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM.
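A hedged sketch of this two-step recipe (fine-tune a language model parameter-efficiently on the downstream task, then reuse its last hidden states as node embeddings) might look as follows; the model name, LoRA configuration, label count, and mean pooling are illustrative assumptions rather than the paper's exact setup.

```python
# Hedged sketch of a SimTeG-style pipeline: (1) PEFT fine-tuning of an LM on
# the node-level task, (2) last hidden states as node embeddings for a GNN.
# Model name, LoRA settings and mean pooling are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "distilbert-base-uncased"        # assumption: any encoder LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=7)

# Step 1: wrap the LM with LoRA adapters; only the adapters would be trained
# on the downstream task (training loop omitted in this sketch).
peft_model = get_peft_model(base, LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16))

# Step 2: after fine-tuning, encode each node's text and mean-pool the last
# hidden states into fixed node embeddings for a downstream GNN.
@torch.no_grad()
def node_embeddings(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = peft_model(**batch, output_hidden_states=True)
    hidden = out.hidden_states[-1]                      # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)         # mean pooling over tokens

emb = node_embeddings(["paper about graph learning", "paper about optimization"])
print(emb.shape)  # (2, hidden_dim)
```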
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - Learning Sparse Graphon Mean Field Games [26.405495663998828]
Graphon mean field games (GMFGs) enable the scalable analysis of MARL problems that are otherwise intractable.
Our paper introduces a novel formulation of GMFGs, called LPGMFGs, which leverages the graph theoretical concept of $L^p$ graphons.
This especially includes power law networks which are empirically observed in various application areas and cannot be captured by standard graphons.
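For intuition, one standard example of a kernel that is not a bounded graphon but is $L^p$-integrable is sketched below; the specific kernel and parameter range are a textbook-style illustration and are not taken from the paper.

```latex
% Hedged illustration (not taken from the paper): a standard example of an
% unbounded kernel that fails to be a (bounded) graphon but is L^p-integrable:
\[
  W(x,y) = (x\,y)^{-\alpha}, \qquad x,y \in (0,1], \quad 0 < \alpha < 1 .
\]
% Since \int_0^1 x^{-\alpha p}\,dx < \infty exactly when \alpha p < 1, we have
% W \in L^p\big((0,1]^2\big) for every p < 1/\alpha. Graphs sampled from such
% kernels exhibit heavy-tailed, power-law-like degree distributions, which is
% the class of networks the LPGMFG formulation is designed to capture.
```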
arXiv Detail & Related papers (2022-09-08T15:35:42Z) - Federated Graph Machine Learning: A Survey of Concepts, Techniques, and Applications [26.13397777812025]
Federated Graph Machine Learning (FGML) is a promising solution to tackle this challenge.
We conduct a comprehensive review of the literature in FGML.
We summarize the real-world applications of FGML from different domains.
arXiv Detail & Related papers (2022-07-24T20:46:23Z) - GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present a masked graph autoencoder GraphMAE that mitigates issues for generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with our careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
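A minimal sketch of the masked-feature reconstruction idea behind a masked graph autoencoder is given below; the single propagation layer, the plain mean-squared reconstruction loss, and the masking rate are assumptions for illustration, not GraphMAE's actual design choices.

```python
# Sketch of masked-feature reconstruction for a graph autoencoder.
# Encoder/decoder and the MSE loss are illustrative assumptions.
import numpy as np

def mask_features(X, mask_rate, rng, mask_token=0.0):
    """Replace the features of a random subset of nodes with a mask token."""
    n = X.shape[0]
    n_masked = max(1, int(mask_rate * n))
    masked = np.zeros(n, dtype=bool)
    masked[rng.permutation(n)[:n_masked]] = True
    X_masked = X.copy()
    X_masked[masked] = mask_token
    return X_masked, masked

def gnn_layer(A_hat, X, W):
    """One graph-convolution-style layer: propagate, transform, ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.normal(size=(n, d))                        # toy node features
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
A_hat = A / A.sum(axis=1, keepdims=True)           # simple row normalisation

W_enc, W_dec = rng.normal(size=(d, d)), rng.normal(size=(d, d))
X_masked, masked = mask_features(X, mask_rate=0.5, rng=rng)
Z = gnn_layer(A_hat, X_masked, W_enc)              # encode the corrupted graph
X_rec = gnn_layer(A_hat, Z, W_dec)                 # decode back to feature space
loss = np.mean((X_rec[masked] - X[masked]) ** 2)   # reconstruct masked nodes only
print(f"reconstruction loss on masked nodes: {loss:.3f}")
```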
arXiv Detail & Related papers (2022-05-22T11:57:08Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
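To illustrate the anchor-graph idea described in the entry above, the following sketch builds an anchor graph from raw features, embeds nodes on both the anchor graph and a (here random) learned graph with a shared layer, and computes a node-wise contrastive loss; the kNN anchor construction, single layer, and NT-Xent-style loss are assumptions, not the paper's exact design.

```python
# Sketch of anchor-graph contrastive learning: maximize agreement between node
# embeddings from an anchor graph and from a learned graph. Illustrative only.
import numpy as np

def knn_graph(X, k):
    """Anchor graph: symmetric kNN adjacency built from the raw features."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    A = np.zeros_like(d)
    for i in range(len(X)):
        A[i, np.argsort(d[i])[:k]] = 1.0
    return np.maximum(A, A.T)

def embed(A, X, W):
    """One shared propagation layer producing L2-normalised embeddings."""
    H = np.maximum((A + np.eye(len(A))) @ X @ W, 0.0)
    return H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-8)

def contrastive_loss(Z1, Z2, tau=0.5):
    """NT-Xent-style loss: node i in view 1 should match node i in view 2."""
    logits = Z1 @ Z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))                     # raw node features
W = rng.normal(size=(5, 16))
A_anchor = knn_graph(X, k=2)                    # anchor graph from the data itself
A_learned = rng.random((8, 8))
A_learned = (A_learned + A_learned.T) / 2       # stand-in for the learned adjacency
loss = contrastive_loss(embed(A_anchor, X, W), embed(A_learned, X, W))
print(f"contrastive loss: {loss:.3f}")          # would be minimised w.r.t. A_learned
```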
This list is automatically generated from the titles and abstracts of the papers on this site.