Learning Mean Field Control on Sparse Graphs
- URL: http://arxiv.org/abs/2501.17079v1
- Date: Tue, 28 Jan 2025 17:03:30 GMT
- Title: Learning Mean Field Control on Sparse Graphs
- Authors: Christian Fabian, Kai Cui, Heinz Koeppl
- Abstract summary: We propose a novel mean field control model inspired by local weak convergence to include sparse graphs such as power law networks with coefficients above two.
Besides a theoretical analysis, we design scalable learning algorithms which apply to the challenging class of graph sequences with finite first moment.
Our approach outperforms existing methods in many examples and on various networks, owing to a design that targets an important but so far hard-to-solve class of MARL problems.
- Score: 28.313779052437134
- Abstract: Large agent networks are abundant in applications and nature and pose difficult challenges in the field of multi-agent reinforcement learning (MARL) due to their computational and theoretical complexity. While graphon mean field games and their extensions provide efficient learning algorithms for dense and moderately sparse agent networks, the case of realistic sparser graphs remains largely unsolved. Thus, we propose a novel mean field control model inspired by local weak convergence to include sparse graphs such as power law networks with coefficients above two. Besides a theoretical analysis, we design scalable learning algorithms which apply to the challenging class of graph sequences with finite first moment. We compare our model and algorithms for various examples on synthetic and real world networks with mean field algorithms based on Lp graphons and graphexes. As it turns out, our approach outperforms existing methods in many examples and on various networks due to the special design aiming at an important, but so far hard to solve class of MARL problems.
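To make the finite first moment condition concrete, the following is a minimal numpy sketch (illustrative only, not taken from the paper): for a discrete power law degree distribution with coefficient above two, the mean degree stays finite even though higher moments may diverge, which is exactly the sparse regime the proposed model targets.

```python
import numpy as np

# Illustrative sketch (not from the paper): degrees drawn from a discrete
# power law P(deg = k) proportional to k**(-gamma). For gamma > 2 the first
# moment E[deg] is finite, the regime covered by the sparse MFC model.
rng = np.random.default_rng(0)
gamma = 2.5                               # assumed power law coefficient above two
k = np.arange(1, 50_001, dtype=float)     # truncated support for the simulation
p = k ** (-gamma)
p /= p.sum()

degrees = rng.choice(k, size=200_000, p=p)
print("empirical mean degree :", degrees.mean())  # finite, close to the theoretical value
print("theoretical mean      :", (k * p).sum())   # approx. zeta(gamma - 1) / zeta(gamma)
```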
Related papers
- Unified Graph Networks (UGN): A Deep Neural Framework for Solving Graph Problems [0.5699788926464752]
We propose a novel framework named Unified Graph Network (UGN) to solve graph problems.
UGN is based on graph convolutional neural networks (GCN) and 2-dimensional convolutional neural networks (Conv2D).
arXiv Detail & Related papers (2025-02-11T12:03:18Z) - Scalable and Accurate Graph Reasoning with LLM-based Multi-Agents [27.4884498301785]
We introduce GraphAgent-Reasoner, a fine-tuning-free framework for explicit and precise graph reasoning.
Inspired by distributed graph computation theory, our framework decomposes graph problems into smaller, node-centric tasks that are distributed among multiple agents.
Our framework demonstrates the capability to handle real-world graph reasoning applications such as webpage importance analysis.
arXiv Detail & Related papers (2024-10-07T15:34:14Z) - Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a novel direction that is orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z) - Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach [31.82185019324094]
Mean Field Games (MFGs) can be extended to Graphon MFGs (GMFGs) to include network structures between agents.
We introduce the novel concept of Graphex MFGs which builds on the graph theoretical concept of graphexes.
This hybrid graphex learning approach leverages that the system mainly consists of a highly connected core and a sparse periphery.
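As a rough, self-made illustration of that core-periphery intuition (assumed graph model and threshold, not the authors' hybrid graphex algorithm), one can split a sparse network by a degree cutoff and treat the two parts with different approximations:

```python
import networkx as nx

# Rough illustration of the core/periphery split (assumed parameters, not the
# authors' hybrid graphex learning algorithm): high-degree nodes form the
# densely connected core, the remaining low-degree nodes the sparse periphery.
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=0)   # example sparse network
degree_cutoff = 20                                    # assumed cutoff
core = {v for v, d in G.degree() if d >= degree_cutoff}
periphery = set(G) - core
print(f"{len(core)} core nodes, {len(periphery)} periphery nodes")
```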
arXiv Detail & Related papers (2024-01-23T11:52:00Z) - Learning Sparse Graphon Mean Field Games [26.405495663998828]
Graphon mean field games (GMFGs) enable the scalable analysis of MARL problems that are otherwise intractable.
Our paper introduces a novel formulation of GMFGs, called LPGMFGs, which leverages the graph theoretical concept of $L^p$ graphons.
This especially includes power law networks which are empirically observed in various application areas and cannot be captured by standard graphons.
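For context (a standard example from the $L^p$ graphon literature, not quoted from the summary above), the prototypical unbounded graphon behind such power law networks is

```latex
% Canonical unbounded L^p graphon generating power law degree behavior.
W(x, y) = (x y)^{-\alpha}, \qquad x, y \in (0, 1], \quad 0 < \alpha < 1,
\qquad W \in L^p\!\left((0,1]^2\right) \iff p < \tfrac{1}{\alpha}.
```

Heuristically, a node at position $x$ has expected degree proportional to $x^{-\alpha}$, which produces power law degree tails with exponent $1 + 1/\alpha$, behavior that bounded standard graphons cannot capture.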
arXiv Detail & Related papers (2022-09-08T15:35:42Z) - Automatic Relation-aware Graph Network Proliferation [182.30735195376792]
We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs.
The node and relation learning operations in its search space can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph.
Experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs.
arXiv Detail & Related papers (2022-05-31T10:38:04Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
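To make the unrolling idea tangible, here is a minimal numpy sketch of truncated proximal gradient (ISTA-style) iterations for a toy second-order convolutional mixture model; the model, coefficients, and thresholds are assumptions for illustration, not the authors' GDN, which learns these quantities end to end:

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_deconvolution(T, h0=1.0, h1=0.5, lam=0.05, step=0.01, n_layers=50):
    """Toy unrolled proximal gradient sketch (not the authors' GDN): estimate a
    sparse symmetric latent graph S from an observation T ~ h0*S + h1*S@S by
    alternating gradient steps on the squared error with soft thresholding."""
    S = np.zeros_like(T)
    for _ in range(n_layers):
        R = h0 * S + h1 * S @ S - T                    # residual of the mixture model
        grad = 2.0 * (h0 * R + h1 * (R @ S + S @ R))   # gradient for a symmetric estimate
        S = soft_threshold(S - step * grad, lam)
        S = 0.5 * (S + S.T)                            # keep the estimate symmetric
        np.fill_diagonal(S, 0.0)                       # no self loops
    return S

# Tiny synthetic usage example (illustrative only, no accuracy claims).
rng = np.random.default_rng(1)
S_true = np.triu((rng.random((20, 20)) < 0.2).astype(float), 1)
S_true = S_true + S_true.T
T_obs = 1.0 * S_true + 0.5 * S_true @ S_true
S_hat = unrolled_deconvolution(T_obs)
print("fraction of nonzero entries in the estimate:", np.mean(np.abs(S_hat) > 1e-3))
```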
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z) - SIGN: Scalable Inception Graph Neural Networks [4.5158585619109495]
We propose a new, efficient and scalable graph deep learning architecture that sidesteps the need for graph sampling.
Our architecture allows using different local graph operators to best suit the task at hand.
We obtain state-of-the-art results on ogbn-papers100M, the largest public graph dataset, with over 110 million nodes and 1.5 billion edges.
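A minimal sketch of what such sampling-free preprocessing can look like (an interpretation of the abstract with assumed operator choices, not the authors' implementation): precompute diffused feature matrices once, then train an ordinary MLP on the concatenation with plain minibatches.

```python
import numpy as np
import scipy.sparse as sp

def sign_style_features(adj, X, num_hops=3):
    """Precompute powers of the symmetrically normalized adjacency applied to
    the node features and concatenate them (one possible choice of local graph
    operators; assumed here, not the authors' exact setup). The result can be
    fed to a plain MLP without any graph sampling at training time."""
    adj = sp.csr_matrix(adj, dtype=float)
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1.0))
    A_norm = sp.diags(d_inv_sqrt) @ adj @ sp.diags(d_inv_sqrt)

    feats = [X]
    for _ in range(num_hops):
        feats.append(A_norm @ feats[-1])     # r-hop diffused node features
    return np.concatenate(feats, axis=1)     # shape: (n_nodes, (num_hops + 1) * d)

# Tiny usage example on a random undirected graph (illustrative only).
rng = np.random.default_rng(0)
A = (rng.random((100, 100)) < 0.05).astype(float)
A = np.maximum(A, A.T)
X = rng.standard_normal((100, 8))
print(sign_style_features(A, X, num_hops=3).shape)   # (100, 32)
```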
arXiv Detail & Related papers (2020-04-23T14:46:10Z) - Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z) - Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partition, it is very hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)