Collaborative Information Dissemination with Graph-based Multi-Agent
Reinforcement Learning
- URL: http://arxiv.org/abs/2308.16198v3
- Date: Wed, 21 Feb 2024 07:11:45 GMT
- Authors: Raffaele Galliera, Kristen Brent Venable, Matteo Bassani, Niranjan
Suri
- Abstract summary: This paper introduces a Multi-Agent Reinforcement Learning (MARL) approach for efficient information dissemination.
We propose a Partially Observable Stochastic Game (POSG) formulation for information dissemination, empowering each agent to decide on message forwarding independently.
Our experimental results show that our trained policies outperform existing methods.
- Score: 2.9904113489777826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficient information dissemination is crucial for supporting critical
operations across domains like disaster response, autonomous vehicles, and
sensor networks. This paper introduces a Multi-Agent Reinforcement Learning
(MARL) approach as a significant step forward in achieving more decentralized,
efficient, and collaborative information dissemination. We propose a Partially
Observable Stochastic Game (POSG) formulation for information dissemination
empowering each agent to decide on message forwarding independently, based on
the observation of their one-hop neighborhood. This constitutes a significant
paradigm shift from heuristics currently employed in real-world broadcast
protocols. Our novel approach harnesses Graph Convolutional Reinforcement
Learning and Graph Attention Networks (GATs) with dynamic attention to capture
essential network features. We propose two approaches, L-DyAN and HL-DyAN,
which differ in terms of the information exchanged among agents. Our
experimental results show that our trained policies outperform existing
methods, including the state-of-the-art heuristic, in terms of network coverage
as well as communication overhead on dynamic networks of varying density and
behavior.
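The per-agent decision described above (attend over the observed one-hop neighborhood with dynamic, GATv2-style attention, then independently decide whether to relay the message) can be sketched in plain Python. Everything below is an illustrative assumption rather than the paper's actual L-DyAN/HL-DyAN architecture: the weight shapes, the single attention head, and the logistic forward/hold head are all placeholders.

```python
import math
import random

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def forward_probability(h_self, h_neighbors, W, a, w_out):
    """Score each neighbor j with e_j = a . LeakyReLU(W [h_i || h_j]).
    Applying the nonlinearity *before* the attention vector `a` is the
    GATv2 "dynamic attention" trick; the attention-weighted aggregate is
    then fed to an (assumed) logistic head for a forward/hold decision."""
    scores, transformed = [], []
    for h_j in h_neighbors:
        z = matvec(W, h_self + h_j)          # W [h_i || h_j] (list concat)
        scores.append(dot(a, [leaky_relu(x) for x in z]))
        transformed.append(z)
    alpha = softmax(scores)                  # attention over the neighborhood
    agg = [sum(al * t[i] for al, t in zip(alpha, transformed))
           for i in range(len(w_out))]
    return 1.0 / (1.0 + math.exp(-dot(w_out, agg))), alpha

# Toy usage: 3 neighbors with 4-dimensional observed features
random.seed(0)
d, d_out = 4, 8
rand_vec = lambda n: [random.uniform(-0.5, 0.5) for _ in range(n)]
h_self = rand_vec(d)
h_nbrs = [rand_vec(d) for _ in range(3)]
W = [rand_vec(2 * d) for _ in range(d_out)]
a, w_out = rand_vec(d_out), rand_vec(d_out)
p_forward, alpha = forward_probability(h_self, h_nbrs, W, a, w_out)
```

In a trained policy the output probability would be thresholded or sampled to decide whether the agent rebroadcasts; here the weights are random, so only the mechanics are meaningful.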
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- Distributed Autonomous Swarm Formation for Dynamic Network Bridging [40.27919181139919]
We formulate the problem of dynamic network bridging as a novel Decentralized Partially Observable Markov Decision Process (Dec-POMDP).
We propose a Multi-Agent Reinforcement Learning (MARL) approach for the problem based on Graph Convolutional Reinforcement Learning (DGN).
The proposed method is evaluated in a simulated environment and compared to a centralized baseline, showing promising results.
arXiv Detail & Related papers (2024-04-02T01:45:03Z)
- Learning Decentralized Traffic Signal Controllers with Multi-Agent Graph Reinforcement Learning [42.175067773481416]
We design a new decentralized control architecture with improved environmental observability to capture the spatial-temporal correlation.
Specifically, we first develop a topology-aware information aggregation strategy to extract correlation-related information from unstructured data gathered in the road network.
A diffusion convolution module is developed, forming a new MARL algorithm that endows agents with graph-learning capabilities.
arXiv Detail & Related papers (2023-11-07T06:43:15Z)
- Decentralized Learning over Wireless Networks: The Effect of Broadcast with Random Access [56.91063444859008]
We investigate the impact of broadcast transmission and probabilistic random access policy on the convergence performance of D-SGD.
Our results demonstrate that optimizing the access probability to maximize the expected number of successful links is a highly effective strategy for accelerating the system convergence.
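As a toy illustration of this access-probability trade-off, consider a standard collision-channel model (an assumption of this sketch, not necessarily the exact model of the cited paper): each of n agents transmits in a slot with probability p, and a transmission succeeds only if the other n-1 agents stay silent. The expected number of successful links, n·p·(1-p)^(n-1), is maximized at p = 1/n.

```python
def expected_successful_links(n, p):
    # Collision channel: each of n agents transmits with probability p;
    # a given agent's transmission succeeds only if nobody else transmits.
    return n * p * (1 - p) ** (n - 1)

# A grid search recovers the analytical maximizer p* = 1/n
n = 10
best_p = max((i / 1000 for i in range(1, 1000)),
             key=lambda p: expected_successful_links(n, p))
# best_p == 0.1, i.e. 1/n for n = 10
```

Pushing p above 1/n creates more collisions than it creates transmissions, which is why tuning the access probability matters for convergence speed.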
arXiv Detail & Related papers (2023-05-12T10:32:26Z)
- Attention Based Feature Fusion For Multi-Agent Collaborative Perception [4.120288148198388]
We propose an intermediate collaborative perception solution in the form of a graph attention network (GAT).
The proposed approach develops an attention-based aggregation strategy to fuse intermediate representations exchanged among multiple connected agents.
This approach adaptively highlights important regions in the intermediate feature maps at both the channel and spatial levels, resulting in improved object detection precision.
arXiv Detail & Related papers (2023-05-03T12:06:11Z)
- Soft Hierarchical Graph Recurrent Networks for Many-Agent Partially Observable Environments [9.067091068256747]
We propose a novel network structure called hierarchical graph recurrent network (HGRN) for multi-agent cooperation under partial observability.
Building on these techniques, we propose a value-based MADRL algorithm called Soft-HGRN and its actor-critic variant named SAC-HGRN.
arXiv Detail & Related papers (2021-09-05T09:51:25Z)
- Learning Connectivity for Data Distribution in Robot Teams [96.39864514115136]
We propose a task-agnostic, decentralized, low-latency method for data distribution in ad-hoc networks using Graph Neural Networks (GNNs).
Our approach enables multi-agent algorithms based on global state information to function by ensuring it is available at each robot.
We train the distributed GNN communication policies via reinforcement learning using the average Age of Information as the reward function and show that it improves training stability compared to task-specific reward functions.
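Age of Information (AoI) measures how stale the most recently delivered update is at each tick, which makes its average a natural task-agnostic reward. A minimal discrete-time sketch follows; the reset-on-delivery convention and the "stale since t = 0" initial condition are illustrative assumptions, not the cited paper's exact formulation.

```python
def average_aoi(T, update_times):
    """Age at tick t is t minus the time of the most recent delivered
    update (information is treated as stale since t = 0 before the first
    delivery). An AoI reward is simply the negative of this average."""
    update_times = sorted(set(update_times))
    last, total, idx = 0, 0, 0
    for t in range(1, T + 1):
        while idx < len(update_times) and update_times[idx] <= t:
            last = update_times[idx]
            idx += 1
        total += t - last
    return total / T

avg = average_aoi(5, [2, 4])  # ages are 1, 0, 1, 0, 1 -> average 0.6
```

Minimizing this average (maximizing its negative as a reward) pushes policies toward frequent, timely delivery rather than any task-specific objective.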
arXiv Detail & Related papers (2021-03-08T21:48:55Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, Diffusion Multi-Agent MAML, referred to as Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
- Learning Multi-Agent Coordination through Connectivity-driven Communication [7.462336024223669]
In artificial multi-agent systems, the ability to learn collaborative policies is predicated upon the agents' communication skills.
We present a deep reinforcement learning approach, Connectivity Driven Communication (CDC)
CDC is able to learn effective collaborative policies and can outperform competing learning algorithms on cooperative navigation tasks.
arXiv Detail & Related papers (2020-02-12T20:58:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.