Scaling Up Multiagent Reinforcement Learning for Robotic Systems: Learn an Adaptive Sparse Communication Graph
- URL: http://arxiv.org/abs/2003.01040v2
- Date: Tue, 3 Mar 2020 21:53:06 GMT
- Title: Scaling Up Multiagent Reinforcement Learning for Robotic Systems: Learn an Adaptive Sparse Communication Graph
- Authors: Chuangchuang Sun, Macheng Shen, and Jonathan P. How
- Abstract summary: The complexity of multiagent reinforcement learning (MARL) increases exponentially with the number of agents.
One critical feature of MARL that is often neglected is that the interactions between agents are quite sparse.
We propose an adaptive sparse attention mechanism by generalizing a sparsity-inducing activation function.
We show that our algorithm can learn an interpretable sparse structure and outperforms previous works by a significant margin on applications involving a large-scale multiagent system.
- Score: 39.48317026356428
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The complexity of multiagent reinforcement learning (MARL) increases exponentially with the number of agents. This scalability issue prevents MARL from being applied to large-scale multiagent systems. However, one critical feature of MARL that is often neglected is that
the interactions between agents are quite sparse. Without exploiting this
sparsity structure, existing works aggregate information from all of the agents
and thus have a high sample complexity. To address this issue, we propose an
adaptive sparse attention mechanism by generalizing a sparsity-inducing
activation function. A sparse communication graph is then learned by graph neural networks built on this attention mechanism. Through this sparsity structure, the agents communicate both effectively and efficiently by attending only to the agents that matter most, so the scale of the MARL problem is reduced with little loss of optimality. Comparative results show that our algorithm can learn an
interpretable sparse structure and outperforms previous works by a significant
margin on applications involving a large-scale multiagent system.
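For concreteness, the canonical sparsity-inducing activation of this kind is sparsemax (Martins & Astudillo, 2016), which attention can be built on so that low-scoring agents receive exactly zero weight. The NumPy sketch below is illustrative only; the function names and the single-query, single-head setup are assumptions for exposition, not the authors' implementation:

    import numpy as np

    def sparsemax(z):
        # Euclidean projection of the logits z onto the probability simplex
        # (Martins & Astudillo, 2016). Unlike softmax, it can return exact
        # zeros for low-scoring entries.
        z = np.asarray(z, dtype=float)
        z_sorted = np.sort(z)[::-1]                # logits, descending
        k = np.arange(1, z.size + 1)
        cumsum = np.cumsum(z_sorted)
        support = 1 + k * z_sorted > cumsum        # entries kept in the support
        k_star = k[support][-1]                    # support size
        tau = (cumsum[k_star - 1] - 1) / k_star    # threshold
        return np.maximum(z - tau, 0.0)            # sparse, sums to 1

    def sparse_attention(query, keys, values):
        # Scaled dot-product attention with sparsemax in place of softmax:
        # agents whose scores fall below the threshold get zero attention
        # weight, which induces a sparse communication graph.
        scores = keys @ query / np.sqrt(query.size)
        weights = sparsemax(scores)
        return weights @ values, weights

    # Toy usage: one agent attends over five neighbors (hypothetical sizes).
    rng = np.random.default_rng(0)
    q, K, V = rng.normal(size=8), rng.normal(size=(5, 8)), rng.normal(size=(5, 4))
    out, w = sparse_attention(q, K, V)
    print(np.round(w, 3))  # typically several weights are exactly 0.0

Because sparsemax is piecewise linear and differentiable almost everywhere, such an attention layer can be trained end to end inside a graph neural network, consistent with how the abstract describes learning the communication graph.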
Related papers
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- MASP: Scalable GNN-based Planning for Multi-Agent Navigation [17.788592987873905]
We propose a goal-conditioned hierarchical planner for navigation tasks with a substantial number of agents.
We also leverage graph neural networks (GNNs) to model the interaction between agents and goals, improving goal achievement.
The results demonstrate that MASP outperforms classical planning-based competitors and RL baselines.
arXiv Detail & Related papers (2023-12-05T06:05:04Z)
- Controlling Large Language Model-based Agents for Large-Scale Decision-Making: An Actor-Critic Approach [28.477463632107558]
We develop a modular framework called LLaMAC to address hallucination in Large Language Models and coordination in Multi-Agent Systems.
LLaMAC implements a value distribution encoding similar to that found in the human brain, utilizing internal and external feedback mechanisms to facilitate collaboration and iterative reasoning among its modules.
arXiv Detail & Related papers (2023-11-23T10:14:58Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
Diffusion models (DMs) have recently achieved huge success in various scenarios, including offline reinforcement learning.
We propose MADiff, a novel generative multi-agent learning framework to tackle this problem.
Our experiments show the superior performance of MADiff compared to baseline algorithms in a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z)
- Partially Observable Mean Field Multi-Agent Reinforcement Learning Based on Graph-Attention [12.588866091856309]
This paper considers partially observable multi-agent reinforcement learning (MARL), where each agent can only observe other agents within a fixed range.
We propose a novel multi-agent reinforcement learning algorithm, Partially Observable Mean Field Multi-Agent Reinforcement Learning based on Graph-Attention (GAMFQ).
Experiments show that GAMFQ outperforms baselines including the state-of-the-art partially observable mean-field reinforcement learning algorithms.
arXiv Detail & Related papers (2023-04-25T08:38:32Z)
- Multi-Agent Reinforcement Learning for Microprocessor Design Space Exploration [71.95914457415624]
Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high-performance and energy-efficiency.
We propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem.
Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines.
arXiv Detail & Related papers (2022-11-29T17:10:24Z)
- PooL: Pheromone-inspired Communication Framework for Large Scale Multi-Agent Reinforcement Learning [0.0]
PooL is an indirect communication framework applied to large-scale multi-agent reinforcement learning.
PooL uses the release and utilization mechanism of pheromones to control large-scale agent coordination.
PooL can capture effective information and achieve higher rewards than other state-of-the-art methods with lower communication costs.
arXiv Detail & Related papers (2022-02-20T03:09:53Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible fully decentralized actor-critic MARL framework, which can handle large-scale general cooperative multi-agent setting.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
- A Visual Communication Map for Multi-Agent Deep Reinforcement Learning [7.003240657279981]
Multi-agent learning poses significant challenges in the effort to allocate a concealed communication medium.
Recent studies typically combine a specialized neural network with reinforcement learning to enable communication between agents.
This paper proposes a more scalable approach that not only handles a large number of agents but also enables collaboration between dissimilar functional agents.
arXiv Detail & Related papers (2020-02-27T02:38:21Z)