A Visual Communication Map for Multi-Agent Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2002.11882v2
- Date: Tue, 23 Feb 2021 12:12:47 GMT
- Title: A Visual Communication Map for Multi-Agent Deep Reinforcement Learning
- Authors: Ngoc Duy Nguyen, Thanh Thi Nguyen, Doug Creighton, Saeid Nahavandi
- Abstract summary: Multi-agent learning poses significant challenges in allocating a concealed communication medium.
Recent studies typically combine a specialized neural network with reinforcement learning to enable communication between agents.
This paper proposes a more scalable approach that not only handles a large number of agents but also enables collaboration between agents with dissimilar functions.
- Score: 7.003240657279981
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep reinforcement learning has been applied successfully to solve various
real-world problems and the number of its applications in the multi-agent
settings has been increasing. Multi-agent learning poses significant
challenges in allocating a concealed communication medium from which agents
receive the knowledge needed to determine their subsequent actions in a
distributed manner. The goal is to leverage the cooperation of multiple agents
to achieve a designated objective efficiently. Recent studies
typically combine a specialized neural network with reinforcement learning to
enable communication between agents. This approach, however, limits the number
of agents or necessitates the homogeneity of the system. In this paper, we
propose a more scalable approach that not only handles a large number of
agents but also enables collaboration between agents with dissimilar
functions, and that can be combined with any deep reinforcement learning
method. Specifically,
we create a global communication map to represent the status of each agent in
the system visually. The visual map and the environmental state are fed to a
shared-parameter network to train multiple agents concurrently. Finally, we
select the Asynchronous Advantage Actor-Critic (A3C) algorithm to demonstrate
our proposed scheme, namely Visual communication map for Multi-agent A3C
(VMA3C). Simulation results show that the visual communication map improves
A3C's learning speed, reward achievement, and robustness in multi-agent
problems.
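As a rough illustration of the scheme described above (this is a hedged sketch, not the authors' code): a global "visual communication map" is a 2D grid that marks each agent's status, and it is stacked onto the environment observation as an extra channel before being fed to the shared-parameter network. The grid size, the scalar status encoding, and the channel layout here are all assumptions made for illustration.

```python
# Hedged sketch of a visual communication map as an extra input channel.
# Grid size, status encoding, and observation shape are assumptions.
import numpy as np

def build_visual_map(agent_states, height, width):
    """Render every agent's position and a scalar status onto one 2D map."""
    vmap = np.zeros((height, width), dtype=np.float32)
    for row, col, status in agent_states:  # status in [0, 1], e.g. health
        vmap[row, col] = status
    return vmap

def make_network_input(observation, vmap):
    """Stack the shared visual map onto the per-agent observation channels."""
    return np.concatenate([observation, vmap[None, ...]], axis=0)

# Toy usage: 3 agents on a 4x4 grid, a 2-channel environment observation.
agents = [(0, 1, 1.0), (2, 2, 0.5), (3, 0, 0.8)]
vmap = build_visual_map(agents, 4, 4)
obs = np.zeros((2, 4, 4), dtype=np.float32)
x = make_network_input(obs, vmap)
print(x.shape)  # (3, 4, 4): two observation channels plus the visual map
```

Because the map is shared and the network parameters are shared across agents, each agent sees every other agent's status through this channel, which is what allows the approach to scale in the number of agents.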
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Multi-agent Deep Covering Skill Discovery [50.812414209206054]
We propose Multi-agent Deep Covering Option Discovery, which constructs the multi-agent options through minimizing the expected cover time of the multiple agents' joint state space.
Also, we propose a novel framework to adopt the multi-agent options in the MARL process.
We show that the proposed algorithm can effectively capture the agent interactions with the attention mechanism, successfully identify multi-agent options, and significantly outperforms prior works using single-agent options or no options.
arXiv Detail & Related papers (2022-10-07T00:40:59Z)
- Meta-CPR: Generalize to Unseen Large Number of Agents with Communication Pattern Recognition Module [29.75594940509839]
We formulate a multi-agent environment with a different number of agents as a multi-tasking problem.
We propose a meta reinforcement learning (meta-RL) framework to tackle this problem.
The proposed framework employs a meta-learned Communication Pattern Recognition (CPR) module to identify communication behavior.
arXiv Detail & Related papers (2021-12-14T08:23:04Z)
- Multi-Agent Embodied Visual Semantic Navigation with Scene Prior Knowledge [42.37872230561632]
In visual semantic navigation, the robot navigates to a target object with egocentric visual observations and the class label of the target is given.
Most existing models are effective only for single-agent navigation, and a single agent has low efficiency and poor fault tolerance when completing more complicated tasks.
We propose the multi-agent visual semantic navigation, in which multiple agents collaborate with others to find multiple target objects.
arXiv Detail & Related papers (2021-09-20T13:31:03Z)
- Collaborative Visual Navigation [69.20264563368762]
We propose a large-scale 3D dataset, CollaVN, for multi-agent visual navigation (MAVN).
Diverse MAVN variants are explored to make our problem more general.
A memory-augmented communication framework is proposed. Each agent is equipped with a private, external memory to persistently store communication information.
arXiv Detail & Related papers (2021-07-02T15:48:16Z)
- Learning to Coordinate via Multiple Graph Neural Networks [16.226702761758595]
MGAN is a new algorithm that combines graph convolutional networks and value-decomposition methods.
We demonstrate the graph network's strong representation-learning ability by visualizing its output.
arXiv Detail & Related papers (2021-04-08T04:33:00Z)
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
- Counterfactual Multi-Agent Reinforcement Learning with Graph Convolution Communication [5.5438676149999075]
We consider a fully cooperative multi-agent system where agents cooperate to maximize a system's utility.
We posit that multi-agent systems must have the ability to communicate and understand the interplay between agents.
We develop an architecture that allows for communication among agents and tailors the system's reward for each individual agent.
arXiv Detail & Related papers (2020-04-01T14:36:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.