Curriculum-Driven Multi-Agent Learning and the Role of Implicit Communication in Teamwork
- URL: http://arxiv.org/abs/2106.11156v1
- Date: Mon, 21 Jun 2021 14:54:07 GMT
- Title: Curriculum-Driven Multi-Agent Learning and the Role of Implicit Communication in Teamwork
- Authors: Niko A. Grupen, Daniel D. Lee, Bart Selman
- Abstract summary: We propose a curriculum-driven learning strategy for solving difficult multi-agent coordination tasks.
We argue that emergent implicit communication plays a large role in enabling superior levels of coordination.
- Score: 24.92668968807012
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a curriculum-driven learning strategy for solving difficult
multi-agent coordination tasks. Our method is inspired by a study of animal
communication, which shows that two straightforward design features (mutual
reward and decentralization) support a vast spectrum of communication protocols
in nature. We highlight the importance of similarly interpreting emergent
communication as a spectrum. We introduce a toroidal, continuous-space
pursuit-evasion environment and show that naive decentralized learning does not
perform well. We then propose a novel curriculum-driven strategy for
multi-agent learning. Experiments with pursuit-evasion show that our approach
enables decentralized pursuers to learn to coordinate and capture a superior
evader, significantly outperforming sophisticated analytical policies. We argue
through additional quantitative analysis -- including influence-based measures
such as Instantaneous Coordination -- that emergent implicit communication
plays a large role in enabling superior levels of coordination.
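The influence-based measure named above, Instantaneous Coordination (IC), can be made concrete with a small sketch. Following its standard definition (Jaques et al., 2019), IC is the mutual information I(a_t^i ; a_{t+1}^j) between one agent's action and another agent's next action, estimated over trajectories. The sketch below estimates it from empirical counts over discrete action sequences; all variable names are illustrative and this is not the paper's actual implementation.

```python
# Sketch of Instantaneous Coordination (IC): the mutual information between
# agent i's action at time t and agent j's action at time t+1, estimated
# from empirical counts over aligned discrete action trajectories.
import numpy as np

def instantaneous_coordination(actions_i, actions_j, n_actions):
    """Estimate I(a_t^i ; a_{t+1}^j) in nats from two aligned action sequences."""
    a = np.asarray(actions_i[:-1])  # agent i's action at time t
    b = np.asarray(actions_j[1:])   # agent j's action at time t+1
    joint = np.zeros((n_actions, n_actions))
    for x, y in zip(a, b):
        joint[x, y] += 1.0
    joint /= joint.sum()                        # empirical joint p(x, y)
    px = joint.sum(axis=1, keepdims=True)       # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)       # marginal p(y)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())

# An agent that copies its partner's previous action yields high IC
# (close to log 4 for 4 uniform actions); independent agents yield IC near 0.
rng = np.random.default_rng(0)
coupled = rng.integers(0, 4, size=1000)
print(instantaneous_coordination(coupled, np.roll(coupled, 1), 4))
```

High IC indicates that one agent's behavior is predictive of a teammate's subsequent behavior, which is how such measures are used to argue for implicit communication.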
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Generalising Multi-Agent Cooperation through Task-Agnostic Communication [7.380444448047908]
Existing communication methods for multi-agent reinforcement learning (MARL) in cooperative multi-robot problems are almost exclusively task-specific, training new communication strategies for each unique task.
We address this inefficiency by introducing a communication strategy applicable to any task within a given environment.
Our objective is to learn a fixed-size latent Markov state from a variable number of agent observations.
Our method enables seamless adaptation to novel tasks without fine-tuning the communication strategy, gracefully supports scaling to more agents than present during training, and detects out-of-distribution events in an environment.
arXiv Detail & Related papers (2024-03-11T14:20:13Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Learning Multi-Agent Communication with Contrastive Learning [3.816854668079928]
We introduce an alternative perspective where communicative messages are considered as different incomplete views of the environment state.
By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning.
In communication-essential environments, our method outperforms previous work in both performance and learning speed.
arXiv Detail & Related papers (2023-07-03T23:51:05Z)
- On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning [0.0]
Social learning uses cues from experts to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks.
This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility.
arXiv Detail & Related papers (2023-02-28T03:23:27Z)
- Learning to Ground Decentralized Multi-Agent Communication with Contrastive Learning [1.116812194101501]
We introduce an alternative perspective to the communicative messages sent between agents, considering them as different incomplete views of the environment state.
We propose a simple approach to induce the emergence of a common language by maximizing the mutual information between messages of a given trajectory in a self-supervised manner.
arXiv Detail & Related papers (2022-03-07T12:41:32Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Language-guided Navigation via Cross-Modal Grounding and Alternate Adversarial Learning [66.9937776799536]
The emerging vision-and-language navigation (VLN) problem aims at learning to navigate an agent to the target location in unseen photo-realistic environments.
The main challenges of VLN arise from two aspects: first, the agent needs to attend to the meaningful paragraphs of the language instruction corresponding to the dynamically-varying visual environments.
We propose a cross-modal grounding module to equip the agent with a better ability to track the correspondence between the textual and visual modalities.
arXiv Detail & Related papers (2020-11-22T09:13:46Z)
- Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations [59.608216900601384]
We study agents that learn to communicate via actuating their joints in a 3D environment.
We show that, under realistic assumptions (a non-uniform distribution of intents and a common-knowledge energy cost), these agents can find protocols that generalize to novel partners.
arXiv Detail & Related papers (2020-10-29T19:23:10Z) - The Emergence of Adversarial Communication in Multi-Agent Reinforcement
Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
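Several entries above treat communicative messages as incomplete views of the environment state and learn to communicate by maximizing mutual information between messages from the same trajectory. A minimal InfoNCE-style sketch of that family of objectives follows; the encoder setup and all names are illustrative assumptions, not the cited papers' actual code.

```python
# Sketch of a contrastive (InfoNCE-style) objective over messages, treating
# paired messages as two views of the same underlying state. Matching pairs
# sit on the diagonal of the similarity matrix; all other pairs are negatives.
import numpy as np

def info_nce(msgs_a, msgs_b, temperature=0.1):
    """msgs_a[k] and msgs_b[k] are paired views; returns the mean InfoNCE loss."""
    a = msgs_a / np.linalg.norm(msgs_a, axis=1, keepdims=True)
    b = msgs_b / np.linalg.norm(msgs_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                       # cosine-similarity logits
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diagonal(log_probs).mean())         # positives on the diagonal

# Paired views of the same state should incur a much lower loss than random pairs.
rng = np.random.default_rng(0)
state = rng.normal(size=(32, 8))
noisy_view = state + 0.01 * rng.normal(size=(32, 8))
print(info_nce(state, noisy_view) < info_nce(state, rng.normal(size=(32, 8))))  # True
```

Minimizing this loss pushes messages from the same state together and messages from different states apart, which is one way the cited works induce a shared protocol without supervision.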
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.