Networked Multi-Agent Reinforcement Learning with Emergent Communication
- URL: http://arxiv.org/abs/2004.02780v2
- Date: Thu, 9 Apr 2020 04:14:14 GMT
- Title: Networked Multi-Agent Reinforcement Learning with Emergent Communication
- Authors: Shubham Gupta, Rishi Hazra, Ambedkar Dukkipati
- Abstract summary: Multi-Agent Reinforcement Learning (MARL) methods find optimal policies for agents that operate in the presence of other learning agents.
One way to coordinate is by learning to communicate with each other.
Can the agents develop a language while learning to perform a common task?
- Score: 18.47483427884452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-Agent Reinforcement Learning (MARL) methods find optimal policies for
agents that operate in the presence of other learning agents. Central to
achieving this is how the agents coordinate. One way to coordinate is by
learning to communicate with each other. Can the agents develop a language
while learning to perform a common task? In this paper, we formulate and study
a MARL problem where cooperative agents are connected to each other via a fixed
underlying network. These agents can communicate along the edges of this
network by exchanging discrete symbols. However, the semantics of these symbols
are not predefined and, during training, the agents are required to develop a
language that helps them in accomplishing their goals. We propose a method for
training these agents using emergent communication. We demonstrate the
applicability of the proposed framework by applying it to the problem of
managing traffic controllers, where we achieve state-of-the-art performance
compared to a number of strong baselines. More importantly, we perform a
detailed analysis of the emergent communication to show, for instance, that the
developed language is grounded and demonstrate its relationship with the
underlying network topology. To the best of our knowledge, this is the only
work that performs an in-depth analysis of emergent communication in a
networked MARL setting while being applicable to a broad class of problems.
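To make the setup concrete, below is a minimal sketch (not the authors' implementation) of the communication pattern the abstract describes: agents sit on a fixed graph, each emits a discrete symbol, and each conditions its action on its own observation plus the symbols received from its neighbors. The Gumbel-softmax relaxation, the mean aggregation of incoming symbols, and all names (CommAgent, VOCAB_SIZE, the toy line graph) are illustrative assumptions; the paper specifies only that discrete symbols with learned, not predefined, semantics are exchanged along network edges.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 8   # size of the discrete symbol set (assumed)
OBS_DIM = 16     # per-agent observation size (assumed)
HIDDEN = 32

class CommAgent(nn.Module):
    """One agent: encodes its observation into a discrete symbol and
    picks an action from its observation plus its neighbors' symbols."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.speaker = nn.Linear(OBS_DIM, VOCAB_SIZE)            # obs -> symbol logits
        self.listener = nn.Linear(OBS_DIM + VOCAB_SIZE, HIDDEN)  # obs + heard symbols
        self.policy = nn.Linear(HIDDEN, n_actions)

    def speak(self, obs: torch.Tensor) -> torch.Tensor:
        # Emit a one-hot symbol; the Gumbel-softmax trick (one common choice,
        # assumed here) keeps the discrete channel differentiable in training.
        return F.gumbel_softmax(self.speaker(obs), tau=1.0, hard=True)

    def act(self, obs: torch.Tensor, heard: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.listener(torch.cat([obs, heard], dim=-1)))
        return self.policy(h)  # action logits

# Fixed underlying network: agent i hears symbols only from its neighbors.
adjacency = {0: [1], 1: [0, 2], 2: [1]}  # toy 3-agent line graph
agents = [CommAgent(n_actions=4) for _ in adjacency]
obs = torch.randn(len(agents), OBS_DIM)

symbols = torch.stack([agents[i].speak(obs[i]) for i in adjacency])
for i, neighbors in adjacency.items():
    heard = symbols[neighbors].mean(dim=0)      # aggregate incoming symbols
    action_logits = agents[i].act(obs[i], heard)

Because no symbol semantics are predefined, whatever meaning the symbols acquire comes entirely from training the speaker and listener ends of the channel against the shared task reward, which is what makes the communication "emergent".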
Related papers
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Verco: Learning Coordinated Verbal Communication for Multi-agent Reinforcement Learning [42.27106057372819]
We propose a novel multi-agent reinforcement learning algorithm that embeds large language models into agents.
The framework has a message module and an action module.
Experiments conducted on the Overcooked game demonstrate that our method significantly enhances the learning efficiency and performance of existing methods.
arXiv Detail & Related papers (2024-04-27T05:10:33Z)
- Fully Independent Communication in Multi-Agent Reinforcement Learning [4.470370168359807]
Multi-Agent Reinforcement Learning (MARL) comprises a broad area of research within the field of multi-agent systems.
We investigate how independent learners in MARL that do not share parameters can communicate.
Our results show that, despite the challenges, independent agents can still learn communication strategies following our method.
arXiv Detail & Related papers (2024-01-26T18:42:01Z)
- Multi-Agent Reinforcement Learning Based on Representational Communication for Large-Scale Traffic Signal Control [13.844458247041711]
Traffic signal control (TSC) is a challenging problem within intelligent transportation systems.
We propose a communication-based MARL framework for large-scale TSC.
Our framework allows each agent to learn a communication policy that dictates "which" part of the message is sent "to whom".
arXiv Detail & Related papers (2023-10-03T21:06:51Z)
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations [59.608216900601384]
We study agents that learn to communicate via actuating their joints in a 3D environment.
We show that, under two realistic assumptions (a non-uniform distribution of intents and a common-knowledge energy cost), these agents can find protocols that generalize to novel partners.
arXiv Detail & Related papers (2020-10-29T19:23:10Z)
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
- Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve performance in a variety of multi-agent cooperative scenarios.
arXiv Detail & Related papers (2020-06-11T14:07:57Z)
- Learning to cooperate: Emergent communication in multi-agent navigation [49.11609702016523]
We show that agents performing a cooperative navigation task learn an interpretable communication protocol.
An analysis of the agents' policies reveals that emergent signals spatially cluster the state space.
Using populations of agents, we show that the emergent protocol has basic compositional structure.
arXiv Detail & Related papers (2020-04-02T16:03:17Z)
- Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework, termed Learning Structured Communication (LSC), that uses a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)
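The CommFormer entry above describes learning the communication graph itself by gradient descent. Below is a toy sketch of that general idea under illustrative assumptions (a sigmoid-relaxed adjacency matrix and a stand-in loss); it is not CommFormer's actual architecture.

import torch

n_agents = 4
# The communication graph itself is a trainable parameter (edge logits).
edge_logits = torch.zeros(n_agents, n_agents, requires_grad=True)
features = torch.randn(n_agents, 8)     # per-agent features (toy data)

soft_adj = torch.sigmoid(edge_logits)   # relax discrete edges to [0, 1] weights
messages = soft_adj @ features          # message passing weighted by edge strength
loss = messages.pow(2).mean()           # stand-in for the end-to-end task loss
loss.backward()                         # gradients reach the graph parameters
print(edge_logits.grad.shape)           # torch.Size([4, 4])

An optimizer step on edge_logits then adjusts which edges the agents favor, concurrently with the rest of the model's parameters.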