Learning Multi-Agent Communication with Contrastive Learning
- URL: http://arxiv.org/abs/2307.01403v3
- Date: Thu, 1 Feb 2024 23:12:41 GMT
- Title: Learning Multi-Agent Communication with Contrastive Learning
- Authors: Yat Long Lo, Biswa Sengupta, Jakob Foerster, Michael Noukhovitch
- Abstract summary: We introduce an alternative perspective in which communicative messages are treated as different incomplete views of the environment state.
By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning.
In communication-essential environments, our method outperforms previous work in both performance and learning speed.
- Score: 3.816854668079928
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Communication is a powerful tool for coordination in multi-agent RL. But
inducing an effective, common language is a difficult challenge, particularly
in the decentralized setting. In this work, we introduce an alternative
perspective where communicative messages sent between agents are considered as
different incomplete views of the environment state. By examining the
relationship between messages sent and received, we propose to learn to
communicate using contrastive learning to maximize the mutual information
between messages of a given trajectory. In communication-essential
environments, our method outperforms previous work in both performance and
learning speed. Using qualitative metrics and representation probing, we show
that our method induces more symmetric communication and captures global state
information from the environment. Overall, we show the power of contrastive
learning and the importance of leveraging messages as encodings for effective
communication.
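The contrastive objective described in the abstract can be sketched as an InfoNCE-style loss in which messages from the same trajectory act as positive pairs and other messages in the batch act as negatives. This is a minimal illustration under assumed shapes and a chosen temperature, not the authors' implementation:

```python
import numpy as np

def info_nce(sent, received, temperature=0.1):
    """InfoNCE-style loss: message i sent and message i received (same
    trajectory) form the positive pair; all other pairings in the batch
    serve as negatives."""
    # Normalize embeddings so the dot product is cosine similarity.
    sent = sent / np.linalg.norm(sent, axis=1, keepdims=True)
    received = received / np.linalg.norm(received, axis=1, keepdims=True)
    logits = sent @ received.T / temperature      # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Maximizing the diagonal terms lower-bounds the mutual information
    # between messages of the same trajectory.
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss maximizes a lower bound on the mutual information between paired messages; for a batch of B unrelated message pairs the loss sits near log B.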
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Cognitive Semantic Communication Systems Driven by Knowledge Graph: Principle, Implementation, and Performance Evaluation [74.38561925376996]
Two cognitive semantic communication frameworks are proposed for the single-user and multiple-user communication scenarios.
An effective semantic correction algorithm is proposed by mining the inference rule from the knowledge graph.
For the multi-user cognitive semantic communication system, a message recovery algorithm is proposed to distinguish messages of different users.
arXiv Detail & Related papers (2023-03-15T12:01:43Z)
- On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning [0.0]
Social learning uses cues from experts to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks.
This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility.
arXiv Detail & Related papers (2023-02-28T03:23:27Z)
- Emergent Quantized Communication [34.31732248872158]
We propose an alternative approach to achieve discrete communication -- quantization of communicated messages.
Message quantization allows us to train the model end-to-end, achieving superior performance in multiple setups.
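As a rough sketch of the quantization idea (values and function name are illustrative assumptions, not the paper's code), discretizing a continuous message can be as simple as rounding onto a fixed grid:

```python
import numpy as np

def quantize_message(m, levels=4):
    # Map continuous message values in [-1, 1] onto `levels` evenly
    # spaced bins, returning the bin centers back in [-1, 1].
    scaled = (m + 1.0) / 2.0 * (levels - 1)
    return np.round(scaled) / (levels - 1) * 2.0 - 1.0
```

In a differentiable setting, the forward pass would use the quantized value while the backward pass copies the gradient through unchanged (the straight-through trick), which is what allows end-to-end training despite the discrete bottleneck.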
arXiv Detail & Related papers (2022-11-04T12:39:45Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications [88.68461721069433]
Next generation systems can be potentially enriched by folding message semantics and goals of communication into their design.
This tutorial summarizes the efforts to date, starting from its early adaptations, semantic-aware and task-oriented communications.
The focus is on approaches that utilize information theory to provide the foundations, as well as the significant role of learning in semantics and task-aware communications.
arXiv Detail & Related papers (2022-07-19T16:00:57Z)
- Towards Human-Agent Communication via the Information Bottleneck Principle [19.121541894577298]
We study how trading off these three factors -- utility, informativeness, and complexity -- shapes emergent communication.
We propose Vector-Quantized Variational Information Bottleneck (VQ-VIB), a method for training neural agents to compress inputs into discrete signals embedded in a continuous space.
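A minimal sketch of the vector-quantization step at the heart of such methods (codebook and shapes are illustrative assumptions, not VQ-VIB's actual architecture):

```python
import numpy as np

def vq_encode(z, codebook):
    # Replace each continuous vector z_i with its nearest codebook entry:
    # the index is the discrete signal, the entry its continuous embedding.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (B, K)
    idx = dists.argmin(axis=1)
    return idx, codebook[idx]
```

The discrete index is what gets communicated, while the codebook embedding keeps the signal grounded in a continuous space that downstream networks can consume.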
arXiv Detail & Related papers (2022-06-30T20:10:20Z)
- Learning to Ground Decentralized Multi-Agent Communication with Contrastive Learning [1.116812194101501]
We introduce an alternative perspective on the communicative messages sent between agents, treating them as different incomplete views of the environment state.
We propose a simple approach to induce the emergence of a common language by maximizing the mutual information between messages of a given trajectory in a self-supervised manner.
arXiv Detail & Related papers (2022-03-07T12:41:32Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Learning Emergent Discrete Message Communication for Cooperative Reinforcement Learning [36.468498804251574]
We show that discrete message communication has performance comparable to continuous message communication.
We propose an approach that allows humans to interactively send discrete messages to agents.
arXiv Detail & Related papers (2021-02-24T20:44:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.