Learning to Ground Decentralized Multi-Agent Communication with
Contrastive Learning
- URL: http://arxiv.org/abs/2203.03344v1
- Date: Mon, 7 Mar 2022 12:41:32 GMT
- Title: Learning to Ground Decentralized Multi-Agent Communication with
Contrastive Learning
- Authors: Yat Long Lo and Biswa Sengupta
- Abstract summary: We introduce an alternative perspective to the communicative messages sent between agents, considering them as different incomplete views of the environment state.
We propose a simple approach to induce the emergence of a common language by maximizing the mutual information between messages of a given trajectory in a self-supervised manner.
- Score: 1.116812194101501
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For communication to happen successfully, a common language is required
between agents to understand information communicated by one another. Inducing
the emergence of a common language has been a difficult challenge for
multi-agent learning systems. In this work, we introduce an alternative
perspective to the communicative messages sent between agents, considering them
as different incomplete views of the environment state. Based on this
perspective, we propose a simple approach to induce the emergence of a common
language by maximizing the mutual information between messages of a given
trajectory in a self-supervised manner. By evaluating our method in
communication-essential environments, we empirically show how our method leads
to better learning performance and speed, and learns a more consistent common
language than existing methods, without introducing additional learning
parameters.
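
The mutual-information objective described above is commonly realized with an InfoNCE-style contrastive loss, where messages drawn from the same trajectory form positive pairs and other messages in the batch serve as negatives. The following is a minimal sketch of such a loss under assumed shapes and names; it is illustrative, not the authors' implementation:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE lower bound on mutual information between message views.

    anchors, positives: (batch, dim) arrays of agent messages; row i of
    each array is assumed to come from the same trajectory (a positive
    pair), while all other rows in the batch act as negatives.
    """
    # L2-normalize so the dot product becomes cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy with the diagonal (the matching pair) as the target
    return -np.mean(np.diag(log_softmax))
```

Minimizing this loss pushes messages from the same trajectory toward agreement while pushing apart messages from different trajectories, which is one standard way to induce a shared representation without extra learned parameters beyond the message encoders themselves.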
Related papers
- Learning Multi-Agent Communication with Contrastive Learning [3.816854668079928]
We introduce an alternative perspective where communicative messages are considered as different incomplete views of the environment state.
By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning.
In communication-essential environments, our method outperforms previous work in both performance and learning speed.
arXiv Detail & Related papers (2023-07-03T23:51:05Z) - Commonsense Knowledge Transfer for Pre-trained Language Models [83.01121484432801]
We introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model.
It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model.
It then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction.
arXiv Detail & Related papers (2023-06-04T15:44:51Z) - Cognitive Semantic Communication Systems Driven by Knowledge Graph:
Principle, Implementation, and Performance Evaluation [74.38561925376996]
Two cognitive semantic communication frameworks are proposed for the single-user and multiple-user communication scenarios.
An effective semantic correction algorithm is proposed by mining the inference rule from the knowledge graph.
For the multi-user cognitive semantic communication system, a message recovery algorithm is proposed to distinguish messages of different users.
arXiv Detail & Related papers (2023-03-15T12:01:43Z) - On the Role of Emergent Communication for Social Learning in Multi-Agent
Reinforcement Learning [0.0]
Social learning uses cues from experts to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks.
This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility.
arXiv Detail & Related papers (2023-02-28T03:23:27Z) - Over-communicate no more: Situated RL agents learn concise communication
protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL)
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z) - Learning to Ground Multi-Agent Communication with Autoencoders [43.22048280036316]
Communication requires a common language, a lingua franca, between agents.
We demonstrate a simple way to ground language in learned representations.
We find that a standard representation learning algorithm is sufficient for arriving at a grounded common language.
arXiv Detail & Related papers (2021-10-28T17:57:26Z) - Learning Emergent Discrete Message Communication for Cooperative
Reinforcement Learning [36.468498804251574]
We show that discrete message communication has performance comparable to continuous message communication.
We propose an approach that allows humans to interactively send discrete messages to agents.
arXiv Detail & Related papers (2021-02-24T20:44:14Z) - Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent
Populations [59.608216900601384]
We study agents that learn to communicate via actuating their joints in a 3D environment.
We show that under realistic assumptions, a non-uniform distribution of intents and a common-knowledge energy cost, these agents can find protocols that generalize to novel partners.
arXiv Detail & Related papers (2020-10-29T19:23:10Z) - Multi-agent Communication meets Natural Language: Synergies between
Functional and Structural Language Learning [16.776753238108036]
We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning.
Our starting point is a language model that has been trained on generic, not task-specific language data.
We then place this model in a multi-agent self-play environment that generates task-specific rewards used to adapt or modulate the model.
arXiv Detail & Related papers (2020-05-14T15:32:23Z) - Experience Grounds Language [185.73483760454454]
Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates.
Despite the incredible effectiveness of language processing models to tackle tasks after being trained on text alone, successful linguistic communication relies on a shared experience of the world.
arXiv Detail & Related papers (2020-04-21T16:56:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.