Implicit Communication as Minimum Entropy Coupling
- URL: http://arxiv.org/abs/2107.08295v1
- Date: Sat, 17 Jul 2021 17:44:30 GMT
- Title: Implicit Communication as Minimum Entropy Coupling
- Authors: Samuel Sokota, Christian Schroeder de Witt, Maximilian Igl, Luisa
Zintgraf, Philip Torr, Shimon Whiteson, Jakob Foerster
- Abstract summary: In many common-payoff games, achieving good performance requires players to develop protocols for communicating their private information implicitly.
We identify a class of partially observable common-payoff games, which we call implicit referential games, whose difficulty can be attributed to implicit communication.
We show that this method can discover performant implicit communication protocols in settings with very large spaces of messages.
- Score: 42.13333133772116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many common-payoff games, achieving good performance requires players to
develop protocols for communicating their private information implicitly --
i.e., using actions that have non-communicative effects on the environment.
Multi-agent reinforcement learning practitioners typically approach this
problem using independent learning methods in the hope that agents will learn
implicit communication as a byproduct of expected return maximization.
Unfortunately, independent learning methods are incapable of doing this in many
settings. In this work, we isolate the implicit communication problem by
identifying a class of partially observable common-payoff games, which we call
implicit referential games, whose difficulty can be attributed to implicit
communication. Next, we introduce a principled method based on minimum entropy
coupling that leverages the structure of implicit referential games, yielding a
new perspective on implicit communication. Lastly, we show that this method can
discover performant implicit communication protocols in settings with very
large spaces of messages.
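The coupling step at the heart of the method can be illustrated with the common greedy approximation to minimum entropy coupling: given two discrete marginal distributions, repeatedly match the largest remaining mass on each side into a single joint cell. This is a minimal sketch of that greedy approximation, not necessarily the exact procedure used in the paper:

```python
import heapq

def greedy_mec(p, q, tol=1e-12):
    """Greedy approximation to a minimum entropy coupling of two
    discrete marginals p and q: at each step, pair the largest
    remaining mass in p with the largest remaining mass in q and
    transfer as much probability as possible into one joint cell.
    Returns a dict mapping (i, j) -> joint probability mass."""
    # heapq is a min-heap, so store negated masses to pop the max.
    hp = [(-pi, i) for i, pi in enumerate(p)]
    hq = [(-qj, j) for j, qj in enumerate(q)]
    heapq.heapify(hp)
    heapq.heapify(hq)
    joint = {}
    while hp and hq:
        mp, i = heapq.heappop(hp)
        mq, j = heapq.heappop(hq)
        m = min(-mp, -mq)  # mass transferable in this step
        if m <= tol:
            break
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        # Push back whichever side still has leftover mass.
        if -mp - m > tol:
            heapq.heappush(hp, (mp + m, i))
        if -mq - m > tol:
            heapq.heappush(hq, (mq + m, j))
    return joint
```

For example, `greedy_mec([0.5, 0.25, 0.25], [0.6, 0.4])` produces a joint distribution whose row sums recover the first marginal and whose column sums recover the second, while concentrating mass in few cells (i.e., keeping the joint entropy low). Exact minimum entropy coupling is NP-hard, which is why greedy approximations of this kind are used in practice.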
Related papers
- Learning Multi-Agent Communication with Contrastive Learning [3.816854668079928]
We introduce an alternative perspective where communicative messages are considered as different incomplete views of the environment state.
By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning.
In communication-essential environments, our method outperforms previous work in both performance and learning speed.
arXiv Detail & Related papers (2023-07-03T23:51:05Z)
- Cognitive Semantic Communication Systems Driven by Knowledge Graph: Principle, Implementation, and Performance Evaluation [74.38561925376996]
Two cognitive semantic communication frameworks are proposed for the single-user and multiple-user communication scenarios.
An effective semantic correction algorithm is proposed by mining the inference rule from the knowledge graph.
For the multi-user cognitive semantic communication system, a message recovery algorithm is proposed to distinguish messages of different users.
arXiv Detail & Related papers (2023-03-15T12:01:43Z)
- On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning [0.0]
Social learning uses cues from experts to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks.
This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility.
arXiv Detail & Related papers (2023-02-28T03:23:27Z)
- Emergent Quantized Communication [34.31732248872158]
We propose an alternative approach to achieve discrete communication -- quantization of communicated messages.
Message quantization allows us to train the model end-to-end, achieving superior performance in multiple setups.
arXiv Detail & Related papers (2022-11-04T12:39:45Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- Emergent Communication: Generalization and Overfitting in Lewis Games [53.35045559317384]
Lewis signaling games are a class of simple communication games for simulating the emergence of language.
In these games, two agents must agree on a communication protocol in order to solve a cooperative task.
Previous work has shown that agents trained to play this game with reinforcement learning tend to develop languages that display undesirable properties.
arXiv Detail & Related papers (2022-09-30T09:50:46Z)
- Learning to Ground Decentralized Multi-Agent Communication with Contrastive Learning [1.116812194101501]
We introduce an alternative perspective on the communicative messages sent between agents, considering them as different incomplete views of the environment state.
We propose a simple approach to induce the emergence of a common language by maximizing the mutual information between messages of a given trajectory in a self-supervised manner.
arXiv Detail & Related papers (2022-03-07T12:41:32Z)
- The Enforcers: Consistent Sparse-Discrete Methods for Constraining Informative Emergent Communication [5.432350993419402]
Communication enables agents to cooperate to achieve their goals.
Recent work on learning sparse communication suffers from high-variance training, where the price of decreasing communication is a decrease in reward, particularly in cooperative tasks.
This research addresses the above issues by limiting the loss in reward of decreasing communication and eliminating the penalty for discretization.
arXiv Detail & Related papers (2022-01-19T07:31:06Z)
- Quasi-Equivalence Discovery for Zero-Shot Emergent Communication [63.175848843466845]
We present a novel problem setting and the Quasi-Equivalence Discovery (QED) algorithm that allows for zero-shot coordination (ZSC).
We show that these two factors lead to unique optimal ZSC policies in referential games.
QED can iteratively discover the symmetries in this setting and converges to the optimal ZSC policy.
arXiv Detail & Related papers (2021-03-14T23:42:37Z)
- Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations [59.608216900601384]
We study agents that learn to communicate via actuating their joints in a 3D environment.
We show that under two realistic assumptions, a non-uniform distribution of intents and a common-knowledge energy cost, these agents can find protocols that generalize to novel partners.
arXiv Detail & Related papers (2020-10-29T19:23:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.