Emergent Communication: Generalization and Overfitting in Lewis Games
- URL: http://arxiv.org/abs/2209.15342v1
- Date: Fri, 30 Sep 2022 09:50:46 GMT
- Title: Emergent Communication: Generalization and Overfitting in Lewis Games
- Authors: Mathieu Rita, Corentin Tallec, Paul Michel, Jean-Bastien Grill,
Olivier Pietquin, Emmanuel Dupoux, Florian Strub
- Abstract summary: Lewis signaling games are a class of simple communication games for simulating the emergence of language.
In these games, two agents must agree on a communication protocol in order to solve a cooperative task.
Previous work has shown that agents trained to play this game with reinforcement learning tend to develop languages that display undesirable properties.
- Score: 53.35045559317384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lewis signaling games are a class of simple communication games for
simulating the emergence of language. In these games, two agents must agree on
a communication protocol in order to solve a cooperative task. Previous work
has shown that agents trained to play this game with reinforcement learning
tend to develop languages that display undesirable properties from a linguistic
point of view (lack of generalization, lack of compositionality, etc.). In this
paper, we aim to provide a better understanding of this phenomenon by
analytically studying the learning problem in Lewis games. As a core
contribution, we demonstrate that the standard objective in Lewis games can be
decomposed into two components: a co-adaptation loss and an information loss.
This decomposition enables us to surface two potential sources of overfitting,
which we show may undermine the emergence of a structured communication
protocol. In particular, when we control for overfitting on the co-adaptation
loss, we recover desired properties in the emergent languages: they are more
compositional and generalize better.
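As a rough sketch of this decomposition (the notation here is ours, not necessarily the paper's: p(x) is the input distribution, \rho_\theta(m|x) the speaker, \pi_\phi(x|m) the listener, and p_\theta(x|m) the posterior over inputs induced by the speaker's protocol), the listener's reconstruction objective can be split as

    L(\theta, \phi) = \mathbb{E}_{x \sim p} \, \mathbb{E}_{m \sim \rho_\theta(\cdot|x)} [ -\log \pi_\phi(x|m) ]
                    = \mathbb{E}_{m} \big[ \mathrm{KL}( p_\theta(\cdot|m) \,\|\, \pi_\phi(\cdot|m) ) \big]   (co-adaptation loss)
                      + H_\theta(X \mid M)                                                                    (information loss)

The co-adaptation term measures how closely the listener matches the posterior implied by the speaker's messages; the information term measures how much uncertainty about the input those messages leave unresolved. The identity is easy to check numerically in a toy tabular Lewis game (illustrative code, not from the paper):

import numpy as np

# Toy tabular Lewis game: 4 inputs, 6 messages (sizes are arbitrary).
p_x = np.full(4, 0.25)                               # input distribution p(x)
speaker = np.random.dirichlet(np.ones(6), size=4)    # rho(m|x), shape (|X|, |M|)
listener = np.random.dirichlet(np.ones(4), size=6)   # pi(x|m), shape (|M|, |X|)

joint = p_x[:, None] * speaker                       # p(x, m)
p_m = joint.sum(axis=0)                              # message marginal p(m)
post = (joint / p_m).T                               # speaker-induced posterior p(x|m)

info_loss = -(joint.T * np.log(post)).sum()                            # H(X|M)
coadapt_loss = (p_m[:, None] * post * np.log(post / listener)).sum()   # E_m KL(p(.|m) || pi(.|m))
total = -(joint.T * np.log(listener)).sum()                            # E[-log pi(x|m)]
assert np.isclose(total, coadapt_loss + info_loss)

Under this reading, a speaker-listener pair can drive the co-adaptation term down without the messages becoming informative or structured, which is one way to interpret the overfitting risk the abstract describes.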
Related papers
- On the Correspondence between Compositionality and Imitation in Emergent
Neural Communication [1.4610038284393165]
Our work explores the link between compositionality and imitation in a Lewis game played by deep neural agents.
Supervised learning tends to produce more average languages, while reinforcement learning introduces a selection pressure toward more compositional languages.
arXiv Detail & Related papers (2023-05-22T11:41:29Z) - EC^2: Emergent Communication for Embodied Control [72.99894347257268]
Embodied control requires agents to leverage multi-modal pre-training to quickly learn how to act in new environments.
We propose Emergent Communication for Embodied Control (EC^2), a novel scheme to pre-train video-language representations for few-shot embodied control.
EC2 is shown to consistently outperform previous contrastive learning methods for both videos and texts as task inputs.
arXiv Detail & Related papers (2023-04-19T06:36:02Z) - Reasoning about Causality in Games [63.930126666879396]
Causal reasoning and game-theoretic reasoning are fundamental topics in artificial intelligence.
We introduce mechanised games, which encode dependencies between agents' decision rules and the distributions governing the game.
We describe correspondences between causal games and other formalisms, and explain how causal games can be used to answer queries that other causal or game-theoretic models do not support.
arXiv Detail & Related papers (2023-01-05T22:47:28Z) - Interpretation of Emergent Communication in Heterogeneous Collaborative
Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z) - Implicit Communication as Minimum Entropy Coupling [42.13333133772116]
In many common-payoff games, achieving good performance requires players to develop protocols for communicating their private information implicitly.
We identify a class of partially observable common-payoff games, which we call implicit referential games, whose difficulty can be attributed to implicit communication.
We show that a method based on minimum entropy coupling can discover performant implicit communication protocols in settings with very large message spaces.
arXiv Detail & Related papers (2021-07-17T17:44:30Z) - Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z) - Emergent Communication of Generalizations [13.14792537601313]
We argue that communicating about a single object in a shared visual context is prone to overfitting and does not encourage language useful beyond concrete reference.
We propose games that require communicating generalizations over sets of objects representing abstract visual concepts.
We find that these games greatly improve systematicity and interpretability of the learned languages.
arXiv Detail & Related papers (2021-06-04T19:02:18Z) - Incorporating Pragmatic Reasoning Communication into Emergent Language [38.134221799334426]
We study the dynamics of linguistic communication across substantially different intelligence levels.
We propose computational models that combine short-term mutual reasoning-based pragmatics with long-term language emergentism.
Our results shed light on the importance of these mechanisms for obtaining more natural, accurate, robust, fine-grained, and succinct utterances.
arXiv Detail & Related papers (2020-06-07T10:31:06Z) - Towards Graph Representation Learning in Emergent Communication [37.8523331078468]
We use graph convolutional networks to support the evolution of language and cooperation in multi-agent systems.
Motivated by an image-based referential game, we propose a graph referential game with varying degrees of complexity.
We show that the emerged communication protocol is robust, that the agents uncover the true factors of variation in the game, and that they learn to generalize beyond the samples encountered during training.
arXiv Detail & Related papers (2020-01-24T15:55:59Z) - Emergence of Pragmatics from Referential Game between Theory of Mind
Agents [64.25696237463397]
We propose an algorithm with which agents can spontaneously learn the ability to "read between the lines" without any explicit hand-designed rules.
We integrate the theory of mind (ToM) in a cooperative multi-agent pedagogical situation and propose an adaptive reinforcement learning (RL) algorithm to develop a communication protocol.
arXiv Detail & Related papers (2020-01-21T19:37:33Z)