On the Correspondence between Compositionality and Imitation in Emergent
Neural Communication
- URL: http://arxiv.org/abs/2305.12941v1
- Date: Mon, 22 May 2023 11:41:29 GMT
- Title: On the Correspondence between Compositionality and Imitation in Emergent
Neural Communication
- Authors: Emily Cheng, Mathieu Rita, Thierry Poibeau
- Abstract summary: Our work explores the link between compositionality and imitation in a Lewis game played by deep neural agents.
The learning algorithm used to imitate is crucial: supervised learning tends to produce more average languages, while reinforcement learning introduces a selection pressure toward more compositional languages.
- Score: 1.4610038284393165
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compositionality is a hallmark of human language that not only enables
linguistic generalization, but also potentially facilitates acquisition. When
simulating language emergence with neural networks, compositionality has been
shown to improve communication performance; however, its impact on imitation
learning has yet to be investigated. Our work explores the link between
compositionality and imitation in a Lewis game played by deep neural agents.
Our contributions are twofold: first, we show that the learning algorithm used
to imitate is crucial: supervised learning tends to produce more average
languages, while reinforcement learning introduces a selection pressure toward
more compositional languages. Second, our study reveals that compositional
languages are easier to imitate, which may induce the pressure toward
compositional languages in RL imitation settings.
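To make the abstract's central contrast concrete, the minimal sketch below shows what the two imitation objectives can look like for a toy speaker in a Lewis-style setting: a supervised cross-entropy loss toward a teacher's messages versus a REINFORCE loss that rewards reproducing them. This is an illustrative assumption-laden sketch, not the authors' implementation; the Speaker architecture, vocabulary size, message length, and reward definition are all hypothetical choices made for the example.

```python
# Minimal sketch (not the paper's code) of two ways a student speaker can
# imitate a teacher's language: supervised cross-entropy vs. REINFORCE.
# All sizes and the toy Speaker below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MSG_LEN, N_MEANINGS = 10, 4, 32

class Speaker(nn.Module):
    """Maps a one-hot meaning to logits over a fixed-length message."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(N_MEANINGS, MSG_LEN * VOCAB)

    def forward(self, meanings):                       # (B, N_MEANINGS)
        return self.net(meanings).view(-1, MSG_LEN, VOCAB)

def supervised_imitation_loss(student, meanings, teacher_msgs):
    """Cross-entropy toward the teacher's symbols at every position."""
    logits = student(meanings)
    return F.cross_entropy(logits.reshape(-1, VOCAB), teacher_msgs.reshape(-1))

def reinforce_imitation_loss(student, meanings, teacher_msgs):
    """REINFORCE: sample a message, reward agreement with the teacher.
    (A baseline would normally be subtracted to reduce variance.)"""
    logits = student(meanings)
    dist = torch.distributions.Categorical(logits=logits)
    sample = dist.sample()                                    # (B, MSG_LEN)
    reward = (sample == teacher_msgs).float().mean(dim=-1)    # fraction imitated
    log_prob = dist.log_prob(sample).sum(dim=-1)
    return -(reward * log_prob).mean()

if __name__ == "__main__":
    student = Speaker()
    meanings = F.one_hot(torch.arange(N_MEANINGS), N_MEANINGS).float()
    teacher_msgs = torch.randint(0, VOCAB, (N_MEANINGS, MSG_LEN))
    print(supervised_imitation_loss(student, meanings, teacher_msgs).item())
    print(reinforce_imitation_loss(student, meanings, teacher_msgs).item())
```

The intuition behind the abstract's contrast is visible in the two losses: the supervised objective pushes the student toward the average of whatever the teacher produces, whereas the sampling-and-reward objective only reinforces messages the student actually manages to reproduce, which can act as a selection pressure. A generic way to quantify how compositional a language is appears in a second sketch after the related-papers list below.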
Related papers
- NeLLCom-X: A Comprehensive Neural-Agent Framework to Simulate Language Learning and Group Communication [2.184775414778289]
The recently introduced NeLLCom framework allows agents to first learn an artificial language and then use it to communicate.
We extend this framework by introducing more realistic role-alternating agents and group communication.
arXiv Detail & Related papers (2024-07-19T03:03:21Z) - The Role of Language Imbalance in Cross-lingual Generalisation: Insights from Cloned Language Experiments [57.273662221547056]
In this study, we investigate a counterintuitive, novel driver of cross-lingual generalisation: language imbalance.
We observe that the existence of a predominant language during training boosts the performance of less frequent languages.
As we extend our analysis to real languages, we find that infrequent languages still benefit from frequent ones, yet whether language imbalance causes cross-lingual generalisation in that setting remains inconclusive.
arXiv Detail & Related papers (2024-04-11T17:58:05Z) - Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling [47.7950860342515]
LexiContrastive Grounding (LCG) is a grounded language learning procedure that leverages visual supervision to improve textual representations.
LCG outperforms standard language-only models in learning efficiency.
It improves upon vision-and-language learning procedures including CLIP, GIT, Flamingo, and Vokenization.
arXiv Detail & Related papers (2024-03-21T16:52:01Z) - What Makes a Language Easy to Deep-Learn? [5.871583927216651]
A fundamental property of language is its compositional structure, allowing humans to produce forms for new meanings.
For humans, languages with more compositional and transparent structures are typically easier to learn than those with opaque and irregular structures.
This learnability advantage has not yet been shown for deep neural networks, limiting their use as models for human language learning.
arXiv Detail & Related papers (2023-02-23T18:57:34Z) - Communication Drives the Emergence of Language Universals in Neural
Agents: Evidence from the Word-order/Case-marking Trade-off [3.631024220680066]
We propose a new Neural-agent Language Learning and Communication framework (NeLLCom) where pairs of speaking and listening agents first learn a miniature language.
We succeed in replicating the trade-off with the new framework without hard-coding specific biases in the agents.
arXiv Detail & Related papers (2023-01-30T17:22:33Z) - Linking Emergent and Natural Languages via Corpus Transfer [98.98724497178247]
We propose a novel way to establish such a link via corpus transfer between emergent languages and natural languages.
Our approach showcases non-trivial transfer benefits for two different tasks -- language modeling and image captioning.
We also introduce a novel metric to predict the transferability of an emergent language by translating emergent messages to natural language captions grounded on the same images.
arXiv Detail & Related papers (2022-03-24T21:24:54Z) - Low-Dimensional Structure in the Space of Language Representations is
Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z) - Bridging Linguistic Typology and Multilingual Machine Translation with
Multi-View Language Representations [83.27475281544868]
We use singular vector canonical correlation analysis to study what kind of information is induced from each source.
We observe that our representations embed typology and strengthen correlations with language relationships.
We then take advantage of our multi-view language vector space for multilingual machine translation, where we achieve competitive overall translation accuracy.
arXiv Detail & Related papers (2020-04-30T16:25:39Z) - Meta-Transfer Learning for Code-Switched Speech Recognition [72.84247387728999]
We propose a new learning method, meta-transfer learning, to transfer learn on a code-switched speech recognition system in a low-resource setting.
Our model learns to recognize individual languages and to transfer that knowledge so as to better recognize mixed-language speech, by conditioning the optimization on code-switching data.
arXiv Detail & Related papers (2020-04-29T14:27:19Z) - Compositionality and Generalization in Emergent Languages [42.68870559695238]
We study whether the language emerging in deep multi-agent simulations possesses a similar ability to refer to novel primitive combinations.
We find no correlation between the degree of compositionality of an emergent language and its ability to generalize.
The more compositional a language is, the more easily it will be picked up by new learners.
arXiv Detail & Related papers (2020-04-20T08:30:14Z) - Compositional Languages Emerge in a Neural Iterated Learning Model [27.495624644227888]
Compositionality enables natural language to represent complex concepts via a structured combination of simpler ones.
We propose an effective neural iterated learning (NIL) algorithm that, when applied to interacting neural agents, facilitates the emergence of a more structured type of language.
arXiv Detail & Related papers (2020-02-04T15:19:09Z)
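The abstract and several of the related papers above refer to the "degree of compositionality" of an emergent language. A metric commonly used for this in the emergent-communication literature is topographic similarity: the correlation between pairwise distances in meaning space and pairwise distances between the corresponding messages. The sketch below is a generic illustration on a toy attribute-value meaning space; it is an assumption for exposition, not code or the exact measure from any of the papers listed.

```python
# Generic sketch of topographic similarity, a common (assumed here, not
# paper-specific) compositionality measure: Spearman correlation between
# meaning-space distances and message-space distances.
import itertools
import numpy as np
from scipy.stats import spearmanr

def hamming(a, b):
    """Distance between two equal-length symbol sequences."""
    return sum(x != y for x, y in zip(a, b))

def topographic_similarity(meanings, messages):
    """meanings: list of attribute tuples; messages: list of symbol tuples."""
    pairs = list(itertools.combinations(range(len(meanings)), 2))
    meaning_d = [hamming(meanings[i], meanings[j]) for i, j in pairs]
    message_d = [hamming(messages[i], messages[j]) for i, j in pairs]
    corr, _ = spearmanr(meaning_d, message_d)
    return corr

if __name__ == "__main__":
    # Toy two-attribute meaning space. A perfectly compositional language
    # dedicates one symbol to each attribute, so topographic similarity is 1.0;
    # an arbitrary (holistic-like) meaning-to-message mapping scores lower.
    meanings = [(a, b) for a in range(3) for b in range(3)]
    compositional = list(meanings)                 # one symbol per attribute
    rng = np.random.default_rng(0)
    holistic = [compositional[p] for p in rng.permutation(len(meanings))]
    print("compositional:", topographic_similarity(meanings, compositional))
    print("holistic-like:", topographic_similarity(meanings, holistic))
```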
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.