Co-evolution of language and agents in referential games
- URL: http://arxiv.org/abs/2001.03361v3
- Date: Sat, 30 Jan 2021 09:33:04 GMT
- Title: Co-evolution of language and agents in referential games
- Authors: Gautier Dagan, Dieuwke Hupkes and Elia Bruni
- Abstract summary: We show that the optimal situation is to take into account the learning biases of the language learners and thus let language and agents co-evolve.
We pave the way to investigate the co-evolution of language in language emergence studies.
- Score: 24.708802957946467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Referential games offer a grounded learning environment for neural agents
which accounts for the fact that language is functionally used to communicate.
However, they do not take into account a second constraint considered to be
fundamental for the shape of human language: that it must be learnable by new
language learners.
Cogswell et al. (2019) introduced cultural transmission within referential
games through a changing population of agents to constrain the emerging
language to be learnable. However, the resulting languages remain inherently
biased by the agents' underlying capabilities.
In this work, we introduce the Language Transmission Engine to model both
cultural and architectural evolution in a population of agents. As our core
contribution, we empirically show that the optimal situation is to also take
into account the learning biases of the language learners and thus let language
and agents co-evolve. When we allow the agent population to evolve through
architectural evolution, we achieve across-the-board improvements on all
considered metrics and surpass the gains made with cultural transmission. These
results stress the importance of studying the underlying agent architecture and
pave the way to investigate the co-evolution of language and agent in language
emergence studies.
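To make the setup concrete, the sketch below shows one round of a toy referential game: a speaker describes a target object with discrete symbols, and a listener must identify the target among distractors. This is a hand-coded illustration only, with a fixed attribute-to-symbol lexicon standing in for the emergent language; the agents in the paper are neural networks whose mapping is learned end-to-end, and all names here (`speak`, `listen`, `play_round`) are hypothetical.

```python
# Toy referential game: one speaker turn, one listener turn.
# The shared lexicon plays the role of the "emergent language";
# in the actual paper this mapping is learned by neural agents.

def speak(target, lexicon):
    """Encode each attribute of the target as a discrete symbol (the message)."""
    return tuple(lexicon[attr] for attr in target)

def listen(message, candidates, lexicon):
    """Pick the candidate whose encoding has the most symbols in common with the message."""
    def overlap(obj):
        return sum(s == m for s, m in zip(speak(obj, lexicon), message))
    return max(range(len(candidates)), key=lambda i: overlap(candidates[i]))

def play_round(objects, target_idx, lexicon):
    """Play one round; return True on communicative success."""
    message = speak(objects[target_idx], lexicon)
    guess = listen(message, objects, lexicon)
    return guess == target_idx

# Objects are attribute tuples; the lexicon maps attributes to symbols.
lexicon = {"red": 0, "blue": 1, "circle": 2, "square": 3}
objects = [("red", "circle"), ("blue", "square"), ("red", "square")]
print(play_round(objects, target_idx=2, lexicon=lexicon))  # True
```

Cultural transmission, as in Cogswell et al. (2019), would periodically replace agents in the population with freshly initialized learners, so only languages that remain learnable survive; architectural evolution additionally mutates the agents themselves.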
Related papers
- Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z) - Emergent communication and learning pressures in language models: a language evolution perspective [5.371337604556311]
We find that the emergent communication literature excels at designing and adapting models to recover initially absent linguistic phenomena of natural languages.
We identify key pressures that have recovered initially absent human patterns in emergent communication models.
This may serve as inspiration for how to design language models for language acquisition and language evolution research.
arXiv Detail & Related papers (2024-03-21T14:33:34Z) - Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations.
arXiv Detail & Related papers (2023-07-31T17:57:49Z) - Transforming Human-Centered AI Collaboration: Redefining Embodied Agents
Capabilities through Interactive Grounded Language Instructions [23.318236094953072]
Human intelligence's adaptability is remarkable, allowing us to adjust to new tasks and multi-modal environments swiftly.
The research community is actively pursuing the development of interactive "embodied agents."
These agents must possess the ability to promptly request feedback in case communication breaks down or instructions are unclear.
arXiv Detail & Related papers (2023-05-18T07:51:33Z) - Computational Language Acquisition with Theory of Mind [84.2267302901888]
We build language-learning agents equipped with Theory of Mind (ToM) and measure its effects on the learning process.
We find that training speakers with a highly weighted ToM listener component leads to performance gains in our image referential game setting.
arXiv Detail & Related papers (2023-03-02T18:59:46Z) - Communication Drives the Emergence of Language Universals in Neural
Agents: Evidence from the Word-order/Case-marking Trade-off [3.631024220680066]
We propose a new Neural-agent Language Learning and Communication framework (NeLLCom) where pairs of speaking and listening agents first learn a miniature language.
We succeed in replicating the trade-off with the new framework without hard-coding specific biases in the agents.
arXiv Detail & Related papers (2023-01-30T17:22:33Z) - Linking Emergent and Natural Languages via Corpus Transfer [98.98724497178247]
We propose a novel way to establish a link by corpus transfer between emergent languages and natural languages.
Our approach showcases non-trivial transfer benefits for two different tasks -- language modeling and image captioning.
We also introduce a novel metric to predict the transferability of an emergent language by translating emergent messages to natural language captions grounded on the same images.
arXiv Detail & Related papers (2022-03-24T21:24:54Z) - Cross-Lingual Ability of Multilingual Masked Language Models: A Study of
Language Structure [54.01613740115601]
We study three language properties: constituent order, composition and word co-occurrence.
Our main conclusion is that the contribution of constituent order and word co-occurrence is limited, while the composition is more crucial to the success of cross-linguistic transfer.
arXiv Detail & Related papers (2022-03-16T07:09:35Z) - Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot *language coordination*.
We require the lead agent to coordinate with a *population* of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z) - Self-play for Data Efficient Language Acquisition [20.86261546611472]
We exploit the symmetric nature of communication in order to improve the efficiency and quality of language acquisition in learning agents.
We show that using self-play as a substitute for direct supervision enables the agent to transfer its knowledge across roles.
arXiv Detail & Related papers (2020-10-10T02:09:19Z) - Emergent Multi-Agent Communication in the Deep Learning Era [26.764052787245728]
The ability to cooperate through language is a defining feature of humans.
As the perceptual, motor and planning capabilities of deep artificial networks increase, researchers are studying whether they can also develop a shared language to interact.
arXiv Detail & Related papers (2020-06-03T17:50:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.