Co-evolution of language and agents in referential games
- URL: http://arxiv.org/abs/2001.03361v3
- Date: Sat, 30 Jan 2021 09:33:04 GMT
- Title: Co-evolution of language and agents in referential games
- Authors: Gautier Dagan, Dieuwke Hupkes and Elia Bruni
- Abstract summary: We show that the optimal situation is to take into account the learning biases of the language learners and thus let language and agents co-evolve.
This paves the way to investigating the co-evolution of language and agents in language emergence studies.
- Score: 24.708802957946467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Referential games offer a grounded learning environment for neural agents
which accounts for the fact that language is functionally used to communicate.
However, they do not take into account a second constraint considered to be
fundamental for the shape of human language: that it must be learnable by new
language learners.
Cogswell et al. (2019) introduced cultural transmission within referential
games through a changing population of agents to constrain the emerging
language to be learnable. However, the resulting languages remain inherently
biased by the agents' underlying capabilities.
In this work, we introduce the Language Transmission Engine to model both
cultural and architectural evolution in a population of agents. As our core
contribution, we empirically show that the optimal situation is to also take
into account the learning biases of the language learners and thus let language
and agents co-evolve. When we allow the agent population to evolve through
architectural evolution, we achieve across-the-board improvements on all
considered metrics and surpass the gains made with cultural transmission. These
results stress the importance of studying the underlying agent architecture and
pave the way for investigating the co-evolution of language and agents in
language emergence studies.
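The core setup the abstract refers to, a referential game between a speaker and a listener, can be illustrated with a small self-contained sketch. The code below is a hypothetical toy version, not the authors' Language Transmission Engine: both agents use tabular policies trained with a plain REINFORCE-style update, and all names (play_round, N_CANDIDATES, etc.) are illustrative assumptions rather than anything taken from the paper.

```python
# Minimal referential-game sketch (illustrative only, not the paper's method):
# a speaker sees a target object, emits one discrete symbol, and a listener
# must pick the target out of a small candidate set. Communicative success is
# the shared reward; both agents use tabular policies and REINFORCE updates.
import random
import numpy as np

N_OBJECTS = 5      # distinct objects that can serve as referents
VOCAB_SIZE = 5     # discrete symbols available to the speaker
N_CANDIDATES = 3   # listener chooses among this many objects
LR = 0.1           # learning rate for both agents

# speaker_logits[o, s]: speaker's preference for symbol s given object o
speaker_logits = np.zeros((N_OBJECTS, VOCAB_SIZE))
# listener_logits[s, o]: listener's preference for object o given symbol s
listener_logits = np.zeros((VOCAB_SIZE, N_OBJECTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def play_round():
    """Play one referential game episode; return 1.0 on communicative success."""
    candidates = random.sample(range(N_OBJECTS), N_CANDIDATES)
    target = random.choice(candidates)

    # Speaker samples a symbol conditioned on the target object.
    p_msg = softmax(speaker_logits[target])
    msg = np.random.choice(VOCAB_SIZE, p=p_msg)

    # Listener scores only the candidate objects and picks one.
    p_obj = softmax(listener_logits[msg, candidates])
    guess_idx = np.random.choice(N_CANDIDATES, p=p_obj)
    guess = candidates[guess_idx]

    reward = 1.0 if guess == target else 0.0
    adv = reward - 0.5  # crude constant baseline

    # REINFORCE update for the speaker: grad of log p(msg | target) w.r.t. logits.
    grad_sp = -p_msg
    grad_sp[msg] += 1.0
    speaker_logits[target] += LR * adv * grad_sp

    # REINFORCE update for the listener: grad of log p(guess | msg, candidates).
    grad_li = -p_obj
    grad_li[guess_idx] += 1.0
    listener_logits[msg, candidates] += LR * adv * grad_li

    return reward

if __name__ == "__main__":
    for epoch in range(50):
        success = np.mean([play_round() for _ in range(500)])
        if (epoch + 1) % 10 == 0:
            print(f"epoch {epoch + 1:3d}  communicative success: {success:.2f}")
```

In this toy framing, cultural transmission in the sense of Cogswell et al. (2019) would correspond to periodically replacing one agent with a freshly initialized one that must relearn the current language, while the architectural co-evolution studied in the paper would additionally let the agents' underlying capacity vary across generations.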
Related papers
- Teaching Embodied Reinforcement Learning Agents: Informativeness and Diversity of Language Use [16.425032085699698]
It is desirable for embodied agents to have the ability to leverage human language to gain explicit or implicit knowledge for learning tasks.
However, it is not clear how to incorporate rich language use to facilitate task learning.
This paper studies different types of language inputs in facilitating reinforcement learning.
- Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning [84.94709351266557]
We focus on the trustworthiness of language models with respect to retrieval augmentation.
We deem that retrieval-augmented language models have the inherent capability of supplying responses according to both contextual and parametric knowledge.
Inspired by aligning language models with human preferences, we take the first step towards aligning retrieval-augmented language models to a state where they respond relying solely on external evidence.
- Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
- Unveiling the pressures underlying language learning and use in neural networks, large language models, and humans: Lessons from emergent machine-to-machine communication [5.371337604556311]
We review three cases where mismatches between the emergent linguistic behavior of neural agents and humans were resolved.
We identify key pressures at play for language learning and emergence: communicative success, production effort, learnability, and other psycho-/sociolinguistic factors.
- Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations.
- Transforming Human-Centered AI Collaboration: Redefining Embodied Agents Capabilities through Interactive Grounded Language Instructions [23.318236094953072]
Human intelligence's adaptability is remarkable, allowing us to adjust to new tasks and multi-modal environments swiftly.
The research community is actively pursuing the development of interactive "embodied agents".
These agents must possess the ability to promptly request feedback in case communication breaks down or instructions are unclear.
- Computational Language Acquisition with Theory of Mind [84.2267302901888]
We build language-learning agents equipped with Theory of Mind (ToM) and measure its effects on the learning process.
We find that training speakers with a highly weighted ToM listener component leads to performance gains in our image referential game setting.
- Communication Drives the Emergence of Language Universals in Neural Agents: Evidence from the Word-order/Case-marking Trade-off [3.631024220680066]
We propose a new Neural-agent Language Learning and Communication framework (NeLLCom) where pairs of speaking and listening agents first learn a miniature language.
We succeed in replicating the trade-off with the new framework without hard-coding specific biases in the agents.
- Linking Emergent and Natural Languages via Corpus Transfer [98.98724497178247]
We propose a novel way to establish a link by corpus transfer between emergent languages and natural languages.
Our approach showcases non-trivial transfer benefits for two different tasks -- language modeling and image captioning.
We also introduce a novel metric to predict the transferability of an emergent language by translating emergent messages to natural language captions grounded on the same images.
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
- Emergent Multi-Agent Communication in the Deep Learning Era [26.764052787245728]
The ability to cooperate through language is a defining feature of humans.
As the perceptual, motor and planning capabilities of deep artificial networks increase, researchers are studying whether they can also develop a shared language to interact.
This list is automatically generated from the titles and abstracts of the papers on this site.