Communication Drives the Emergence of Language Universals in Neural
Agents: Evidence from the Word-order/Case-marking Trade-off
- URL: http://arxiv.org/abs/2301.13083v2
- Date: Thu, 1 Jun 2023 03:54:09 GMT
- Authors: Yuchen Lian, Arianna Bisazza, Tessa Verhoef
- Abstract summary: We propose a new Neural-agent Language Learning and Communication framework (NeLLCom) where pairs of speaking and listening agents first learn a miniature language.
We succeed in replicating the trade-off with the new framework without hard-coding specific biases in the agents.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial learners often behave differently from human learners in the
context of neural agent-based simulations of language emergence and change. A
common explanation is the lack of appropriate cognitive biases in these
learners. However, it has also been proposed that more naturalistic settings of
language learning and use could lead to more human-like results. We investigate
this latter account focusing on the word-order/case-marking trade-off, a widely
attested language universal that has proven particularly hard to simulate. We
propose a new Neural-agent Language Learning and Communication framework
(NeLLCom) where pairs of speaking and listening agents first learn a miniature
language via supervised learning, and then optimize it for communication via
reinforcement learning. Following closely the setup of earlier human
experiments, we succeed in replicating the trade-off with the new framework
without hard-coding specific biases in the agents. We see this as an essential
step towards the investigation of language universals with neural learners.
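The two-phase setup the abstract describes (supervised learning of a miniature language, then reinforcement learning driven by communicative success) can be illustrated with a minimal sketch. Note this is an illustrative toy, not the paper's actual implementation: the meaning/utterance inventories, tabular softmax agents, and reward scheme below are assumptions (NeLLCom itself uses neural sequence models).

```python
import math
import random

random.seed(0)

MEANINGS = ["AGENT-PATIENT", "PATIENT-AGENT"]          # who did what to whom
UTTERANCES = ["SOV", "OSV", "SOV-mk", "OSV-mk"]        # word orders, optionally case-marked

# Miniature language taught in the supervised phase (hypothetical mapping)
LANGUAGE = {"AGENT-PATIENT": "SOV", "PATIENT-AGENT": "OSV-mk"}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

class TabularAgent:
    """Maps each input to a softmax distribution over outputs via per-pair logits."""
    def __init__(self, inputs, outputs):
        self.inputs, self.outputs = inputs, outputs
        self.logits = {i: [0.0] * len(outputs) for i in inputs}

    def probs(self, x):
        return softmax(self.logits[x])

    def sample(self, x):
        return random.choices(range(len(self.outputs)), weights=self.probs(x))[0]

    def reinforce(self, x, action, reward, lr=0.5):
        # Softmax-policy REINFORCE update: (indicator - prob) * reward
        p = self.probs(x)
        for a in range(len(self.outputs)):
            grad = ((1.0 if a == action else 0.0) - p[a]) * reward
            self.logits[x][a] += lr * grad

speaker = TabularAgent(MEANINGS, UTTERANCES)    # meaning -> utterance
listener = TabularAgent(UTTERANCES, MEANINGS)   # utterance -> meaning

# Phase 1: supervised learning of the miniature language
for _ in range(200):
    m = random.choice(MEANINGS)
    speaker.reinforce(m, UTTERANCES.index(LANGUAGE[m]), 1.0)
    listener.reinforce(LANGUAGE[m], MEANINGS.index(m), 1.0)

# Phase 2: optimize for communication; reward = 1 if the listener recovers the meaning
for _ in range(200):
    m = random.choice(MEANINGS)
    u = speaker.sample(m)
    guess = listener.sample(UTTERANCES[u])
    reward = 1.0 if listener.outputs[guess] == m else 0.0
    speaker.reinforce(m, u, reward)
    listener.reinforce(UTTERANCES[u], guess, reward)

# Measure communicative success after both phases
correct = sum(
    MEANINGS[listener.sample(UTTERANCES[speaker.sample(m)])] == m
    for m in MEANINGS for _ in range(100)
)
accuracy = correct / 200
print(f"communication accuracy: {accuracy:.2f}")
```

In the actual framework, it is at this second phase that efficiency pressures can reshape the learned language, e.g. dropping redundant case marking when word order already disambiguates roles.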
Related papers
- Teaching Embodied Reinforcement Learning Agents: Informativeness and Diversity of Language Use
It is desirable for embodied agents to have the ability to leverage human language to gain explicit or implicit knowledge for learning tasks.
However, it is not clear how to incorporate rich language use to facilitate task learning.
This paper studies different types of language inputs in facilitating reinforcement learning.
arXiv Detail & Related papers (2024-10-31T17:59:52Z)
- Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning
We focus on the trustworthiness of language models with respect to retrieval augmentation.
We hold that retrieval-augmented language models have the inherent capability of supplying responses according to both contextual and parametric knowledge.
Inspired by aligning language models with human preference, we take the first step towards aligning retrieval-augmented language models to a state where they respond relying solely on the external evidence.
arXiv Detail & Related papers (2024-10-22T09:25:21Z)
- Unveiling the pressures underlying language learning and use in neural networks, large language models, and humans: Lessons from emergent machine-to-machine communication
We review three cases where mismatches between the emergent linguistic behavior of neural agents and humans were resolved.
We identify key pressures at play for language learning and emergence: communicative success, production effort, learnability, and other psycho-/sociolinguistic factors.
arXiv Detail & Related papers (2024-03-21T14:33:34Z)
- Language Generation from Brain Recordings
We propose a generative language BCI that utilizes the capacity of a large language model and a semantic brain decoder.
The proposed model can generate coherent language sequences aligned with the semantic content of visual or auditory language stimuli.
Our findings demonstrate the potential and feasibility of employing BCIs in direct language generation.
arXiv Detail & Related papers (2023-11-16T13:37:21Z)
- Commonsense Knowledge Transfer for Pre-trained Language Models
We introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model.
It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model.
It then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction.
arXiv Detail & Related papers (2023-06-04T15:44:51Z)
- What Artificial Neural Networks Can Tell Us About Human Language Acquisition
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language.
To increase the relevance of learnability results from computational models, we need to train model learners without significant advantages over humans.
arXiv Detail & Related papers (2022-08-17T00:12:37Z)
- Multi-lingual agents through multi-headed neural networks
This paper focuses on cooperative Multi-Agent Reinforcement Learning.
In this context, multiple distinct and incompatible languages can emerge.
We take inspiration from the Continual Learning literature and equip our agents with multi-headed neural networks that enable them to be multi-lingual.
arXiv Detail & Related papers (2021-11-22T11:39:42Z)
- Calibrate your listeners! Robust communication-based training for pragmatic speakers
We propose a method that uses a population of neural listeners to regularize speaker training.
We show that language drift originates from the poor uncertainty calibration of a neural listener.
We evaluate both population-based objectives on reference games, and show that the ensemble method with better calibration enables the speaker to generate pragmatic utterances.
arXiv Detail & Related papers (2021-10-11T17:07:38Z)
- Towards Zero-shot Language Modeling
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z)
- Few-shot Language Coordination by Modeling Theory of Mind
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Compositional Languages Emerge in a Neural Iterated Learning Model
Compositionality enables natural language to represent complex concepts via a structured combination of simpler ones.
We propose an effective neural iterated learning (NIL) algorithm that, when applied to interacting neural agents, facilitates the emergence of a more structured type of language.
arXiv Detail & Related papers (2020-02-04T15:19:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.