Shaping Shared Languages: Human and Large Language Models' Inductive Biases in Emergent Communication
- URL: http://arxiv.org/abs/2503.04395v1
- Date: Thu, 06 Mar 2025 12:47:54 GMT
- Title: Shaping Shared Languages: Human and Large Language Models' Inductive Biases in Emergent Communication
- Authors: Tom Kouwenhoven, Max Peeperkorn, Roy de Kleijn, Tessa Verhoef
- Abstract summary: We investigate how artificial languages evolve when optimised for inductive biases in humans and large language models (LLMs). We show that referentially grounded vocabularies emerge that enable reliable communication in all conditions, even when humans and LLMs collaborate.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Languages are shaped by the inductive biases of their users. Using a classical referential game, we investigate how artificial languages evolve when optimised for the inductive biases of humans and large language models (LLMs) via Human-Human, LLM-LLM and Human-LLM experiments. We show that referentially grounded vocabularies emerge that enable reliable communication in all conditions, even when humans and LLMs collaborate. Comparisons between conditions reveal that languages optimised for LLMs subtly differ from those optimised for humans. Interestingly, interactions between humans and LLMs alleviate these differences and result in vocabularies that are more human-like than LLM-like. These findings advance our understanding of how the inductive biases of LLMs play a role in the dynamic nature of human language and contribute to maintaining alignment in human and machine communication. In particular, our work underscores the need for new methods that include human interaction in the training processes of LLMs, and shows that using communicative success as a reward signal can be a fruitful, novel direction.
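The classical referential game at the heart of the abstract is simple to sketch. Below is a minimal, hypothetical simulation, not the authors' implementation: a sender names a target with a signal, a receiver guesses the referent, and both agents reinforce their lexicons only on communicative success, which doubles as the reward signal the abstract alludes to. The agent counts, vocabulary size, and the simple reinforcement rule are all illustrative assumptions.

```python
# Minimal sketch of a referential game with communicative success as reward.
# All sizes and the learning rule are illustrative assumptions.
import random

N_OBJECTS = 5      # referents the agents must distinguish
N_SIGNALS = 5      # available vocabulary items
EPISODES = 20_000

# Each agent keeps object-signal association weights: its emergent lexicon.
sender = [[1.0] * N_SIGNALS for _ in range(N_OBJECTS)]
receiver = [[1.0] * N_OBJECTS for _ in range(N_SIGNALS)]

def sample(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

successes = 0
for _ in range(EPISODES):
    target = random.randrange(N_OBJECTS)
    signal = sample(sender[target])      # sender names the target
    guess = sample(receiver[signal])     # receiver interprets the signal
    if guess == target:                  # communicative success as the reward
        sender[target][signal] += 1.0
        receiver[signal][target] += 1.0
        successes += 1

print(f"overall success rate: {successes / EPISODES:.2f}")
```

Run long enough, the two lexicon matrices typically settle into a near one-to-one signal-referent mapping, which is the sense in which a referentially grounded vocabulary can be said to emerge.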
Related papers
- LLMs syntactically adapt their language use to their conversational partner [58.92470092706263]
It has been frequently observed that human speakers align their language use with each other during conversations.
We construct a corpus of conversations between large language models (LLMs) and find that two LLM agents end up making more similar syntactic choices as conversations go on.
arXiv Detail & Related papers (2025-03-10T15:37:07Z)
- How Deep is Love in LLMs' Hearts? Exploring Semantic Size in Human-like Cognition [75.11808682808065]
This study investigates whether large language models (LLMs) exhibit similar tendencies in understanding semantic size.
Our findings reveal that multi-modal training is crucial for LLMs to achieve more human-like understanding.
Lastly, we examine whether LLMs are influenced by attention-grabbing headlines with larger semantic sizes in a real-world web shopping scenario.
arXiv Detail & Related papers (2025-03-01T03:35:56Z)
- Child vs. machine language learning: Can the logical structure of human language unleash LLMs? [0.0]
We argue that human language learning proceeds in a manner that is different in nature from current approaches to training LLMs.
We present evidence from German plural formation by LLMs that confirms our hypothesis: even very powerful implementations produce results that miss aspects of the logic inherent in language, aspects that humans handle without difficulty.
arXiv Detail & Related papers (2025-02-24T16:40:46Z)
- HumT DumT: Measuring and controlling human-like language in LLMs [29.82328120944693]
Human-like language might improve user experience, but might also lead to overreliance and stereotyping.
We introduce HumT and SocioT, metrics for human-like tone and other dimensions of social perceptions in text data.
By measuring HumT across preference and usage datasets, we find that users prefer less human-like outputs from LLMs.
arXiv Detail & Related papers (2025-02-18T20:04:09Z)
- Searching for Structure: Investigating Emergent Communication with Large Language Models [0.10923877073891446]
We simulate a classical referential game in which Large Language Models learn and use artificial languages.
Our results show that initially unstructured holistic languages are indeed shaped to have some structural properties that allow two LLM agents to communicate successfully.
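The summary does not spell out how such structural properties are measured; a common choice in emergent-communication work is topographic similarity, the correlation between pairwise distances in meaning space and in signal space. The sketch below, using Hamming distance and SciPy's spearmanr, is an assumption about the kind of metric involved, not this paper's actual analysis.

```python
# Hedged sketch of topographic similarity, a standard structure metric in
# emergent-communication studies; whether this exact metric is what the
# paper reports is an assumption.
from itertools import combinations
from scipy.stats import spearmanr

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def topographic_similarity(meanings, signals):
    """Spearman correlation of meaning distances vs. signal distances."""
    pairs = list(combinations(range(len(meanings)), 2))
    m_dists = [hamming(meanings[i], meanings[j]) for i, j in pairs]
    s_dists = [hamming(signals[i], signals[j]) for i, j in pairs]
    return spearmanr(m_dists, s_dists).correlation

# Toy example: a perfectly compositional language scores 1.0.
meanings = [(0, 0), (0, 1), (1, 0), (1, 1)]
signals = ["aa", "ab", "ba", "bb"]
print(topographic_similarity(meanings, signals))
```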
arXiv Detail & Related papers (2024-12-10T16:32:19Z)
- HLB: Benchmarking LLMs' Humanlikeness in Language Use [2.438748974410787]
We present a comprehensive humanlikeness benchmark (HLB) evaluating 20 large language models (LLMs).
We collected responses from over 2,000 human participants and compared them to outputs from the LLMs in these experiments.
Our results reveal fine-grained differences in how well LLMs replicate human responses across various linguistic levels.
arXiv Detail & Related papers (2024-09-24T09:02:28Z)
- No Such Thing as a General Learner: Language models and their dual optimization [3.2228025627337864]
We argue that neither humans nor LLMs are general learners, in a variety of senses.
We argue that the performance of LLMs, whether similar or dissimilar to that of humans, does not bear straightforwardly on debates about the importance of human cognitive biases for language.
arXiv Detail & Related papers (2024-08-18T17:01:42Z)
- A Survey on Human Preference Learning for Large Language Models [81.41868485811625]
The recent surge of versatile large language models (LLMs) largely depends on aligning increasingly capable foundation models with human intentions by preference learning.
This survey covers the sources and formats of preference feedback, the modeling and usage of preference signals, as well as the evaluation of the aligned LLMs.
arXiv Detail & Related papers (2024-06-17T03:52:51Z)
- Do Large Language Models Mirror Cognitive Language Processing? [43.68923267228057]
Large Language Models (LLMs) have demonstrated remarkable abilities in text comprehension and logical reasoning.
Brain cognitive processing signals are typically utilized to study human language processing.
arXiv Detail & Related papers (2024-02-28T03:38:20Z)
- Divergences between Language Models and Human Brains [59.100552839650774]
We systematically explore the divergences between human and machine language processing.
We identify two domains that LMs do not capture well: social/emotional intelligence and physical commonsense.
Our results show that fine-tuning LMs on these domains can improve their alignment with human brain responses.
arXiv Detail & Related papers (2023-11-15T19:02:40Z)
- Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations [70.7884839812069]
Large language models (LLMs) have emerged as powerful and general solutions to many natural language tasks.
However, many of the most important applications of language generation are interactive, where an agent has to talk to a person to reach a desired outcome.
In this work, we explore a new method for adapting LLMs with RL for such goal-directed dialogue.
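The summary gives only the shape of the method, so the following is a deliberately toy rendering of the recipe its title suggests: imagine conversations offline, score whether the goal was reached, and improve the policy from those imagined rollouts. The two-turn "dialogue", the goal test, and the reward-weighted update are illustrative stand-ins, not the paper's algorithm.

```python
# Toy stand-in for "RL on imagined conversations": the agent learns, purely
# from imagined rollouts, to ask before recommending. All details are
# hypothetical placeholders for the paper's actual method.
import random

ACTIONS = ["ask_preferences", "recommend", "chitchat"]
policy = {t: {a: 1.0 for a in ACTIONS} for t in range(2)}  # 2 agent turns

def sample_action(t):
    ws = policy[t]
    return random.choices(list(ws), weights=ws.values())[0]

def imagined_episode():
    """Imagine a 2-turn dialogue; goal: ask preferences, then recommend."""
    turns = [sample_action(0), sample_action(1)]
    reward = 1.0 if turns == ["ask_preferences", "recommend"] else 0.0
    return turns, reward

for _ in range(5000):                  # offline: learn only from imagined rollouts
    turns, reward = imagined_episode()
    for t, a in enumerate(turns):
        policy[t][a] += reward         # reward-weighted update

print({t: max(ws, key=ws.get) for t, ws in policy.items()})
```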
arXiv Detail & Related papers (2023-11-09T18:45:16Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation), which lets LLMs communicate through embedding vectors instead of sampled natural-language tokens.
By deviating from natural language, CIPHER offers the advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
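As a rough illustration of the summary above, the sketch below assumes a CIPHER-style message is the expectation of token embeddings under the model's output distribution, passed to the next agent in place of a single sampled token; the shapes, names, and stand-in embedding table are all assumptions.

```python
# Simplified sketch of embedding-based communication: pass the
# probability-weighted average of token embeddings rather than one token,
# so the "message" retains the model's full belief. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 1000, 64
embedding_table = rng.normal(size=(VOCAB, DIM))  # stand-in for the model's embeddings

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def cipher_message(logits):
    """Probability-weighted average of token embeddings, not a sampled token."""
    probs = softmax(logits)          # the model's full belief over the vocabulary
    return probs @ embedding_table   # one dense vector carries the whole distribution

logits = rng.normal(size=VOCAB)
message = cipher_message(logits)     # would be fed to the next agent as an input embedding
print(message.shape)                 # (64,)
```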
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
- Dissociating language and thought in large language models [52.39241645471213]
Large Language Models (LLMs) have come closest among all models to date to mastering human language.
We ground the distinction between formal and functional linguistic competence in human neuroscience, which has shown that the two rely on different neural mechanisms.
Although LLMs are surprisingly good at formal competence, their performance on functional competence tasks remains spotty.
arXiv Detail & Related papers (2023-01-16T22:41:19Z)