An iterated learning model of language change that mixes supervised and unsupervised learning
- URL: http://arxiv.org/abs/2405.20818v3
- Date: Wed, 27 Nov 2024 16:53:31 GMT
- Title: An iterated learning model of language change that mixes supervised and unsupervised learning
- Authors: Jack Bunyan, Seth Bullock, Conor Houghton
- Abstract summary: The iterated learning model is an agent model which simulates the transmission of language from generation to generation.
In each iteration, a language tutor exposes a naïve pupil to a limited training set of utterances, each pairing a random meaning with the signal that conveys it.
The transmission bottleneck ensures that tutors must generalize beyond the training set that they experienced.
- Abstract: The iterated learning model is an agent model which simulates the transmission of language from generation to generation. It is used to study how the language adapts to pressures imposed by transmission. In each iteration, a language tutor exposes a naïve pupil to a limited training set of utterances, each pairing a random meaning with the signal that conveys it. Then the pupil becomes a tutor for a new naïve pupil in the next iteration. The transmission bottleneck ensures that tutors must generalize beyond the training set that they experienced. Repeated cycles of learning and generalization can result in a language that is expressive, compositional and stable. Previously, the agents in the iterated learning model mapped signals to meanings using an artificial neural network but relied on an unrealistic and computationally expensive process of obversion to map meanings to signals. Here, both maps are neural networks, trained separately through supervised learning and together through unsupervised learning in the form of an autoencoder. This avoids the computational burden entailed in obversion and introduces a mixture of supervised and unsupervised learning as observed during language learning in children. The new model demonstrates a linear relationship between the dimensionality of meaning-signal space and effective bottleneck size and suggests that internal reflection on potential utterances is important in language learning and evolution.
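The training scheme described in the abstract can be illustrated with a minimal sketch: each agent holds two separate networks, an encoder (meaning to signal) and a decoder (signal to meaning), trained supervised on tutor utterances and jointly, as an autoencoder, on self-generated round trips. This is not the authors' implementation; the network size, the simple delta-rule update, and the training schedule below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # dimensionality of the binary meaning/signal vectors (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Agent:
    """A pupil with two separate maps: an encoder (meaning -> signal)
    and a decoder (signal -> meaning), each a single sigmoid layer."""
    def __init__(self):
        self.We = rng.normal(0.0, 0.1, (DIM, DIM))  # encoder weights
        self.Wd = rng.normal(0.0, 0.1, (DIM, DIM))  # decoder weights

    def encode(self, m):
        return sigmoid(self.We @ m)

    def decode(self, s):
        return sigmoid(self.Wd @ s)

    def train_supervised(self, meaning, signal, lr=0.5):
        # Supervised phase: fit each map to an observed (meaning, signal)
        # pair from the tutor, using a delta-rule update on each layer.
        for W, x, t in ((self.We, meaning, signal), (self.Wd, signal, meaning)):
            y = sigmoid(W @ x)
            W += lr * np.outer((t - y) * y * (1 - y), x)

    def train_autoencoder(self, meaning, lr=0.5):
        # Unsupervised phase: push decode(encode(m)) back toward m.
        # Simplification: only the decoder is updated here; the encoded
        # signal is treated as a fixed hidden code.
        s = self.encode(meaning)
        y = self.decode(s)
        self.Wd += lr * np.outer((meaning - y) * y * (1 - y), s)

def iterate(generations=10, bottleneck=6, n_meanings=16, epochs=100):
    """Run iterated learning: each pupil sees only a bottleneck-sized
    sample of tutor utterances, then becomes the next tutor."""
    tutor = Agent()
    meanings = rng.integers(0, 2, (n_meanings, DIM)).astype(float)
    for _ in range(generations):
        pupil = Agent()
        sample = rng.choice(n_meanings, bottleneck, replace=False)
        for _ in range(epochs):
            for i in sample:
                m = meanings[i]
                s = (tutor.encode(m) > 0.5).astype(float)  # tutor utterance
                pupil.train_supervised(m, s)
            # internal reflection on a meaning outside any observed pairing
            pupil.train_autoencoder(meanings[rng.integers(n_meanings)])
        tutor = pupil  # the pupil becomes the next generation's tutor
    return tutor, meanings
```

The transmission bottleneck is the `bottleneck < n_meanings` sample: the pupil never observes most meaning-signal pairs and must generalize from the pairs it did see, plus its own autoencoder round trips.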
Related papers
- Developmental Predictive Coding Model for Early Infancy Mono and Bilingual Vocal Continual Learning [69.8008228833895]
We propose a small-sized generative neural network equipped with a continual learning mechanism.
Our model prioritizes interpretability and demonstrates the advantages of online learning.
arXiv Detail & Related papers (2024-12-23T10:23:47Z)
- Visual Grounding Helps Learn Word Meanings in Low-Data Regimes [47.7950860342515]
Modern neural language models (LMs) are powerful tools for modeling human sentence production and comprehension.
But to achieve these results, LMs must be trained in distinctly un-human-like ways.
Do models trained more naturalistically -- with grounded supervision -- exhibit more humanlike language learning?
We investigate this question in the context of word learning, a key sub-task in language acquisition.
arXiv Detail & Related papers (2023-10-20T03:33:36Z)
- Meta predictive learning model of languages in neural circuits [2.5690340428649328]
We propose a mean-field learning model within the predictive coding framework.
Our model reveals that most of the connections become deterministic after learning.
Our model provides a starting point to investigate the connection among brain computation, next-token prediction and general intelligence.
arXiv Detail & Related papers (2023-09-08T03:58:05Z)
- Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations.
arXiv Detail & Related papers (2023-07-31T17:57:49Z)
- Retentive or Forgetful? Diving into the Knowledge Memorizing Mechanism of Language Models [49.39276272693035]
Large-scale pre-trained language models have shown remarkable memorizing ability.
Vanilla neural networks without pre-training have been long observed suffering from the catastrophic forgetting problem.
We find that 1) Vanilla language models are forgetful; 2) Pre-training leads to retentive language models; 3) Knowledge relevance and diversification significantly influence the memory formation.
arXiv Detail & Related papers (2023-05-16T03:50:38Z)
- Communication Drives the Emergence of Language Universals in Neural Agents: Evidence from the Word-order/Case-marking Trade-off [3.631024220680066]
We propose a new Neural-agent Language Learning and Communication framework (NeLLCom) where pairs of speaking and listening agents first learn a miniature language.
We succeed in replicating the trade-off with the new framework without hard-coding specific biases in the agents.
arXiv Detail & Related papers (2023-01-30T17:22:33Z)
- Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- Generative Adversarial Phonology: Modeling unsupervised phonetic and phonological learning with neural networks [0.0]
Training deep neural networks on well-understood dependencies in speech data can provide new insights into how they learn internal representations.
This paper argues that acquisition of speech can be modeled as a dependency between random space and generated speech data in the Generative Adversarial Network architecture.
We propose a methodology to uncover the network's internal representations that correspond to phonetic and phonological properties.
arXiv Detail & Related papers (2020-06-06T20:31:23Z)
- The large learning rate phase of deep learning: the catapult mechanism [50.23041928811575]
We present a class of neural networks with solvable training dynamics.
We find good agreement between our model's predictions and training dynamics in realistic deep learning settings.
We believe our results shed light on characteristics of models trained at different learning rates.
arXiv Detail & Related papers (2020-03-04T17:52:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.