How Adults Understand What Young Children Say
- URL: http://arxiv.org/abs/2206.07807v1
- Date: Wed, 15 Jun 2022 20:37:32 GMT
- Title: How Adults Understand What Young Children Say
- Authors: Stephan C. Meylan, Ruthe Foushee, Nicole H. Wong, Elika Bergelson, and
Roger P. Levy
- Abstract summary: Children's early speech often bears little resemblance to adult speech in form or content, and yet caregivers often find meaning in young children's utterances.
We propose that successful early communication relies not just on children's growing linguistic knowledge, but also on adults' sophisticated inferences.
- Score: 1.416276307599112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Children's early speech often bears little resemblance to adult speech in
form or content, and yet caregivers often find meaning in young children's
utterances. Precisely how caregivers are able to do this remains poorly
understood. We propose that successful early communication (an essential
building block of language development) relies not just on children's growing
linguistic knowledge, but also on adults' sophisticated inferences. These
inferences, we further propose, are optimized for fine-grained details of how
children speak. We evaluate these ideas using a set of candidate computational
models of spoken word recognition based on deep learning and Bayesian
inference, which instantiate competing hypotheses regarding the information
sources used by adults to understand children. We find that the best-performing
models (evaluated on datasets of adult interpretations of child speech) are
those that have strong prior expectations about what children are likely to
want to communicate, rather than the actual phonetic contents of what children
say. We further find that adults' behavior is best characterized as well-tuned
to specific children: the more closely a word recognition model is tuned to the
particulars of an individual child's actual linguistic behavior, the better it
predicts adults' inferences about what the child has said. These results offer
a comprehensive investigation into the role of caregivers as child-directed
listeners, with broader consequences for theories of language acquisition.
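Concretely, the candidate models frame interpretation as noisy-channel inference: a posterior over intended words combines a prior over what the child is likely to say with a likelihood of the observed phonetics given each candidate. The sketch below illustrates that combination only; the toy lexicon, prior, and edit-distance likelihood are illustrative assumptions, not the authors' implementation (the paper's models use deep learning and Bayesian components).

```python
# A minimal sketch of the noisy-channel inference described in the abstract,
# not the authors' implementation. The listener scores each candidate word by
# combining a prior over what the child is likely to say with a likelihood of
# the observed phone sequence given that word. The toy lexicon, prior, and
# edit-distance likelihood below are illustrative assumptions.
import math

def edit_distance(a, b):
    """Levenshtein distance between two phone sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (x != y))  # substitution
    return dp[-1]

def interpret(observed, lexicon, prior, noise=1.0):
    """Rank candidates by log posterior:
    log P(word | observed) = log P(word) - noise * distance + const."""
    scores = {
        word: math.log(prior[word]) - noise * edit_distance(observed, phones)
        for word, phones in lexicon.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy example: the child produces [d ah k]; a child-tuned prior lets the
# listener settle on "duck" even though "dog" is phonetically close too.
lexicon = {"duck": ["d", "ah", "k"],
           "dog": ["d", "ao", "g"],
           "truck": ["t", "r", "ah", "k"]}
prior = {"duck": 0.5, "dog": 0.3, "truck": 0.2}
print(interpret(["d", "ah", "k"], lexicon, prior))
```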
Related papers
- Developmental Predictive Coding Model for Early Infancy Mono and Bilingual Vocal Continual Learning [69.8008228833895]
We propose a small-sized generative neural network equipped with a continual learning mechanism.
Our model prioritizes interpretability and demonstrates the advantages of online learning.
arXiv Detail & Related papers (2024-12-23T10:23:47Z)
- Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning [84.94709351266557]
We focus on the trustworthiness of language models with respect to retrieval augmentation.
We argue that retrieval-augmented language models have the inherent capability of supplying responses according to both contextual and parametric knowledge.
Inspired by aligning language models with human preferences, we take a first step towards aligning retrieval-augmented language models to a state in which they respond relying solely on external evidence.
arXiv Detail & Related papers (2024-10-22T09:25:21Z)
- KidLM: Advancing Language Models for Children -- Early Insights and Future Directions [7.839083566878183]
We introduce a novel user-centric data collection pipeline that involves gathering and validating a corpus specifically written for and sometimes by children.
We propose a new training objective, Stratified Masking, which dynamically adjusts masking probabilities based on our domain-specific child language data (an illustrative sketch follows this entry).
Experimental evaluations demonstrate that our model excels in understanding lower grade-level text, maintains safety by avoiding stereotypes, and captures children's unique preferences.
arXiv Detail & Related papers (2024-10-04T19:35:44Z)
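The entry above describes Stratified Masking only at a high level. The sketch below illustrates the general idea of per-stratum masking rates (as opposed to a single uniform masking probability); the strata, the toy word list, and the rates are hypothetical assumptions, not the paper's actual scheme.

```python
# A hedged sketch of stratified masking: tokens are masked with
# stratum-specific probabilities rather than one uniform rate, so the model
# is pushed to predict child-salient words from context. All constants here
# are illustrative, not KidLM's actual configuration.
import random

MASK_RATES = {"child_vocab": 0.30, "default": 0.10}  # assumed rates
CHILD_VOCAB = {"puppy", "ball", "mommy", "juice"}    # toy word list

def stratum(token):
    """Assign a token to a masking stratum."""
    return "child_vocab" if token.lower() in CHILD_VOCAB else "default"

def stratified_mask(tokens, mask_token="[MASK]"):
    """Return (masked_tokens, labels); labels hold originals at masked slots."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < MASK_RATES[stratum(tok)]:
            masked.append(mask_token)
            labels.append(tok)   # position is scored against the original
        else:
            masked.append(tok)
            labels.append(None)  # position not scored
    return masked, labels

print(stratified_mask("the puppy chased the ball".split()))
```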
- Visual Grounding Helps Learn Word Meanings in Low-Data Regimes [47.7950860342515]
Modern neural language models (LMs) are powerful tools for modeling human sentence production and comprehension.
But to achieve these results, LMs must be trained in distinctly un-human-like ways.
Do models trained more naturalistically -- with grounded supervision -- exhibit more humanlike language learning?
We investigate this question in the context of word learning, a key sub-task in language acquisition.
arXiv Detail & Related papers (2023-10-20T03:33:36Z)
- BabySLM: language-acquisition-friendly benchmark of self-supervised spoken language models [56.93604813379634]
Self-supervised techniques for learning speech representations have been shown to develop linguistic competence from exposure to speech without the need for human labels.
We propose a language-acquisition-friendly benchmark to probe spoken language models at the lexical and syntactic levels (a toy probing example follows this entry).
We highlight two exciting challenges that need to be addressed for further progress: bridging the gap between text and speech and between clean speech and in-the-wild speech.
arXiv Detail & Related papers (2023-06-02T12:54:38Z)
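A lexical-level probe of this kind typically checks whether a model prefers a real word to a matched pseudoword. The toy example below illustrates that minimal-pair logic with a text LM (GPT-2) as a stand-in scorer; BabySLM itself evaluates spoken language models, so this shows only the evaluation logic, not the benchmark's pipeline.

```python
# Minimal-pair probing sketch: a model should assign higher probability to a
# sentence containing a real word than to the same sentence with a matched
# pseudoword. GPT-2 is used here purely as an illustrative text-based scorer.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def logprob(text):
    """Total log probability the model assigns to the token sequence."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logps = torch.log_softmax(logits[:, :-1], dim=-1)
    return logps.gather(-1, ids[:, 1:, None]).sum().item()

# Minimal pair: the real word should typically outscore the pseudoword.
real, pseudo = "the baby drank the milk", "the baby drank the molk"
print(logprob(real) > logprob(pseudo))
```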
- Computational Language Acquisition with Theory of Mind [84.2267302901888]
We build language-learning agents equipped with Theory of Mind (ToM) and measure its effects on the learning process.
We find that training speakers with a highly weighted ToM listener component leads to performance gains in our image referential game setting.
arXiv Detail & Related papers (2023-03-02T18:59:46Z)
- Improving Children's Speech Recognition by Fine-tuning Self-supervised Adult Speech Representations [2.2191297646252646]
Children's speech recognition is a vital, yet largely overlooked domain when building inclusive speech technologies.
Recent advances in self-supervised learning have created a new opportunity for overcoming this problem of data scarcity.
We leverage self-supervised adult speech representations and use three well-known child speech corpora to build models for children's speech recognition (a minimal fine-tuning sketch follows this entry).
arXiv Detail & Related papers (2022-11-14T22:03:36Z)
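As a rough illustration of the approach summarized above (adapting pretrained adult speech representations to child speech), the sketch below fine-tunes a wav2vec 2.0 CTC model with Hugging Face transformers. The checkpoint, learning rate, and single-example training step are assumptions; the paper's actual recipe, corpora, and hyperparameters differ.

```python
# Hedged sketch: fine-tune a self-supervised adult speech model on child
# speech via CTC. Dataset wiring, batching, and label padding are omitted.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model.freeze_feature_encoder()  # keep low-level acoustic features fixed

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(waveform, transcript):
    """One CTC fine-tuning step on a single (child audio, transcript) pair."""
    inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
    labels = processor(text=transcript, return_tensors="pt").input_ids
    loss = model(inputs.input_values, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```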
- Perception Point: Identifying Critical Learning Periods in Speech for Bilingual Networks [58.24134321728942]
We compare and identify cognitive aspects of deep neural network-based visual lip-reading models.
We observe a strong correspondence between these theories from cognitive psychology and our modeling results.
arXiv Detail & Related papers (2021-10-13T05:30:50Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Child-directed Listening: How Caregiver Inference Enables Children's Early Verbal Communication [2.9331097393290837]
We employ a suite of Bayesian models of spoken word recognition to understand how adults overcome the noisiness of child language.
By evaluating competing models on phonetically-annotated corpora, we show that adults' recovered meanings are best predicted by prior expectations fitted specifically to the child language environment.
arXiv Detail & Related papers (2021-02-06T00:54:34Z)
- Using Known Words to Learn More Words: A Distributional Analysis of Child Vocabulary Development [0.0]
We investigated item-based variability in vocabulary development using lexical properties of distributional statistics.
We predicted word trajectories cross-sectionally, shedding light on trends in vocabulary development that may not have been evident at a single time point.
We also show that whether one looks at a single age group or across ages as a whole, the best distributional predictor of whether a child knows a word is the number of other known words with which that word tends to co-occur (a small sketch of this predictor follows this entry).
arXiv Detail & Related papers (2020-09-15T01:18:21Z)
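The predictor named above is concrete enough to sketch: for a candidate word, count the distinct already-known words it co-occurs with. The corpus, window size, and known-word set below are toy assumptions, not the paper's data.

```python
# Toy version of the co-occurrence predictor: score a candidate word by how
# many *already-known* words it co-occurs with in child-directed text.
from collections import defaultdict

def cooccurrence_counts(sentences, window=5):
    """Map each word to the set of distinct words it co-occurs with."""
    counts = defaultdict(set)
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for v in words[max(0, i - window):i + window + 1]:
                if v != w:
                    counts[w].add(v)
    return counts

def known_neighbor_score(word, known_words, counts):
    """Number of known words the target word co-occurs with."""
    return len(counts[word] & known_words)

corpus = ["the dog chased the ball", "look at the big dog", "drink your milk"]
counts = cooccurrence_counts(corpus)
known = {"the", "ball", "big", "look"}
print(known_neighbor_score("dog", known, counts))  # -> 4
```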
- Learning to Understand Child-directed and Adult-directed Speech [18.29692441616062]
Human language acquisition research indicates that child-directed speech helps language learners.
We compare the task performance of models trained on adult-directed speech (ADS) and child-directed speech (CDS).
We find indications that CDS helps in the initial stages of learning, but eventually models trained on ADS reach comparable task performance and generalize better.
arXiv Detail & Related papers (2020-05-06T10:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.