Child-directed Listening: How Caregiver Inference Enables Children's Early Verbal Communication
- URL: http://arxiv.org/abs/2102.03462v2
- Date: Tue, 9 Feb 2021 06:35:47 GMT
- Title: Child-directed Listening: How Caregiver Inference Enables Children's Early Verbal Communication
- Authors: Stephan C. Meylan, Ruthe Foushee, Elika Bergelson, Roger P. Levy
- Abstract summary: We employ a suite of Bayesian models of spoken word recognition to understand how adults overcome the noisiness of child language.
By evaluating competing models on phonetically-annotated corpora, we show that adults' recovered meanings are best predicted by prior expectations fitted specifically to the child language environment.
- Score: 2.9331097393290837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How do adults understand children's speech? Children's productions over the
course of language development often bear little resemblance to typical adult
pronunciations, yet caregivers nonetheless reliably recover meaning from them.
Here, we employ a suite of Bayesian models of spoken word recognition to
understand how adults overcome the noisiness of child language, showing that
communicative success between children and adults relies heavily on adult
inferential processes. By evaluating competing models on phonetically-annotated
corpora, we show that adults' recovered meanings are best predicted by prior
expectations fitted specifically to the child language environment, rather than
to typical adult-adult language. After quantifying the contribution of this
"child-directed listening" over developmental time, we discuss the consequences
for theories of language acquisition, as well as the implications for
commonly-used methods for assessing children's linguistic proficiency.
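
To make the modeling setup concrete, here is a minimal noisy-channel sketch of the kind of Bayesian word recognition described above: each candidate word is scored by combining a prior over words with a likelihood of the child's phonetic production given that word, and the posterior over candidates is returned. The unigram priors, the edit-distance noise model, the noise_rate parameter, and the toy counts below are illustrative assumptions, not the paper's fitted models or corpora.

```python
import math
from collections import Counter

# Minimal sketch (assumptions, not the authors' implementation) of Bayesian
# spoken word recognition over a noisy child production: score each candidate
# word w by log P(w) + log P(production | w) and normalize to a posterior.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two phone strings (one character per phone)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def recover_word(production: str, prior_counts: Counter, noise_rate: float = 0.3):
    """Rank candidate words by posterior probability under a toy noisy channel."""
    total = sum(prior_counts.values())
    log_scores = {}
    for word, count in prior_counts.items():
        log_prior = math.log(count / total)
        # Likelihood falls off with each phone-level edit (illustrative noise model).
        log_likelihood = edit_distance(production, word) * math.log(noise_rate)
        log_scores[word] = log_prior + log_likelihood
    norm = math.log(sum(math.exp(s) for s in log_scores.values()))
    posterior = {w: math.exp(s - norm) for w, s in log_scores.items()}
    return sorted(posterior.items(), key=lambda kv: -kv[1])

# Toy priors: child-language-environment counts vs. adult-adult counts (made up).
cds_prior = Counter({"dog": 50, "duck": 40, "dock": 2})
ads_prior = Counter({"dog": 10, "duck": 5, "dock": 30})

child_says = "dak"  # noisy child production, written as a phone string
print(recover_word(child_says, cds_prior))  # mass goes to "dog"/"duck"
print(recover_word(child_says, ads_prior))  # the adult-adult prior favors "dock"
```

Because the toy production is equally far from every candidate here, the two runs differ only in their priors, mirroring the contrast the paper draws between expectations fitted to the child language environment and those fitted to adult-adult language.
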
Related papers
- Towards Developmentally Plausible Rewards: Communicative Success as a Learning Signal for Interactive Language Models [49.22720751953838]
We propose a method for training language models in an interactive setting inspired by child language acquisition. In our setting, a speaker attempts to communicate some information to a listener in a single-turn dialogue and receives a reward if communicative success is achieved.
arXiv Detail & Related papers (2025-05-09T11:48:36Z)
- Developmental Predictive Coding Model for Early Infancy Mono and Bilingual Vocal Continual Learning [69.8008228833895]
We propose a small-sized generative neural network equipped with a continual learning mechanism.
Our model prioritizes interpretability and demonstrates the advantages of online learning.
arXiv Detail & Related papers (2024-12-23T10:23:47Z)
- Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning [84.94709351266557]
We focus on the trustworthiness of language models with respect to retrieval augmentation.
We hold that retrieval-augmented language models have the inherent capability of supplying responses according to both contextual and parametric knowledge.
Inspired by aligning language models with human preferences, we take the first step towards aligning retrieval-augmented language models to a state where they respond relying solely on the external evidence.
arXiv Detail & Related papers (2024-10-22T09:25:21Z)
- A model of early word acquisition based on realistic-scale audiovisual naming events [10.047470656294333]
We studied the extent to which early words can be acquired through statistical learning from regularities in audiovisual sensory input.
We simulated word learning in infants up to 12 months of age in a realistic setting, using a model that learns from statistical regularities in raw speech and pixel-level visual input.
Results show that the model effectively learns to recognize words and associate them with corresponding visual objects, with a vocabulary growth rate comparable to that observed in infants.
arXiv Detail & Related papers (2024-06-07T21:05:59Z)
- Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition [48.29355616574199]
We analyze the transferability of emotion recognition across three different languages--English, Mandarin Chinese, and Cantonese.
This study concludes that different language and age groups require specific speech features, thus making cross-lingual inference an unsuitable method.
arXiv Detail & Related papers (2023-06-26T08:48:08Z)
- BabySLM: language-acquisition-friendly benchmark of self-supervised spoken language models [56.93604813379634]
Self-supervised techniques for learning speech representations have been shown to develop linguistic competence from exposure to speech without the need for human labels.
We propose a language-acquisition-friendly benchmark to probe spoken language models at the lexical and syntactic levels.
We highlight two exciting challenges that need to be addressed for further progress: bridging the gap between text and speech and between clean speech and in-the-wild speech.
arXiv Detail & Related papers (2023-06-02T12:54:38Z)
- Computational Language Acquisition with Theory of Mind [84.2267302901888]
We build language-learning agents equipped with Theory of Mind (ToM) and measure its effects on the learning process.
We find that training speakers with a highly weighted ToM listener component leads to performance gains in our image referential game setting.
arXiv Detail & Related papers (2023-03-02T18:59:46Z)
- Improving Children's Speech Recognition by Fine-tuning Self-supervised Adult Speech Representations [2.2191297646252646]
Children's speech recognition is a vital, yet largely overlooked domain when building inclusive speech technologies.
Recent advances in self-supervised learning have created a new opportunity for overcoming this problem of data scarcity.
We leverage self-supervised adult speech representations and use three well-known child speech corpora to build models for children's speech recognition.
arXiv Detail & Related papers (2022-11-14T22:03:36Z)
- How Adults Understand What Young Children Say [1.416276307599112]
Children's early speech often bears little resemblance to adult speech in form or content, and yet caregivers often find meaning in young children's utterances.
We propose that successful early communication relies not just on children's growing linguistic knowledge, but also on adults' sophisticated inferences.
arXiv Detail & Related papers (2022-06-15T20:37:32Z)
- TalkTive: A Conversational Agent Using Backchannels to Engage Older Adults in Neurocognitive Disorders Screening [51.97352212369947]
We analyzed 246 conversations of cognitive assessments between older adults and human assessors.
We derived the categories of reactive backchannels and proactive backchannels.
This is used in the development of TalkTive, a CA which can predict both timing and form of backchanneling.
arXiv Detail & Related papers (2022-02-16T17:55:34Z)
- Perception Point: Identifying Critical Learning Periods in Speech for Bilingual Networks [58.24134321728942]
We identify and compare cognitive aspects of deep neural-network-based visual lip-reading models.
We observe a strong correlation between these theories in cognitive psychology and our modeling.
arXiv Detail & Related papers (2021-10-13T05:30:50Z)
- Analysis of Disfluency in Children's Speech [25.68434431663045]
We present a novel dataset with annotated disfluencies of spontaneous explanations from 26 children (ages 5-8).
Children have higher disfluency and filler rates, tend to use nasal filled pauses more frequently, and on average exhibit longer reparandums than repairs.
Despite the differences, an automatic disfluency detection system trained on adult (Switchboard) speech transcripts performs reasonably well on children's speech.
arXiv Detail & Related papers (2020-10-08T22:51:25Z)
- Learning to Understand Child-directed and Adult-directed Speech [18.29692441616062]
Human language acquisition research indicates that child-directed speech helps language learners.
We compare the task performance of models trained on adult-directed speech (ADS) and child-directed speech (CDS).
We find indications that CDS helps in the initial stages of learning, but that models trained on ADS eventually reach comparable task performance and generalize better.
arXiv Detail & Related papers (2020-05-06T10:47:02Z)