Competition in Cross-situational Word Learning: A Computational Study
- URL: http://arxiv.org/abs/2012.03370v1
- Date: Sun, 6 Dec 2020 20:32:56 GMT
- Title: Competition in Cross-situational Word Learning: A Computational Study
- Authors: Aida Nematzadeh, Zahra Shekarchi, Thomas L. Griffiths, and Suzanne
Stevenson
- Abstract summary: Children learn word meanings by tapping into the commonalities across different situations in which words are used.
In a set of computational studies, we show that to successfully learn word meanings in the face of uncertainty, a learner needs to use two types of competition.
- Score: 10.069127121936281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Children learn word meanings by tapping into the commonalities across
different situations in which words are used and overcome the high level of
uncertainty involved in early word learning experiences. In a set of
computational studies, we show that to successfully learn word meanings in the
face of uncertainty, a learner needs to use two types of competition: words
competing for association to a referent when learning from an observation and
referents competing for a word when the word is used.
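The two competition mechanisms described in the abstract can be illustrated with a toy associative learner. This is a hypothetical simplification, not the authors' actual model: the class name, the additive update rule, and the smoothing constant are all illustrative assumptions.

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Toy sketch of cross-situational word learning with two
    kinds of competition (illustrative, not the paper's model)."""

    def __init__(self):
        # assoc[word][referent]: accumulated association strength,
        # seeded with a tiny constant so unseen pairs are not zero.
        self.assoc = defaultdict(lambda: defaultdict(lambda: 1e-6))

    def observe(self, words, referents):
        """One learning event: an utterance plus a visual scene."""
        # Competition 1: for each referent in the scene, the words
        # in the utterance compete for alignment to it (credit is
        # normalized over the competing words).
        for r in referents:
            total = sum(self.assoc[w][r] for w in words)
            for w in words:
                self.assoc[w][r] += self.assoc[w][r] / total

    def meaning(self, word):
        """Interpret a used word."""
        # Competition 2: when the word is used, its candidate
        # referents compete (normalized over referents).
        total = sum(self.assoc[word].values())
        return {r: a / total for r, a in self.assoc[word].items()}
```

Training such a learner on a few ambiguous scenes (e.g. the utterance `["ball", "dog"]` paired with the referents `{BALL, DOG}`, then `["ball", "cat"]` with `{BALL, CAT}`) lets the cross-situational overlap resolve the ambiguity, so that `meaning("ball")` concentrates its probability on `BALL`.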
Related papers
- From cart to truck: meaning shift through words in English in the last two centuries [0.0]
This onomasiological study uses diachronic word embeddings to explore how different words represented the same concepts over time.
We identify shifts in energy, transport, entertainment, and computing domains, revealing connections between language and societal changes.
arXiv Detail & Related papers (2024-08-29T02:05:39Z)
- Reframing linguistic bootstrapping as joint inference using visually-grounded grammar induction models [31.006803764376475]

Semantic and syntactic bootstrapping posit that children use their prior knowledge of one linguistic domain, say syntactic relations, to help later acquire another, such as the meanings of new words.
Here, we argue that they are instead both contingent on a more general learning strategy for language acquisition: joint learning.
Using a series of neural visually-grounded grammar induction models, we demonstrate that both syntactic and semantic bootstrapping effects are strongest when syntax and semantics are learnt simultaneously.
arXiv Detail & Related papers (2024-06-17T18:01:06Z)
- Storyfier: Exploring Vocabulary Learning Support with Text Generation Models [52.58844741797822]
We develop Storyfier to provide a coherent context for any target words of learners' interests.
Learners generally favor the generated stories for connecting target words and the writing assistance for easing their learning workload.
In read-cloze-write learning sessions, participants using Storyfier perform worse in recalling and using target words than learning with a baseline tool without our AI features.
arXiv Detail & Related papers (2023-08-07T18:25:00Z)
- Towards Open Vocabulary Learning: A Survey [146.90188069113213]
Deep neural networks have made impressive advancements in various core tasks like segmentation, tracking, and detection.
Recently, open vocabulary settings were proposed due to the rapid progress of vision language pre-training.
This paper provides a thorough review of open vocabulary learning, summarizing and analyzing recent developments in the field.
arXiv Detail & Related papers (2023-06-28T02:33:06Z)
- BabySLM: language-acquisition-friendly benchmark of self-supervised spoken language models [56.93604813379634]
Self-supervised techniques for learning speech representations have been shown to develop linguistic competence from exposure to speech without the need for human labels.
We propose a language-acquisition-friendly benchmark to probe spoken language models at the lexical and syntactic levels.
We highlight two exciting challenges that need to be addressed for further progress: bridging the gap between text and speech and between clean speech and in-the-wild speech.
arXiv Detail & Related papers (2023-06-02T12:54:38Z)
- Neighboring Words Affect Human Interpretation of Saliency Explanations [65.29015910991261]
Word-level saliency explanations are often used to communicate feature-attribution in text-based models.
Recent studies found that superficial factors such as word length can distort human interpretation of the communicated saliency scores.
We investigate how the marking of a word's neighboring words affects the explainee's perception of that word's importance in the context of a saliency explanation.
arXiv Detail & Related papers (2023-05-04T09:50:25Z)
- Predicting Word Learning in Children from the Performance of Computer Vision Systems [24.49899952381515]
We show that the age at which children acquire different categories of words is correlated with the performance of visual classification and captioning systems.
The performance of the computer vision systems is correlated with human judgments of the concreteness of words, which are in turn a predictor of children's word learning.
arXiv Detail & Related papers (2022-07-07T22:49:32Z)
- Keywords and Instances: A Hierarchical Contrastive Learning Framework Unifying Hybrid Granularities for Text Generation [59.01297461453444]
We propose a hierarchical contrastive learning mechanism, which can unify hybrid granularities semantic meaning in the input text.
Experiments demonstrate that our model outperforms competitive baselines on paraphrasing, dialogue generation, and storytelling tasks.
arXiv Detail & Related papers (2022-05-26T13:26:03Z)
- Using Known Words to Learn More Words: A Distributional Analysis of Child Vocabulary Development [0.0]
We investigated item-based variability in vocabulary development using lexical properties of distributional statistics.
We predicted word trajectories cross-sectionally, shedding light on trends in vocabulary development that may not have been evident at a single time point.
We also show that whether one looks at a single age group or across ages as a whole, the best distributional predictor of whether a child knows a word is the number of other known words with which that word tends to co-occur.
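That distributional predictor can be sketched as a small helper function. This is a hypothetical illustration, not the study's actual measure: the function name, the utterance representation, and the decision to count distinct co-occurring known words are all assumptions made for the example.

```python
def cooccurrence_predictor(utterances, known_words, target):
    """Count how many distinct already-known words co-occur with
    `target` in the same utterance (illustrative sketch only)."""
    partners = set()
    for utt in utterances:
        if target in utt:
            # Collect known words appearing alongside the target.
            partners.update(w for w in utt if w != target and w in known_words)
    return len(partners)
```

For example, given the utterances `["look", "at", "the", "dog"]` and `["the", "dog", "runs"]` with known words `{"the", "look", "runs"}`, the predictor for "dog" is 3, since three distinct known words co-occur with it.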
arXiv Detail & Related papers (2020-09-15T01:18:21Z)
- Learning to Recognise Words using Visually Grounded Speech [15.972015648122914]
The model has been trained on pairs of images and spoken captions to create visually grounded embeddings.
We investigate whether such a model can be used to recognise words by embedding isolated words and using them to retrieve images of their visual referents.
arXiv Detail & Related papers (2020-05-31T12:48:37Z)
- On Vocabulary Reliance in Scene Text Recognition [79.21737876442253]
Methods perform well on images containing in-vocabulary words but generalize poorly to images with out-of-vocabulary words.
We call this phenomenon "vocabulary reliance".
We propose a simple yet effective mutual learning strategy to allow models of two families to learn collaboratively.
arXiv Detail & Related papers (2020-05-08T11:16:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.