Human Inspired Progressive Alignment and Comparative Learning for
Grounded Word Acquisition
- URL: http://arxiv.org/abs/2307.02615v1
- Date: Wed, 5 Jul 2023 19:38:04 GMT
- Title: Human Inspired Progressive Alignment and Comparative Learning for
Grounded Word Acquisition
- Authors: Yuwei Bao, Barrett Martin Lattimer, Joyce Chai
- Abstract summary: We take inspiration from how human babies acquire their first language and develop a computational process for word acquisition through comparative learning.
Motivated by cognitive findings, we generated a small dataset that enables computational models to compare the similarities and differences of various attributes.
We frame the acquisition of words not only as an information filtration process, but also as representation-symbol mapping.
- Score: 6.47452771256903
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human language acquisition is an efficient, supervised, and continual
process. In this work, we took inspiration from how human babies acquire their
first language, and developed a computational process for word acquisition
through comparative learning. Motivated by cognitive findings, we generated a
small dataset that enables computational models to compare the similarities
and differences of various attributes, and to learn to filter out and extract
the common information for each shared linguistic label. We frame the
acquisition of words not only as an information filtration process, but also
as representation-symbol mapping. This procedure involves neither a fixed
vocabulary size nor a discriminative objective, and allows the models to
continually learn more concepts efficiently. Our results in controlled
experiments have shown the potential of this approach for efficient continual
learning of grounded words.
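The abstract gives no code, but its core idea, comparing same-label and different-label observations to filter out the attribute dimensions a word refers to, with a vocabulary that grows on the fly, can be illustrated with a small sketch. Everything below (the masked-prototype lexicon, the attribute vectors, the naming rule) is an illustrative assumption, not the authors' implementation.

```python
# Hedged sketch of comparative word acquisition (not the authors' code).
# Assumptions: each observation is a fixed-size attribute vector; "comparative
# learning" is approximated by contrasting same-label and different-label
# examples to estimate, per word, which attribute dimensions carry the shared
# information, then storing a masked prototype as the word's grounding.
import numpy as np

class ComparativeLexicon:
    def __init__(self, lr=0.1):
        self.lr = lr
        self.prototypes = {}   # word -> running mean of attribute vectors
        self.relevance = {}    # word -> soft mask over attribute dimensions

    def observe(self, word, positives, negatives):
        """positives: (n, dim) examples labelled with `word`;
        negatives: (m, dim) contrasting examples with other labels."""
        pos = np.asarray(positives, dtype=float)
        neg = np.asarray(negatives, dtype=float)
        # Dimensions that vary little among positives but differ from the
        # negatives are treated as the "common information" for this word.
        within_var = pos.var(axis=0) + 1e-6
        between_gap = np.abs(pos.mean(axis=0) - neg.mean(axis=0))
        mask = between_gap / (between_gap + within_var)      # in (0, 1)
        proto = pos.mean(axis=0)
        if word not in self.prototypes:                      # open vocabulary:
            self.prototypes[word] = proto                    # new words are
            self.relevance[word] = mask                      # added on the fly
        else:
            self.prototypes[word] += self.lr * (proto - self.prototypes[word])
            self.relevance[word] += self.lr * (mask - self.relevance[word])

    def name(self, attributes):
        """Map a new observation back to the closest acquired word."""
        x = np.asarray(attributes, dtype=float)
        scores = {
            w: -np.sum(self.relevance[w] * (x - self.prototypes[w]) ** 2)
            for w in self.prototypes
        }
        return max(scores, key=scores.get) if scores else None

# Toy usage: two "words" grounded in 4-dimensional attribute vectors.
rng = np.random.default_rng(0)
lex = ComparativeLexicon()
red = rng.normal([1.0, 0.0, 0.5, 0.5], 0.1, size=(8, 4))
blue = rng.normal([0.0, 1.0, 0.5, 0.5], 0.1, size=(8, 4))
lex.observe("red", red, blue)
lex.observe("blue", blue, red)
print(lex.name([0.95, 0.05, 0.4, 0.6]))   # -> "red"
```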
Related papers
- Reframing linguistic bootstrapping as joint inference using visually-grounded grammar induction models [31.006803764376475]
Semantic and syntactic bootstrapping posit that children use their prior knowledge of one linguistic domain, say syntactic relations, to help later acquire another, such as the meanings of new words.
Here, we argue that they are instead both contingent on a more general learning strategy for language acquisition: joint learning.
Using a series of neural visually-grounded grammar induction models, we demonstrate that both syntactic and semantic bootstrapping effects are strongest when syntax and semantics are learnt simultaneously.
arXiv Detail & Related papers (2024-06-17T18:01:06Z)
- Self-Supervised Representation Learning with Spatial-Temporal Consistency for Sign Language Recognition [96.62264528407863]
We propose a self-supervised contrastive learning framework to excavate rich context via spatial-temporal consistency.
Inspired by the complementary property of motion and joint modalities, we first introduce first-order motion information into sign language modeling.
Our method is evaluated with extensive experiments on four public benchmarks, and achieves new state-of-the-art performance with a notable margin.
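A minimal illustration of the kind of contrastive objective this summary describes, pairing a joint (pose) view with a first-order motion view of the same clip under InfoNCE; the encoder, data shapes, and loss form below are placeholders and assumptions, not the paper's architecture.

```python
# Assumption-laden sketch of spatial-temporal contrastive learning.
# Pose sequences are taken to be (T, J, 2) keypoint arrays, and "first-order
# motion" is approximated by frame differences; the two views of each clip
# should agree under an InfoNCE objective.
import numpy as np

def motion_view(pose_seq):
    """First-order motion information: frame-to-frame keypoint differences."""
    return np.diff(pose_seq, axis=0)

def clip_embedding(x):
    """Placeholder encoder: flatten joints, average over time, l2-normalise."""
    feat = x.reshape(x.shape[0], -1).mean(axis=0)
    return feat / (np.linalg.norm(feat) + 1e-8)

def info_nce(anchors, positives, temperature=0.1):
    """anchors[i] should match positives[i]; all other rows act as negatives."""
    logits = anchors @ positives.T / temperature           # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy batch of 4 clips, 16 frames, 21 joints in 2-D.
rng = np.random.default_rng(0)
clips = rng.normal(size=(4, 16, 21, 2))
joint_feats = np.stack([clip_embedding(c) for c in clips])
motion_feats = np.stack([clip_embedding(motion_view(c)) for c in clips])
print("contrastive loss:", info_nce(joint_feats, motion_feats))
```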
arXiv Detail & Related papers (2024-06-15T04:50:19Z)
- Babysit A Language Model From Scratch: Interactive Language Learning by Trials and Demonstrations [15.394018604836774]
We introduce a trial-and-demonstration (TnD) learning framework that incorporates three components: student trials, teacher demonstrations, and a reward conditioned on language competence.
Our experiments reveal that the TnD approach accelerates word acquisition for student models of equal or smaller numbers of parameters.
Our findings suggest that interactive language learning, with teacher demonstrations and student trials, can facilitate efficient word learning in language models.
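To make the trial-and-demonstration loop concrete, here is a toy sketch that uses bigram stand-ins for the student and teacher models; the reward definition and the update rule are illustrative assumptions, not the paper's method.

```python
# Rough sketch of a TnD-style loop: student trials, teacher demonstrations,
# and a reward tied to language competence. All components are toy stand-ins.
import random
from collections import defaultdict

class BigramLM:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(float))

    def update(self, tokens, weight=1.0):
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += weight

    def generate(self, start, length=5):
        out, cur = [start], start
        for _ in range(length):
            nxt = self.counts[cur]
            if nxt:
                words, weights = zip(*nxt.items())
                cur = random.choices(words, weights=weights)[0]
            else:
                cur = start
            out.append(cur)
        return out

def competence_reward(trial, demonstration):
    """Reward: fraction of the teacher's words the student already produces."""
    return len(set(trial) & set(demonstration)) / max(len(set(demonstration)), 1)

teacher = BigramLM()                                   # assumed already competent
teacher.update("the red ball rolls on the green grass".split())

student = BigramLM()
for step in range(20):
    trial = student.generate("the")                    # student trial
    demo = teacher.generate("the")                     # teacher demonstration
    r = competence_reward(trial, demo)                 # competence-based reward
    student.update(trial, weight=r)                    # reinforce good trials
    student.update(demo, weight=1.0)                   # imitate demonstrations
print("student sample:", " ".join(student.generate("the")))
```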
arXiv Detail & Related papers (2024-05-22T16:57:02Z)
- Pixel Sentence Representation Learning [67.4775296225521]
In this work, we conceptualize the learning of sentence-level textual semantics as a visual representation learning process.
We employ visually-grounded text perturbation methods like typos and word order shuffling, resonating with human cognitive patterns, and enabling perturbation to be perceived as continuous.
Our approach is further bolstered by large-scale unsupervised topical alignment training and natural language inference supervision.
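The perturbation types named in the summary (typos, word-order shuffling) can be sketched as simple functions that generate positive views of a sentence for a pixel-level encoder; the concrete procedures below are assumptions, not the paper's pipeline.

```python
# Illustrative text perturbations used to build positive pairs; a rendered,
# pixel-level encoder would be trained to map these views close to the anchor.
import random

def typo_perturb(sentence, rate=0.1, seed=0):
    """Swap adjacent characters inside words with probability `rate`."""
    rng = random.Random(seed)
    words = []
    for w in sentence.split():
        chars = list(w)
        for i in range(len(chars) - 1):
            if rng.random() < rate:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        words.append("".join(chars))
    return " ".join(words)

def shuffle_perturb(sentence, window=3, seed=0):
    """Shuffle word order within small local windows."""
    rng = random.Random(seed)
    words = sentence.split()
    out = []
    for i in range(0, len(words), window):
        chunk = words[i:i + window]
        rng.shuffle(chunk)
        out.extend(chunk)
    return " ".join(out)

anchor = "the quick brown fox jumps over the lazy dog"
for view in (typo_perturb(anchor, rate=0.3), shuffle_perturb(anchor)):
    print(view)   # perturbed views that should stay close to the anchor
```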
arXiv Detail & Related papers (2024-02-13T02:46:45Z)
- Enhancing Context Through Contrast [0.4068270792140993]
We propose a novel Context Enhancement step to improve performance on neural machine translation.
Unlike other approaches, we do not explicitly augment the data but view languages as implicit augmentations.
Our method does not learn embeddings from scratch and can be generalised to any set of pre-trained embeddings.
arXiv Detail & Related papers (2024-01-06T22:13:51Z)
- Visual Grounding Helps Learn Word Meanings in Low-Data Regimes [47.7950860342515]
Modern neural language models (LMs) are powerful tools for modeling human sentence production and comprehension.
But to achieve these results, LMs must be trained in distinctly un-human-like ways.
Do models trained more naturalistically -- with grounded supervision -- exhibit more humanlike language learning?
We investigate this question in the context of word learning, a key sub-task in language acquisition.
arXiv Detail & Related papers (2023-10-20T03:33:36Z)
- Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval [109.62363167257664]
We propose a generative model for learning multilingual text embeddings.
Our model operates on parallel data in $N$ languages.
We evaluate this method on a suite of tasks including semantic similarity, bitext mining, and cross-lingual question retrieval.
arXiv Detail & Related papers (2022-12-21T02:41:40Z)
- Neural Variational Learning for Grounded Language Acquisition [14.567067583556714]
We propose a learning system in which language is grounded in visual percepts without specific pre-defined categories of terms.
We show that this generative approach exhibits promising results in language grounding without pre-specifying visual categories under low resource settings.
arXiv Detail & Related papers (2021-07-20T20:55:02Z)
- Neural Abstructions: Abstractions that Support Construction for Grounded Language Learning [69.1137074774244]
Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding.
We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model.
We show that with this method a user population is able to build a semantic modification for an open-ended house task in Minecraft.
arXiv Detail & Related papers (2021-07-20T07:01:15Z)
- Understanding Synonymous Referring Expressions via Contrastive Features [105.36814858748285]
We develop an end-to-end trainable framework to learn contrastive features on the image and object instance levels.
We conduct extensive experiments to evaluate the proposed algorithm on several benchmark datasets.
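As a rough illustration of instance-level contrastive features for referring expression grounding (the encoders and the loss form below are assumptions, not the paper's architecture):

```python
# Sketch: pull the referred object instance toward the expression embedding
# and push the other instances in the same image away.
import numpy as np

def normalize(x):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

def instance_contrastive_loss(expr_emb, object_embs, target_idx, temp=0.1):
    """Instance-level contrast: the target object is the positive, the other
    detected objects in the image serve as negatives."""
    sims = normalize(object_embs) @ normalize(expr_emb)     # (num_objects,)
    logits = sims / temp
    logits -= logits.max()
    log_prob = logits - np.log(np.exp(logits).sum())
    return -log_prob[target_idx]

rng = np.random.default_rng(0)
expr = rng.normal(size=64)           # embedding of a referring expression
objects = rng.normal(size=(5, 64))   # embeddings of 5 detected instances
print(instance_contrastive_loss(expr, objects, target_idx=2))
```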
arXiv Detail & Related papers (2021-04-20T17:56:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.