An Attention-Based Model for Predicting Contextual Informativeness and Curriculum Learning Applications
- URL: http://arxiv.org/abs/2204.09885v2
- Date: Thu, 9 Nov 2023 06:57:41 GMT
- Title: An Attention-Based Model for Predicting Contextual Informativeness and Curriculum Learning Applications
- Authors: Sungjin Nam, David Jurgens, Gwen Frishkoff, Kevyn Collins-Thompson
- Abstract summary: We develop models for estimating contextual informativeness, focusing on the instructional aspect of sentences.
We show how our model identifies key contextual elements in a sentence that are likely to contribute most to a reader's understanding of the target word.
We believe our results open new possibilities for applications that support language learning for both human and machine learners.
- Score: 11.775048147405725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Both humans and machines learn the meaning of unknown words through
contextual information in a sentence, but not all contexts are equally helpful
for learning. We introduce an effective method for capturing the level of
contextual informativeness with respect to a given target word. Our study makes
three main contributions. First, we develop models for estimating contextual
informativeness, focusing on the instructional aspect of sentences. Our
attention-based approach using pre-trained embeddings demonstrates
state-of-the-art performance on our single-context dataset and an existing
multi-sentence context dataset. Second, we show how our model identifies key
contextual elements in a sentence that are likely to contribute most to a
reader's understanding of the target word. Third, we examine how our contextual
informativeness model, originally developed for vocabulary learning
applications for students, can be used for developing better training curricula
for word embedding models in batch learning and few-shot machine learning
settings. We believe our results open new possibilities for applications that
support language learning for both human and machine learners.
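The paper's code is not reproduced here, but the core idea admits a minimal sketch: an attention mechanism over pre-trained context-word embeddings, queried by the target word, feeds a small regressor that outputs an informativeness score, and those scores can then order a training curriculum. All layer sizes, the pooling scheme, and the toy data below are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code): an attention-based regressor that
# scores how informative a sentence context is for a given target word.
import torch
import torch.nn as nn

class InformativenessScorer(nn.Module):
    def __init__(self, emb_dim: int = 300, hidden: int = 128):
        super().__init__()
        # Scores each context token with respect to the target word.
        self.attn = nn.Linear(2 * emb_dim, 1)
        self.regressor = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, context_embs: torch.Tensor, target_emb: torch.Tensor):
        # context_embs: (seq_len, emb_dim) pre-trained embeddings of context words
        # target_emb:   (emb_dim,) pre-trained embedding of the target word
        tgt = target_emb.expand(context_embs.size(0), -1)
        weights = torch.softmax(
            self.attn(torch.cat([context_embs, tgt], dim=-1)).squeeze(-1), dim=0
        )
        pooled = (weights.unsqueeze(-1) * context_embs).sum(dim=0)
        score = torch.sigmoid(self.regressor(pooled)).squeeze(-1)
        return score, weights  # weights highlight the most informative context words

# Curriculum sketch: order training sentences from most to least informative.
scorer = InformativenessScorer()
sentences = [(torch.randn(12, 300), torch.randn(300)) for _ in range(5)]  # toy data
scores = [scorer(ctx, tgt)[0].item() for ctx, tgt in sentences]
curriculum = [i for _, i in sorted(zip(scores, range(len(sentences))), reverse=True)]
```

In this sketch, the returned attention weights play the role of the paper's second contribution: indicating which context words likely contribute most to a reader's understanding of the target.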
Related papers
- Autoregressive Pre-Training on Pixels and Texts [35.82610192457444]
We explore the dual modality of language--both visual and textual--within an autoregressive framework, pre-trained on both document images and texts.
Our method employs a multimodal training strategy, utilizing visual data through next patch prediction with a regression head and/or textual data through next token prediction with a classification head.
We find that a unidirectional pixel-based model trained solely on visual data can achieve comparable results to state-of-the-art bidirectional models on several language understanding tasks.
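A rough sketch of the dual-head setup this summary describes: a shared autoregressive backbone with a regression head for next-patch prediction on image data and a classification head for next-token prediction on text. The tiny Transformer and all sizes are assumptions, not the paper's model; training would pair an MSE loss on patches with cross-entropy on tokens.

```python
# Sketch of a shared autoregressive backbone with two prediction heads
# (regression over next image patches, classification over next tokens).
import torch
import torch.nn as nn

class DualHeadAutoregressor(nn.Module):
    def __init__(self, d_model=256, vocab=30000, patch_dim=768):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.tok_emb = nn.Embedding(vocab, d_model)
        self.patch_in = nn.Linear(patch_dim, d_model)
        self.patch_head = nn.Linear(d_model, patch_dim)  # regression: next patch
        self.token_head = nn.Linear(d_model, vocab)      # classification: next token

    def _causal(self, n):
        # Upper-triangular -inf mask enforces left-to-right (unidirectional) attention.
        return torch.triu(torch.full((n, n), float("-inf")), diagonal=1)

    def forward_text(self, tokens):            # tokens: (B, T) int64
        h = self.backbone(self.tok_emb(tokens), mask=self._causal(tokens.size(1)))
        return self.token_head(h)              # (B, T, vocab) next-token logits

    def forward_pixels(self, patches):         # patches: (B, T, patch_dim) floats
        h = self.backbone(self.patch_in(patches), mask=self._causal(patches.size(1)))
        return self.patch_head(h)              # (B, T, patch_dim) next-patch predictions

model = DualHeadAutoregressor()
txt_logits = model.forward_text(torch.randint(0, 30000, (2, 16)))
next_patch = model.forward_pixels(torch.randn(2, 16, 768))
```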
arXiv Detail & Related papers (2024-04-16T16:36:50Z)
- Large Language Model Augmented Exercise Retrieval for Personalized Language Learning [2.946562343070891]
We find that vector similarity approaches poorly capture the relationship between exercise content and the language that learners use to express what they want to learn.
We leverage the generative capabilities of large language models to bridge the gap by synthesizing hypothetical exercises based on the learner's input.
Our approach, which we call mHyER, overcomes three challenges: (1) lack of relevance labels for training, (2) unrestricted learner input content, and (3) low semantic similarity between input and retrieval candidates.
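A minimal sketch of the retrieval flow this summary describes, in the spirit of hypothetical-document retrieval: synthesize a hypothetical exercise from the learner's request, then retrieve real exercises by embedding similarity. Here `generate_exercise` and `embed` are toy stand-ins for an LLM call and a sentence encoder, not the paper's actual API.

```python
# Sketch: bridge the semantic gap between learner input and exercise content
# by retrieving against a synthesized hypothetical exercise.
import numpy as np

def generate_exercise(learner_input: str) -> str:
    # Placeholder for an LLM prompt such as:
    # "Write a short language exercise that teaches: {learner_input}"
    return f"Fill in the blank: ... ({learner_input})"

def embed(text: str) -> np.ndarray:
    # Placeholder for a real sentence encoder; hash-seeded toy embedding here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def retrieve(learner_input: str, exercises: list[str], k: int = 3) -> list[str]:
    query = embed(generate_exercise(learner_input))   # hypothetical exercise as query
    sims = [float(query @ embed(e)) for e in exercises]
    top = np.argsort(sims)[::-1][:k]
    return [exercises[i] for i in top]
```

Any real encoder and LLM can be swapped in for the placeholders without changing the flow.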
arXiv Detail & Related papers (2024-02-08T20:35:31Z)
- Less is More: A Closer Look at Semantic-based Few-Shot Learning [11.724194320966959]
Few-shot learning aims to recognize and distinguish new categories from a very limited number of available images.
We propose a simple but effective framework for few-shot learning tasks, specifically designed to exploit the textual information and language model.
Our experiments conducted across four widely used few-shot datasets demonstrate that our simple framework achieves impressive results.
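One common way to exploit textual information and a language model in few-shot classification is to blend class-name embeddings into visual prototypes. The sketch below illustrates that idea under assumed encoders and a hand-set fusion weight; it is not necessarily the paper's exact framework.

```python
# Sketch: fuse each class's visual prototype with a language-model embedding
# of its name, then classify queries by cosine-nearest prototype.
import torch
import torch.nn.functional as F

def fused_prototypes(support_feats, text_feats, alpha=0.7):
    # support_feats: (n_classes, n_shot, d) image features from a vision encoder
    # text_feats:    (n_classes, d) class-name embeddings from a language model
    visual = support_feats.mean(dim=1)                 # per-class visual prototype
    proto = alpha * visual + (1 - alpha) * text_feats  # semantic fusion
    return F.normalize(proto, dim=-1)

def classify(query_feats, prototypes):
    q = F.normalize(query_feats, dim=-1)
    return (q @ prototypes.T).argmax(dim=-1)           # cosine nearest prototype
```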
arXiv Detail & Related papers (2024-01-10T08:56:02Z)
- Storyfier: Exploring Vocabulary Learning Support with Text Generation Models [52.58844741797822]
We develop Storyfier to provide a coherent context for any target words of learners' interests.
Learners generally favor the generated stories for connecting target words and the writing assistance for easing their learning workload.
However, in read-cloze-write learning sessions, participants using Storyfier perform worse in recalling and using target words than those learning with a baseline tool without our AI features.
arXiv Detail & Related papers (2023-08-07T18:25:00Z)
- Human Inspired Progressive Alignment and Comparative Learning for Grounded Word Acquisition [6.47452771256903]
We take inspiration from how human babies acquire their first language and develop a computational process for word acquisition through comparative learning.
Motivated by cognitive findings, we generate a small dataset that enables computational models to compare the similarities and differences of various attributes.
We frame word acquisition not only as an information-filtering process, but also as representation-symbol mapping.
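A minimal sketch of what representation-symbol mapping via comparative learning could look like: an InfoNCE-style objective that pulls a word's embedding toward an object it describes and away from contrasting objects. This is a generic formulation for illustration, not the paper's exact model.

```python
# Sketch: comparative (contrastive) word grounding. The word symbol's
# embedding is aligned with positive object representations and repelled
# from negatives that differ in the relevant attributes.
import torch
import torch.nn.functional as F

def comparative_loss(word_emb, positive_obj, negative_objs, temp=0.1):
    # word_emb:      (d,)   embedding of the word symbol
    # positive_obj:  (d,)   representation of an object the word describes
    # negative_objs: (n, d) representations of contrasting objects
    w = F.normalize(word_emb, dim=-1)
    pos = torch.exp(w @ F.normalize(positive_obj, dim=-1) / temp)
    neg = torch.exp(w @ F.normalize(negative_objs, dim=-1).T / temp).sum()
    return -torch.log(pos / (pos + neg))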
arXiv Detail & Related papers (2023-07-05T19:38:04Z)
- Towards Open Vocabulary Learning: A Survey [146.90188069113213]
Deep neural networks have made impressive advancements in various core tasks like segmentation, tracking, and detection.
Recently, open vocabulary settings were proposed due to the rapid progress of vision language pre-training.
This paper provides a thorough review of open vocabulary learning, summarizing and analyzing recent developments in the field.
arXiv Detail & Related papers (2023-06-28T02:33:06Z)
- VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer [76.3906723777229]
We present VidLanKD, a video-language knowledge distillation method for improving language understanding.
We train a multi-modal teacher model on a video-text dataset, and then transfer its knowledge to a student language model with a text dataset.
In our experiments, VidLanKD achieves consistent improvements over text-only language models and vokenization models.
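The teacher-to-student transfer step could look like the generic sketch below, where a frozen multimodal teacher supervises a text-only student via softened logits. The KL objective is a standard distillation choice, not necessarily the paper's exact losses.

```python
# Sketch: one knowledge-distillation step from a (frozen) video-text teacher
# to a text-only student, on text inputs only.
import torch
import torch.nn.functional as F

def distill_step(teacher, student, tokens, optimizer, T=2.0):
    with torch.no_grad():
        t_logits = teacher(tokens)              # teacher pre-trained on video+text
    s_logits = student(tokens)                  # text-only student
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),    # student log-probs
        F.softmax(t_logits / T, dim=-1),        # teacher soft labels
        reduction="batchmean",
    ) * (T * T)                                 # standard temperature scaling
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```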
arXiv Detail & Related papers (2021-07-06T15:41:32Z)
- Neuro-Symbolic Representations for Video Captioning: A Case for Leveraging Inductive Biases for Vision and Language [148.0843278195794]
We propose a new model architecture for learning multi-modal neuro-symbolic representations for video captioning.
Our approach uses a dictionary learning-based method of learning relations between videos and their paired text descriptions.
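A sketch of the dictionary-learning idea: video and text embeddings are each reconstructed as sparse combinations of a shared, learnable set of atoms, so the atoms come to encode relations common to both modalities. Sizes and loss weights below are illustrative assumptions, not the paper's architecture.

```python
# Sketch: a shared dictionary whose atoms reconstruct both video and text
# embeddings, with an L1 penalty encouraging sparse codes.
import torch
import torch.nn as nn

class SharedDictionary(nn.Module):
    def __init__(self, n_atoms=64, d=256):
        super().__init__()
        self.atoms = nn.Parameter(torch.randn(n_atoms, d) * 0.02)
        self.enc_v = nn.Linear(d, n_atoms)   # sparse codes for video features
        self.enc_t = nn.Linear(d, n_atoms)   # sparse codes for text features

    def forward(self, video_emb, text_emb, l1=1e-3):
        cv = torch.relu(self.enc_v(video_emb))
        ct = torch.relu(self.enc_t(text_emb))
        rec_v, rec_t = cv @ self.atoms, ct @ self.atoms
        recon = ((rec_v - video_emb) ** 2).mean() + ((rec_t - text_emb) ** 2).mean()
        sparsity = l1 * (cv.abs().mean() + ct.abs().mean())
        return recon + sparsity
```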
arXiv Detail & Related papers (2020-11-18T20:21:19Z)
- Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
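A minimal sketch of an entity masking scheme: rather than masking random subword positions, whole entity spans (as located by a knowledge-graph-guided linker) are masked for the MLM objective. The span format and mask rate here are assumptions, not the paper's exact scheme.

```python
# Sketch: mask entire entity spans instead of individual random subwords,
# returning masked inputs and the positions/labels to predict.
import random

def entity_mask(tokens, entity_spans, mask_token="[MASK]", rate=0.15):
    # tokens: list of subword strings; entity_spans: list of (start, end) pairs
    masked, targets = list(tokens), {}
    n_to_mask = max(1, int(rate * len(entity_spans))) if entity_spans else 0
    for start, end in random.sample(entity_spans, n_to_mask):
        for i in range(start, end):              # mask the whole entity span
            targets[i] = masked[i]
            masked[i] = mask_token
    return masked, targets                       # MLM inputs and labels

tokens = "barack obama was born in hawaii".split()
print(entity_mask(tokens, [(0, 2), (5, 6)]))
```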
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.