Storyfier: Exploring Vocabulary Learning Support with Text Generation Models
- URL: http://arxiv.org/abs/2308.03864v1
- Date: Mon, 7 Aug 2023 18:25:00 GMT
- Title: Storyfier: Exploring Vocabulary Learning Support with Text Generation Models
- Authors: Zhenhui Peng, Xingbo Wang, Qiushi Han, Junkai Zhu, Xiaojuan Ma, and Huamin Qu
- Abstract summary: We develop Storyfier to provide a coherent context for any target words of learners' interests.
Learners generally favor the generated stories for connecting target words and the writing assistance for easing their learning workload.
In read-cloze-write learning sessions, participants using Storyfier perform worse in recalling and using target words than learning with a baseline tool without our AI features.
- Score: 52.58844741797822
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vocabulary learning support tools have widely exploited existing materials,
e.g., stories or video clips, as contexts to help users memorize each target
word. However, these tools could not provide a coherent context for any target
words of learners' interests, and they seldom help practice word usage. In this
paper, we work with teachers and students to iteratively develop Storyfier,
which leverages text generation models to enable learners to read a generated
story that covers any target words, conduct a story cloze test, and use these
words to write a new story with adaptive AI assistance. Our within-subjects
study (N=28) shows that learners generally favor the generated stories for
connecting target words and writing assistance for easing their learning
workload. However, in the read-cloze-write learning sessions, participants
using Storyfier perform worse in recalling and using target words than learning
with a baseline tool without our AI features. We discuss insights into
supporting learning tasks with generative models.
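The paper itself does not ship code; as a rough illustration of the read-and-cloze idea it describes, the sketch below builds a story-generation prompt from target words and then blanks those words out for a cloze test. The function `generate_story` is a hypothetical stand-in for whatever text generation model is used, and the prompt wording is ours, not the authors'.
```python
import re

# Hypothetical sketch of a Storyfier-style read-and-cloze flow (not the authors' code).

def build_prompt(target_words: list[str]) -> str:
    """Compose a prompt asking a text generation model for a story covering all target words."""
    word_list = ", ".join(target_words)
    return (
        "Write a short, coherent story of about 100 words that naturally "
        f"uses every one of these words: {word_list}."
    )

def make_cloze(story: str, target_words: list[str]) -> str:
    """Blank out each target word to turn the generated story into a cloze test."""
    cloze = story
    for word in target_words:
        cloze = re.sub(rf"\b{re.escape(word)}\b", "_" * len(word), cloze, flags=re.IGNORECASE)
    return cloze

if __name__ == "__main__":
    words = ["serendipity", "meander", "lucid"]
    prompt = build_prompt(words)
    # story = generate_story(prompt)  # plug in any text generation model here
    story = "A lucid dream let her meander through serendipity itself."  # canned stand-in
    print(make_cloze(story, words))
```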
Related papers
- SmartPhone: Exploring Keyword Mnemonic with Auto-generated Verbal and Visual Cues [2.8047215329139976]
We propose an end-to-end pipeline for auto-generating verbal and visual cues for keyword mnemonics, and show that it can automatically generate highly memorable cues.
arXiv Detail & Related papers (2023-05-11T20:58:10Z)
- Semi-Supervised Lifelong Language Learning [81.0685290973989]
We explore a novel setting, semi-supervised lifelong language learning (SSLL), where a model learns sequentially arriving language tasks with both labeled and unlabeled data.
Specifically, we dedicate task-specific modules to alleviate catastrophic forgetting and design two modules to exploit unlabeled data.
Experimental results on various language tasks demonstrate our model's effectiveness and superiority over competitive baselines.
arXiv Detail & Related papers (2022-11-23T15:51:33Z)
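The SSLL abstract above mentions task-specific modules but does not detail them; one common realization is a frozen shared backbone with a small trainable head per task, so learning a new task cannot overwrite earlier ones. The PyTorch sketch below is a generic illustration under that assumption, not the authors' architecture.
```python
import torch
import torch.nn as nn

class TaskModularModel(nn.Module):
    """Frozen shared backbone plus one small trainable module per task.

    A generic illustration of task-specific modules for lifelong learning;
    not the SSLL authors' architecture.
    """

    def __init__(self, backbone: nn.Module, hidden_dim: int, num_labels: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # freezing shared weights guards against forgetting
        self.task_heads = nn.ModuleDict()
        self.hidden_dim, self.num_labels = hidden_dim, num_labels

    def add_task(self, task_id: str) -> None:
        # Each newly arriving task gets its own head; earlier heads stay untouched.
        self.task_heads[task_id] = nn.Linear(self.hidden_dim, self.num_labels)

    def forward(self, x: torch.Tensor, task_id: str) -> torch.Tensor:
        features = self.backbone(x)  # assumed to return (batch, hidden_dim) features
        return self.task_heads[task_id](features)
```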
- Unsupervised Neural Stylistic Text Generation using Transfer learning and Adapters [66.17039929803933]
We propose a novel transfer learning framework which updates only 0.3% of model parameters to learn style-specific attributes for response generation.
We learn style-specific attributes from the PERSONALITY-CAPTIONS dataset.
arXiv Detail & Related papers (2022-10-07T00:09:22Z)
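Updating only 0.3% of parameters is characteristic of adapter layers: small bottleneck networks inserted into an otherwise frozen pretrained model. The following is a minimal generic bottleneck adapter in PyTorch; the paper's exact module and placement may differ.
```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add.

    Generic illustration, not the paper's exact module. Inserted into a frozen
    pretrained model, only these few weights are trained, which is how a
    framework can update well under 1% of all parameters.
    """

    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the frozen model's behavior as the default.
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```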
- Leveraging Natural Supervision for Language Representation Learning and Generation [8.083109555490475]
We describe three lines of work that seek to improve the training and evaluation of neural models using naturally-occurring supervision.
We first investigate self-supervised training losses to help enhance the performance of pretrained language models for various NLP tasks.
We propose a framework that uses paraphrase pairs to disentangle semantics and syntax in sentence representations.
arXiv Detail & Related papers (2022-07-21T17:26:03Z)
- An Attention-Based Model for Predicting Contextual Informativeness and Curriculum Learning Applications [11.775048147405725]
We develop models for estimating contextual informativeness, focusing on the instructional aspect of sentences.
We show how our model identifies key contextual elements in a sentence that are likely to contribute most to a reader's understanding of the target word.
We believe our results open new possibilities for applications that support language learning for both human and machine learners.
arXiv Detail & Related papers (2022-04-21T05:17:49Z)
- Pedagogical Word Recommendation: A novel task and dataset on personalized vocabulary acquisition for L2 learners [4.507860128918788]
We propose and release data for a novel task called Pedagogical Word Recommendation (PWR).
The main goal of PWR is to predict whether a given learner knows a given word based on other words the learner has already seen.
As a feature of this intelligent tutoring system (ITS), students can directly mark words they do not know in the questions they solved to create wordbooks.
arXiv Detail & Related papers (2021-12-27T17:52:48Z)
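Predicting whether a learner knows a word from the other words they have seen is naturally posed as collaborative filtering. The toy matrix-factorization sketch below illustrates that framing only; the PWR paper contributes the task and dataset, and this code, including all sizes and sample triples, is hypothetical.
```python
import numpy as np

# Toy matrix-factorization sketch for the PWR framing (all numbers hypothetical):
# factorize a learner-by-word "knows it" matrix and predict the missing entries.
rng = np.random.default_rng(0)
num_learners, num_words, dim = 50, 200, 8
learner_vecs = rng.normal(scale=0.1, size=(num_learners, dim))
word_vecs = rng.normal(scale=0.1, size=(num_words, dim))

# Observed (learner, word, knows) triples, e.g. from wordbook self-reports.
observed = [(0, 3, 1.0), (0, 7, 0.0), (1, 3, 1.0)]

lr = 0.05
for _ in range(200):  # plain SGD on squared reconstruction error
    for u, w, y in observed:
        err = learner_vecs[u] @ word_vecs[w] - y
        grad_u = err * word_vecs[w]
        grad_w = err * learner_vecs[u]
        learner_vecs[u] -= lr * grad_u
        word_vecs[w] -= lr * grad_w

# Score for "does learner 1 know word 7?" (higher means more likely known).
print(learner_vecs[1] @ word_vecs[7])
```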
- Latin writing styles analysis with Machine Learning: New approach to old questions [0.0]
In the Middle Ages, texts were learned by heart and passed on orally from generation to generation.
Taking this specific mode of composing Latin literature into account, we can search for probability patterns that point to the likely sources of specific narrative texts.
arXiv Detail & Related papers (2021-09-01T20:21:45Z)
- VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer [76.3906723777229]
We present VidLanKD, a video-language knowledge distillation method for improving language understanding.
We train a multi-modal teacher model on a video-text dataset, and then transfer its knowledge to a student language model with a text dataset.
In our experiments, VidLanKD achieves consistent improvements over text-only language models and vokenization models.
arXiv Detail & Related papers (2021-07-06T15:41:32Z)
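The abstract does not give VidLanKD's exact objective; a generic knowledge-distillation loss, in which the text-only student matches the video-text teacher's softened predictions alongside the usual hard-label loss, can be sketched as follows (temperature and weighting are illustrative):
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 2.0, alpha: float = 0.5) -> torch.Tensor:
    """Generic KD loss: match softened teacher predictions plus the hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard rescaling so soft and hard terms are comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```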
- Watch and Learn: Mapping Language and Noisy Real-world Videos with Self-supervision [54.73758942064708]
We teach machines to understand visuals and natural language by learning the mapping between sentences and noisy video snippets without explicit annotations.
For training and evaluation, we contribute a new dataset, ApartmenTour, that contains a large number of online videos and subtitles.
arXiv Detail & Related papers (2020-11-19T03:43:56Z)
- On Vocabulary Reliance in Scene Text Recognition [79.21737876442253]
Methods perform well on images containing in-vocabulary words but generalize poorly to images with out-of-vocabulary words.
We call this phenomenon "vocabulary reliance".
We propose a simple yet effective mutual learning strategy to allow models of two families to learn collaboratively.
arXiv Detail & Related papers (2020-05-08T11:16:58Z)
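A mutual learning strategy usually trains two models jointly, with each model fitting the labels while also mimicking its peer's softened predictions. The sketch below shows that generic formulation; the paper's exact losses for the two scene-text-recognition model families may differ.
```python
import torch
import torch.nn.functional as F

def mutual_learning_losses(logits_a: torch.Tensor,
                           logits_b: torch.Tensor,
                           labels: torch.Tensor):
    """One step of generic mutual learning: each model fits the labels and
    additionally mimics its peer's (detached) softened predictions."""
    ce_a = F.cross_entropy(logits_a, labels)
    ce_b = F.cross_entropy(logits_b, labels)
    kl_a = F.kl_div(F.log_softmax(logits_a, dim=-1),
                    F.softmax(logits_b.detach(), dim=-1), reduction="batchmean")
    kl_b = F.kl_div(F.log_softmax(logits_b, dim=-1),
                    F.softmax(logits_a.detach(), dim=-1), reduction="batchmean")
    return ce_a + kl_a, ce_b + kl_b  # back-propagate each loss into its own model
```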
This list is automatically generated from the titles and abstracts of the papers on this site.