Broccoli: Sprinkling Lightweight Vocabulary Learning into Everyday
Information Diets
- URL: http://arxiv.org/abs/2104.07941v1
- Date: Fri, 16 Apr 2021 07:38:05 GMT
- Title: Broccoli: Sprinkling Lightweight Vocabulary Learning into Everyday
Information Diets
- Authors: Roland Aydin, Lars Klein, Arnaud Miribel, Robert West
- Abstract summary: Broccoli is a new paradigm aimed at reducing the required effort by seamlessly embedding vocabulary learning into users' everyday information diets.
We find that the efficacy of the lightweight Broccoli approach is competitive with traditional, memorization-based vocabulary learning.
- Score: 3.305377595864778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The learning of a new language remains to this date a cognitive task that
requires considerable diligence and willpower, recent advances and tools
notwithstanding. In this paper, we propose Broccoli, a new paradigm aimed at
reducing the required effort by seamlessly embedding vocabulary learning into
users' everyday information diets. This is achieved by inconspicuously
switching chosen words encountered by the user for their translation in the
target language. Thus, by seeing words in context, the user can assimilate new
vocabulary without much conscious effort. We validate our approach in a careful
user study, finding that the efficacy of the lightweight Broccoli approach is
competitive with traditional, memorization-based vocabulary learning. The low
cognitive overhead is manifested in a pronounced decrease in learners' usage of
mnemonic learning strategies, as compared to traditional learning. Finally, we
establish that language patterns in typical information diets are compatible
with spaced-repetition strategies, thus enabling an efficient use of the
Broccoli paradigm. Overall, our work establishes the feasibility of a novel and
powerful "install-and-forget" approach for embedded language acquisition.
Related papers
- Exploring Automated Keyword Mnemonics Generation with Large Language Models via Overgenerate-and-Rank [4.383205675898942]
Keyword mnemonics are a technique for memorizing vocabulary through memorable associations with a target word via a verbal cue.
We propose a novel overgenerate-and-rank method that prompts large language models to generate candidate verbal cues (a generic sketch of this pattern appears after this list).
Results show that LLM-generated mnemonics are comparable to human-generated ones in terms of imageability, coherence, and perceived usefulness.
arXiv Detail & Related papers (2024-09-21T00:00:18Z) - CoLLEGe: Concept Embedding Generation for Large Language Models [12.812113254812028]
CoLLEGe is a meta-learning framework capable of generating flexible embeddings for new concepts.
We design a series of tasks to test new concept learning in challenging real-world scenarios.
arXiv Detail & Related papers (2024-03-22T17:26:05Z) - Storyfier: Exploring Vocabulary Learning Support with Text Generation
Models [52.58844741797822]
We develop Storyfier to provide a coherent context for any target words of learners' interest.
Learners generally favor the generated stories for connecting target words and the writing assistance for easing their learning workload.
However, in read-cloze-write learning sessions, participants using Storyfier perform worse in recalling and using target words than those learning with a baseline tool without our AI features.
arXiv Detail & Related papers (2023-08-07T18:25:00Z) - Human Inspired Progressive Alignment and Comparative Learning for
Grounded Word Acquisition [6.47452771256903]
We take inspiration from how human babies acquire their first language and develop a computational process for word acquisition through comparative learning.
Motivated by cognitive findings, we generate a small dataset that enables computational models to compare the similarities and differences of various attributes.
We frame the acquisition of words not only as an information filtration process but also as representation-symbol mapping.
arXiv Detail & Related papers (2023-07-05T19:38:04Z) - Retentive or Forgetful? Diving into the Knowledge Memorizing Mechanism
of Language Models [49.39276272693035]
Large-scale pre-trained language models have shown remarkable memorizing ability.
Vanilla neural networks without pre-training have long been observed to suffer from the catastrophic forgetting problem.
We find that 1) vanilla language models are forgetful; 2) pre-training leads to retentive language models; 3) knowledge relevance and diversification significantly influence memory formation.
arXiv Detail & Related papers (2023-05-16T03:50:38Z) - SmartPhone: Exploring Keyword Mnemonic with Auto-generated Verbal and
Visual Cues [2.8047215329139976]
We propose an end-to-end pipeline for auto-generating verbal and visual cues for keyword mnemonics.
We show that this pipeline can automatically generate highly memorable cues.
arXiv Detail & Related papers (2023-05-11T20:58:10Z) - Semi-Supervised Lifelong Language Learning [81.0685290973989]
We explore a novel setting, semi-supervised lifelong language learning (SSLL), where a model learns sequentially arriving language tasks with both labeled and unlabeled data.
Specifically, we dedicate task-specific modules to alleviate catastrophic forgetting and design two modules to exploit unlabeled data.
Experimental results on various language tasks demonstrate our model's effectiveness and superiority over competitive baselines.
arXiv Detail & Related papers (2022-11-23T15:51:33Z) - On the Efficiency of Integrating Self-supervised Learning and
Meta-learning for User-defined Few-shot Keyword Spotting [51.41426141283203]
User-defined keyword spotting is a task to detect new spoken terms defined by users.
Previous works try to incorporate self-supervised learning models or apply meta-learning algorithms.
Our results show that HuBERT combined with a Matching Network performs best.
arXiv Detail & Related papers (2022-04-01T10:59:39Z) - Short-Term Word-Learning in a Dynamically Changing Environment [63.025297637716534]
We show how to supplement an end-to-end ASR system with a word/phrase memory and a mechanism to access this memory to recognize the words and phrases correctly.
We demonstrate significant improvements in the detection rate of new words with only a minor increase in false alarms.
arXiv Detail & Related papers (2022-03-29T10:05:39Z) - Learning Adaptive Language Interfaces through Decomposition [89.21937539950966]
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z)