Adaptive and Personalized Exercise Generation for Online Language Learning
- URL: http://arxiv.org/abs/2306.02457v1
- Date: Sun, 4 Jun 2023 20:18:40 GMT
- Title: Adaptive and Personalized Exercise Generation for Online Language Learning
- Authors: Peng Cui, Mrinmaya Sachan
- Abstract summary: We study a novel task of adaptive and personalized exercise generation for online language learning.
We combine a knowledge tracing model that estimates each student's evolving knowledge state from their learning history with a controlled text generation model that generates exercises matching that state.
We train and evaluate our model on real-world learner interaction data from Duolingo.
- Score: 39.28263461783446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adaptive learning aims to provide customized educational activities (e.g.,
exercises) to address individual learning needs. However, manual construction
and delivery of such activities is a laborious process. Thus, in this paper, we
study a novel task of adaptive and personalized exercise generation for online
language learning. To this end, we combine a knowledge tracing model that
estimates each student's evolving knowledge states from their learning history
and a controlled text generation model that generates exercise sentences based
on the student's current estimated knowledge state and instructor requirements
of desired properties (e.g., domain knowledge and difficulty). We train and
evaluate our model on real-world learner interaction data from Duolingo and
demonstrate that LMs guided by student states can generate superior exercises.
Then, we discuss the potential use of our model in educational applications
using various simulations. These simulations show that our model can adapt to
students' individual abilities and can facilitate their learning efficiency by
personalizing learning sequences.
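To make the described pipeline concrete, below is a minimal sketch of how a knowledge tracing model and a controlled generator might be coupled. It is illustrative only, not the paper's implementation: the DKT-style LSTM tracer, the `difficulty_bucket` helper, and the control-token prompt format are all assumptions.

```python
# Minimal sketch (not the paper's actual code): a DKT-style knowledge tracer
# whose output conditions an exercise-generating LM via control tokens.
import torch
import torch.nn as nn

class KnowledgeTracer(nn.Module):
    """LSTM over (word, correctness) interaction tokens -> per-word mastery."""
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        # Each input token jointly encodes a word id and whether the student
        # answered it correctly, hence 2 * vocab_size embeddings.
        self.embed = nn.Embedding(2 * vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)  # estimated P(correct) per word

    def forward(self, interactions: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(self.embed(interactions))
        return torch.sigmoid(self.head(h[:, -1]))  # state after the last step

def difficulty_bucket(mastery: torch.Tensor, target_words: list[int]) -> str:
    """Map mean mastery of the target words to a difficulty control token
    (hypothetical token vocabulary)."""
    mean = mastery[target_words].mean().item()
    return "<easy>" if mean < 0.4 else "<medium>" if mean < 0.7 else "<hard>"

# Usage: trace a student's history, then build a conditioned prompt that a
# fine-tuned seq2seq LM would turn into an exercise sentence.
tracer = KnowledgeTracer(vocab_size=5000)
history = torch.randint(0, 10_000, (1, 20))   # 20 past (word, correctness) events
state = tracer(history)                       # shape: (1, 5000)
prompt = difficulty_bucket(state[0], [42, 7]) + " <words> gato correr"
```

The design point the abstract emphasizes is that the generator never sees the raw interaction history; it is conditioned only on the tracer's compressed estimate of the student's knowledge state together with instructor requirements such as target words and difficulty.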
Related papers
- Babysit A Language Model From Scratch: Interactive Language Learning by Trials and Demonstrations [15.394018604836774]
We introduce a trial-and-demonstration (TnD) learning framework that incorporates three components: student trials, teacher demonstrations, and a reward conditioned on language competence.
Our experiments reveal that the TnD approach accelerates word acquisition for student models with an equal or smaller number of parameters.
Our findings suggest that interactive language learning, with teacher demonstrations and student trials, can facilitate efficient word learning in language models.
arXiv Detail & Related papers (2024-05-22T16:57:02Z)
- Toward In-Context Teaching: Adapting Examples to Students' Misconceptions [54.82965010592045]
We introduce a suite of models and evaluation methods we call AdapT.
AToM is a new probabilistic model for adaptive teaching that jointly infers students' past beliefs and optimizes for the correctness of their future beliefs.
Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive models for solving it.
arXiv Detail & Related papers (2024-05-07T17:05:27Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 on data from YODA yields significant performance gains over standard supervised fine-tuning (SFT).
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Towards Scalable Adaptive Learning with Graph Neural Networks and Reinforcement Learning [0.0]
We introduce a flexible and scalable approach towards the problem of learning path personalization.
Our model is a sequential recommender system based on a graph neural network.
Our results demonstrate that it can learn to make good recommendations in the small-data regime.
arXiv Detail & Related papers (2023-05-10T18:16:04Z)
- Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both model outputs and ground-truth annotations perform poorly in terms of equitable tutoring.
arXiv Detail & Related papers (2023-01-24T11:00:17Z)
- What Artificial Neural Networks Can Tell Us About Human Language Acquisition [47.761188531404066]
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language.
To increase the relevance of learnability results from computational models, we need to train model learners without significant advantages over humans.
arXiv Detail & Related papers (2022-08-17T00:12:37Z)
- Curriculum learning for language modeling [2.2475845406292714]
Language models have proven transformational for the natural language processing community.
However, these models are expensive, energy-intensive, and challenging to train.
Curriculum learning is an alternative that imposes a structured training regime, typically ordering examples from easier to harder.
arXiv Detail & Related papers (2021-08-04T16:53:43Z)
- Self-Paced Learning for Neural Machine Translation [55.41314278859938]
We propose self-paced learning for neural machine translation (NMT) training, in which the model weights each training example by its own confidence in it (a generic sketch of this idea follows the list below).
We show that the proposed model yields better performance than strong baselines.
arXiv Detail & Related papers (2020-10-09T11:33:16Z)
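As referenced in the last entry above, self-paced learning can be sketched generically in a few lines. This is a toy illustration of the general idea (hard example weights in the style of classic self-paced learning), not the cited paper's exact method, which derives soft confidence weights from the model itself:

```python
# Generic self-paced weighting sketch; names and thresholding are illustrative.
import torch

def self_paced_weights(per_example_loss: torch.Tensor, pace: float) -> torch.Tensor:
    """Keep (weight = 1) examples the model already handles well, i.e. whose
    loss is below the current pace; defer the rest until pace is raised."""
    return (per_example_loss.detach() < pace).float()

def weighted_step(model, batch, criterion, optimizer, pace: float) -> float:
    x, y = batch
    loss_per_ex = criterion(model(x), y)        # criterion with reduction="none"
    w = self_paced_weights(loss_per_ex, pace)
    loss = (w * loss_per_ex).sum() / w.sum().clamp(min=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full training loop, `pace` starts small and is raised each epoch, so the model first fits the examples it already handles well and is gradually exposed to harder ones.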
This list is automatically generated from the titles and abstracts of the papers in this site.