Adaptive and Personalized Exercise Generation for Online Language
Learning
- URL: http://arxiv.org/abs/2306.02457v1
- Date: Sun, 4 Jun 2023 20:18:40 GMT
- Title: Adaptive and Personalized Exercise Generation for Online Language
Learning
- Authors: Peng Cui, Mrinmaya Sachan
- Abstract summary: We study a novel task of adaptive and personalized exercise generation for online language learning.
We combine a knowledge tracing model that estimates each student's evolving knowledge states from their learning history with a controlled text generation model that produces exercises matched to those states.
We train and evaluate our model on real-world learner interaction data from Duolingo.
- Score: 39.28263461783446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adaptive learning aims to provide customized educational activities (e.g.,
exercises) to address individual learning needs. However, manual construction
and delivery of such activities is a laborious process. Thus, in this paper, we
study a novel task of adaptive and personalized exercise generation for online
language learning. To this end, we combine a knowledge tracing model that
estimates each student's evolving knowledge states from their learning history
and a controlled text generation model that generates exercise sentences based
on the student's current estimated knowledge state and instructor requirements
of desired properties (e.g., domain knowledge and difficulty). We train and
evaluate our model on real-world learner interaction data from Duolingo and
demonstrate that LMs guided by student states can generate superior exercises.
Then, we discuss the potential use of our model in educational applications
using various simulations. These simulations show that our model can adapt to
students' individual abilities and can facilitate their learning efficiency by
personalizing learning sequences.
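The pipeline the abstract describes (a knowledge tracing model whose state estimates condition an exercise generator) can be sketched in miniature. Everything below is an illustrative assumption, not the paper's actual method: the class names are invented, the per-skill exponential-moving-average update is a toy stand-in for a learned knowledge tracer, and the +0.1 difficulty offset merely gestures at targeting exercises slightly above current mastery.

```python
from dataclasses import dataclass, field

@dataclass
class ToyKnowledgeTracer:
    """Per-skill mastery estimate updated as an exponential moving average
    of answer correctness (a toy stand-in for a learned knowledge tracer)."""
    rate: float = 0.3  # step size of the moving-average update
    state: dict = field(default_factory=dict)

    def update(self, skill: str, correct: bool) -> None:
        prev = self.state.get(skill, 0.5)  # uninformative prior mastery
        self.state[skill] = prev + self.rate * (float(correct) - prev)

    def mastery(self, skill: str) -> float:
        return self.state.get(skill, 0.5)

def pick_target_difficulty(mastery: float) -> float:
    """Condition a (hypothetical) exercise generator on a difficulty
    slightly above the student's estimated mastery, capped at 1.0."""
    return min(1.0, mastery + 0.1)

tracer = ToyKnowledgeTracer()
for correct in [True, True, False, True]:  # one student's answer history
    tracer.update("past_tense", correct)

m = tracer.mastery("past_tense")           # rises with mostly-correct answers
difficulty = pick_target_difficulty(m)     # target for the next exercise
```

In the paper's actual system the tracer is trained on Duolingo interaction logs and the generator is a controlled language model; this sketch only shows the control flow of estimating a state and conditioning generation on it.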
Related papers
- One Size doesn't Fit All: A Personalized Conversational Tutoring Agent for Mathematics Instruction [23.0134120158482]
We propose a PersonAlized Conversational tutoring agEnt (PACE) for mathematics instruction.
PACE simulates students' learning styles based on the Felder and Silverman learning style model, aligning with each student's persona.
To further enhance students' comprehension, PACE employs the Socratic teaching method to provide instant feedback and encourage deep thinking.
arXiv Detail & Related papers (2025-02-18T08:24:52Z)
- Dynamic Skill Adaptation for Large Language Models [78.31322532135272]
We present Dynamic Skill Adaptation (DSA), an adaptive and dynamic framework for adapting novel and complex skills to Large Language Models (LLMs).
For every skill, we use LLMs to generate both textbook-like data containing detailed skill descriptions for pre-training and exercise-like data that explicitly applies the skills to solve problems for instruction-tuning.
Experiments on large language models such as LLAMA and Mistral demonstrate the effectiveness of our proposed methods in adapting math reasoning skills and social study skills.
arXiv Detail & Related papers (2024-12-26T22:04:23Z)
- Toward In-Context Teaching: Adapting Examples to Students' Misconceptions [54.82965010592045]
We introduce a suite of models and evaluation methods we call AdapT.
AToM is a new probabilistic model for adaptive teaching that jointly infers students' past beliefs and optimizes for the correctness of their future beliefs.
Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive models for solving it.
arXiv Detail & Related papers (2024-05-07T17:05:27Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT, yielding significant performance gains.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Towards Scalable Adaptive Learning with Graph Neural Networks and Reinforcement Learning [0.0]
We introduce a flexible and scalable approach towards the problem of learning path personalization.
Our model is a sequential recommender system based on a graph neural network.
Our results demonstrate that it can learn to make good recommendations in the small-data regime.
arXiv Detail & Related papers (2023-05-10T18:16:04Z)
- What Artificial Neural Networks Can Tell Us About Human Language Acquisition [47.761188531404066]
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language.
To increase the relevance of learnability results from computational models, we need to train model learners without significant advantages over humans.
arXiv Detail & Related papers (2022-08-17T00:12:37Z)
- Curriculum learning for language modeling [2.2475845406292714]
Language models have proven transformational for the natural language processing community.
However, these models are expensive, energy-intensive, and challenging to train.
Curriculum learning is a method that employs a structured training regime instead.
arXiv Detail & Related papers (2021-08-04T16:53:43Z)
- Self-Paced Learning for Neural Machine Translation [55.41314278859938]
We propose self-paced learning for neural machine translation (NMT) training.
We show that the proposed model yields better performance than strong baselines.
arXiv Detail & Related papers (2020-10-09T11:33:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.