qDKT: Question-centric Deep Knowledge Tracing
- URL: http://arxiv.org/abs/2005.12442v1
- Date: Mon, 25 May 2020 23:43:55 GMT
- Title: qDKT: Question-centric Deep Knowledge Tracing
- Authors: Shashank Sonkar, Andrew E. Waters, Andrew S. Lan, Phillip J. Grimaldi,
Richard G. Baraniuk
- Abstract summary: We introduce qDKT, a variant of DKT that models every learner's success probability on individual questions over time.
qDKT incorporates graph Laplacian regularization to smooth predictions under each skill.
Experiments on several real-world datasets show that qDKT achieves state-of-the-art performance in predicting learner outcomes.
- Score: 29.431121650577396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge tracing (KT) models, e.g., the deep knowledge tracing (DKT) model,
track an individual learner's acquisition of skills over time by examining the
learner's performance on questions related to those skills. A practical
limitation in most existing KT models is that all questions nested under a
particular skill are treated as equivalent observations of a learner's ability,
which is an inaccurate assumption in real-world educational scenarios. To
overcome this limitation, we introduce qDKT, a variant of DKT that models every
learner's success probability on individual questions over time. First, qDKT
incorporates graph Laplacian regularization to smooth predictions under each
skill, which is particularly useful when the number of questions in the dataset
is large. Second, qDKT uses an initialization scheme inspired by the fastText
algorithm, which has found success in a variety of language modeling tasks. Our
experiments on several real-world datasets show that qDKT achieves
state-of-the-art performance on predicting learner outcomes. Because of this,
qDKT can serve as a simple, yet tough-to-beat, baseline for new
question-centric KT models.
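As a concrete illustration of the graph Laplacian regularizer, below is a minimal PyTorch sketch that penalizes squared differences between the predicted success probabilities of questions sharing a skill. The function name, tensor shapes, and the lambda_reg weight mentioned in the comments are illustrative assumptions, not the authors' released implementation.

```python
import torch

def laplacian_penalty(probs: torch.Tensor, skills: torch.Tensor) -> torch.Tensor:
    """Sum of (p_i - p_j)^2 over question pairs (i, j) that share a skill.

    Equivalent to p^T L p, where L is the Laplacian of the question graph
    whose edges connect questions labeled with the same skill.
    probs:  (Q,) predicted success probabilities, one per question.
    skills: (Q,) integer skill label for each question.
    """
    same_skill = (skills.unsqueeze(0) == skills.unsqueeze(1)).float()
    same_skill.fill_diagonal_(0.0)                 # no self-edges
    diff_sq = (probs.unsqueeze(0) - probs.unsqueeze(1)) ** 2
    return 0.5 * (same_skill * diff_sq).sum()      # 0.5: each pair counted twice

# Example: questions 0 and 1 share a skill, question 2 does not.
probs = torch.tensor([0.9, 0.2, 0.5])
skills = torch.tensor([0, 0, 1])
print(laplacian_penalty(probs, skills))  # (0.9 - 0.2)^2 = 0.49
# A hypothetical training objective would add this to the response-prediction
# loss, e.g. loss = bce + lambda_reg * laplacian_penalty(probs, skills).
```

The penalty nudges same-skill questions toward similar predictions without forcing them to be identical, which is how question-level modeling can stay stable even when many questions have only a few observations.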
Related papers
- Automated Knowledge Concept Annotation and Question Representation Learning for Knowledge Tracing [59.480951050911436]
We present KCQRL, a framework for automated knowledge concept annotation and question representation learning.
We demonstrate the effectiveness of KCQRL across 15 KT algorithms on two large real-world Math learning datasets.
arXiv Detail & Related papers (2024-10-02T16:37:19Z)
- SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105]
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
The paper proposes a Structure-aware Inductive Knowledge Tracing model with a large language model (dubbed SINKT).
SINKT predicts the student's response to the target question by interacting with the student's knowledge state and the question representation.
arXiv Detail & Related papers (2024-07-01T12:44:52Z)
- Language Model Can Do Knowledge Tracing: Simple but Effective Method to Integrate Language Model and Knowledge Tracing Task [3.1459398432526267]
This paper proposes Language model-based Knowledge Tracing (LKT), a novel framework that integrates pre-trained language models (PLMs) with Knowledge Tracing methods.
LKT effectively incorporates textual information and significantly outperforms previous KT models on large benchmark datasets.
arXiv Detail & Related papers (2024-06-05T03:26:59Z)
- A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models [26.294808618068146]
Knowledge tracing plays a crucial role in predicting students' future performance.
Deep neural networks (DNNs) have shown great potential in solving the KT problem.
However, there still exist some important challenges when applying deep learning techniques to model the KT process.
arXiv Detail & Related papers (2024-03-12T05:15:42Z)
- Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual learning (CL) aims to overcome catastrophic forgetting of previously acquired knowledge when learning new tasks.
This paper presents a comprehensive survey of the latest advancements in pre-trained model (PTM) based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z)
- Improving Interpretability of Deep Sequential Knowledge Tracing Models with Question-centric Cognitive Representations [22.055683237994696]
We present QIKT, a question-centric interpretable KT model to address the above challenges.
The proposed QIKT approach explicitly models students' knowledge state variations at a fine-grained level.
It outperforms a wide range of deep learning based KT models in terms of prediction accuracy with better model interpretability.
arXiv Detail & Related papers (2023-02-14T08:14:30Z)
- On Measuring the Intrinsic Few-Shot Hardness of Datasets [49.37562545777455]
We show that few-shot hardness may be intrinsic to datasets, for a given pre-trained model.
We propose a simple and lightweight metric called "Spread" that captures the intuition behind what makes few-shot learning possible.
Our metric better accounts for few-shot hardness compared to existing notions of hardness, and is 8-100x faster to compute.
arXiv Detail & Related papers (2022-11-16T18:53:52Z)
- pyKT: A Python Library to Benchmark Deep Learning based Knowledge Tracing Models [46.05383477261115]
Knowledge tracing (KT) is the task of using students' historical learning interaction data to model their knowledge mastery over time.
The real strengths of deep learning based knowledge tracing (DLKT) approaches remain somewhat unclear, and proper measurement and analysis of these approaches remain a challenge.
We introduce a comprehensive Python-based benchmark platform, pyKT, to guarantee valid comparisons across DLKT methods.
arXiv Detail & Related papers (2022-06-23T02:42:47Z)
- A Survey of Knowledge Tracing: Models, Variants, and Applications [70.69281873057619]
Knowledge Tracing is one of the fundamental tasks for student behavioral data analysis.
We present three types of fundamental KT models with distinct technical routes.
We discuss potential directions for future research in this rapidly growing field.
arXiv Detail & Related papers (2021-05-06T13:05:55Z)
- Context-Aware Attentive Knowledge Tracing [21.397976659857793]
We propose attentive knowledge tracing (AKT), which couples flexible attention-based neural network models with a series of novel, interpretable model components.
AKT uses a novel monotonic attention mechanism that relates a learner's future responses to assessment questions to their past responses.
We show that AKT outperforms existing KT methods (by up to 6% in AUC in some cases) on predicting future learner responses.
arXiv Detail & Related papers (2020-07-24T02:45:43Z)
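To give a flavor of the monotonic attention mechanism, here is a simplified, hypothetical sketch in which attention scores decay exponentially with temporal distance before a causal softmax. AKT itself learns the decay rate and measures distance in a context-aware way rather than using the raw index gap assumed here.

```python
import torch
import torch.nn.functional as F

def decayed_causal_attention(q, k, v, theta=0.5):
    """Scaled dot-product attention downweighted by temporal distance.

    Subtracting theta * |t - tau| from the scores in log space multiplies
    the attention weights by exp(-theta * |t - tau|), so older interactions
    contribute less. q, k, v: (T, d) sequences; theta: fixed decay rate
    (learned and context-aware in the actual AKT model).
    """
    T, d = q.shape
    scores = q @ k.T / d ** 0.5                         # (T, T) similarities
    t = torch.arange(T)
    dist = (t.unsqueeze(1) - t.unsqueeze(0)).abs().float()
    scores = scores - theta * dist                      # exponential decay
    future = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(future, float("-inf"))  # attend to past only
    return F.softmax(scores, dim=-1) @ v
```

Downweighting distant interactions encodes the assumption that older evidence about a learner's knowledge state matters less, which is the intuition behind relating future responses to past responses in this monotonic way.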