Code-DKT: A Code-based Knowledge Tracing Model for Programming Tasks
- URL: http://arxiv.org/abs/2206.03545v1
- Date: Tue, 7 Jun 2022 19:29:44 GMT
- Title: Code-DKT: A Code-based Knowledge Tracing Model for Programming Tasks
- Authors: Yang Shi, Min Chi, Tiffany Barnes, Thomas Price
- Abstract summary: We propose Code-based Deep Knowledge Tracing (Code-DKT), a model that uses an attention mechanism to automatically extract and select domain-specific code features to extend DKT.
We compared the effectiveness of Code-DKT against Bayesian and Deep Knowledge Tracing (BKT and DKT) on a dataset from a class of 50 students attempting to solve 5 programming assignments.
- Score: 10.474382290378049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge tracing (KT) models are a popular approach for predicting students'
future performance at practice problems using their prior attempts. Though many
innovations have been made in KT, most models, including the state-of-the-art
Deep KT (DKT), use only whether each student's response is correct or
incorrect, ignoring its content. In this work, we propose Code-based Deep
Knowledge Tracing (Code-DKT), a model that uses an attention mechanism to
automatically extract and select domain-specific code features to extend DKT.
We compared the effectiveness of Code-DKT against Bayesian and Deep Knowledge
Tracing (BKT and DKT) on a dataset from a class of 50 students attempting to
solve 5 introductory programming assignments. Our results show that Code-DKT
consistently outperforms DKT by 3.07-4.00% AUC across the 5 assignments, a
comparable improvement to other state-of-the-art domain-general KT models over
DKT. Finally, we analyze problem-specific performance through a set of case
studies for one assignment to demonstrate when and how code features improve
Code-DKT's predictions.
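The core idea above — attention weights selecting domain-specific code features that then augment DKT's usual correct/incorrect input — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the feature vectors, query, and pooling scheme are assumptions for demonstration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, features):
    """Score each code-feature vector against a query, then pool by weight.

    The attention weights indicate which code features the model "selects";
    the pooled vector summarizes the submission's code.
    """
    scores = softmax([sum(q * f for q, f in zip(query, feat)) for feat in features])
    dim = len(features[0])
    pooled = [sum(w * feat[d] for w, feat in zip(scores, features)) for d in range(dim)]
    return scores, pooled

def build_dkt_input(correct, pooled):
    """Plain DKT consumes only a correct/incorrect one-hot; a Code-DKT-style
    model additionally appends the pooled code-feature vector."""
    onehot = [1.0, 0.0] if correct else [0.0, 1.0]
    return onehot, pooled

# Toy code-feature vectors for one submission (assumed, for illustration).
features = [[0.2, 0.9], [0.8, 0.1], [0.5, 0.5]]
weights, pooled = attend([1.0, 0.0], features)
onehot, extra = build_dkt_input(False, pooled)
```

In a full model, `onehot + extra` would feed a recurrent DKT network at each attempt; here only the feature-selection step is shown.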
Related papers
- Automated Knowledge Concept Annotation and Question Representation Learning for Knowledge Tracing [59.480951050911436]
We present KCQRL, a framework for automated knowledge concept annotation and question representation learning.
We demonstrate the effectiveness of KCQRL across 15 KT algorithms on two large real-world Math learning datasets.
arXiv Detail & Related papers (2024-10-02T16:37:19Z)
- SIaM: Self-Improving Code-Assisted Mathematical Reasoning of Large Language Models [54.78329741186446]
We propose a novel paradigm that uses a code-based critic model to guide steps including question-code data construction, quality control, and complementary evaluation.
Experiments across both in-domain and out-of-domain benchmarks in English and Chinese demonstrate the effectiveness of the proposed paradigm.
arXiv Detail & Related papers (2024-08-28T06:33:03Z)
- Language Model Can Do Knowledge Tracing: Simple but Effective Method to Integrate Language Model and Knowledge Tracing Task [3.1459398432526267]
This paper proposes Language model-based Knowledge Tracing (LKT), a novel framework that integrates pre-trained language models (PLMs) with Knowledge Tracing methods.
LKT effectively incorporates textual information and significantly outperforms previous KT models on large benchmark datasets.
arXiv Detail & Related papers (2024-06-05T03:26:59Z)
- Improving Low-Resource Knowledge Tracing Tasks by Supervised Pre-training and Importance Mechanism Fine-tuning [25.566963415155325]
We propose a low-resource KT framework called LoReKT to address the above challenges.
Inspired by the prevalent "pre-training and fine-tuning" paradigm, we aim to learn transferable parameters and representations from rich-resource KT datasets.
We design an encoding mechanism to incorporate student interactions from multiple KT data sources.
arXiv Detail & Related papers (2024-03-11T13:44:43Z)
- pyKT: A Python Library to Benchmark Deep Learning based Knowledge Tracing Models [46.05383477261115]
Knowledge tracing (KT) is the task of using students' historical learning interaction data to model their knowledge mastery over time.
The real strengths of DLKT approaches remain somewhat unclear, and proper measurement and analysis of these approaches remain a challenge.
We introduce a comprehensive Python-based benchmark platform, pyKT, to guarantee valid comparisons across DLKT methods.
arXiv Detail & Related papers (2022-06-23T02:42:47Z)
- Enhancing Knowledge Tracing via Adversarial Training [5.461665809706664]
We study the problem of knowledge tracing (KT) where the goal is to trace the students' knowledge mastery over time.
Recent advances in KT have increasingly concentrated on exploring deep neural networks (DNNs) to improve the performance of KT.
We propose an efficient adversarial training (AT) based KT method (ATKT) to enhance KT models' generalization and thus push the limits of KT.
arXiv Detail & Related papers (2021-08-10T03:35:13Z)
- A Survey of Knowledge Tracing: Models, Variants, and Applications [70.69281873057619]
Knowledge Tracing is one of the fundamental tasks for student behavioral data analysis.
We present three types of fundamental KT models with distinct technical routes.
We discuss potential directions for future research in this rapidly growing field.
arXiv Detail & Related papers (2021-05-06T13:05:55Z)
- Consistency and Monotonicity Regularization for Neural Knowledge Tracing [50.92661409499299]
Knowledge Tracing (KT), which tracks a human's knowledge acquisition, is a central component of online learning and AI in Education.
We propose three types of novel data augmentation, coined replacement, insertion, and deletion, along with corresponding regularization losses.
Extensive experiments on various KT benchmarks show that our regularization scheme consistently improves the model performances.
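The three augmentations named above can be sketched on a KT interaction sequence. This is an illustrative sketch only: the paper's exact augmentation details are not given here, so the `(question_id, correct)` sequence format is an assumption.

```python
def replace_aug(seq, idx, new_item):
    """Replacement: swap out one interaction in the sequence."""
    out = list(seq)
    out[idx] = new_item
    return out

def insert_aug(seq, idx, new_item):
    """Insertion: add a new interaction at a given position."""
    out = list(seq)
    out.insert(idx, new_item)
    return out

def delete_aug(seq, idx):
    """Deletion: drop one interaction from the sequence."""
    out = list(seq)
    del out[idx]
    return out

# Toy interaction history: (question_id, correct) pairs.
seq = [(1, 1), (2, 0), (3, 1)]
replaced = replace_aug(seq, 1, (2, 1))
inserted = insert_aug(seq, 1, (4, 0))
deleted = delete_aug(seq, 0)
```

In the paper's scheme, each augmented sequence would additionally be paired with a regularization loss tying the model's predictions on the original and augmented sequences together.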
arXiv Detail & Related papers (2021-05-03T02:36:29Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- qDKT: Question-centric Deep Knowledge Tracing [29.431121650577396]
We introduce qDKT, a variant of DKT that models every learner's success probability on individual questions over time.
qDKT incorporates graph Laplacian regularization to smooth predictions under each skill.
Experiments on several real-world datasets show that qDKT achieves state-of-the-art performance in predicting learner outcomes.
arXiv Detail & Related papers (2020-05-25T23:43:55Z)
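The graph Laplacian regularization mentioned for qDKT can be illustrated with a small example: questions sharing a skill are connected by edges, and the penalty sums squared differences of predicted success probabilities across edges, discouraging very different predictions on similar questions. The graph and probabilities below are toy values, assumed for illustration.

```python
def laplacian_penalty(probs, edges):
    """Sum of squared prediction differences over edges of the
    question-similarity graph; equivalent to p^T L p for the graph
    Laplacian L of an unweighted graph."""
    return sum((probs[i] - probs[j]) ** 2 for i, j in edges)

# Three questions under one skill, fully connected (toy example).
probs = [0.9, 0.85, 0.2]
edges = [(0, 1), (1, 2), (0, 2)]
penalty = laplacian_penalty(probs, edges)
```

Question 2's prediction (0.2) disagrees sharply with its neighbors, so it dominates the penalty; minimizing this term alongside the prediction loss smooths predictions within each skill.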
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.