Improving Interpretability of Deep Sequential Knowledge Tracing Models
with Question-centric Cognitive Representations
- URL: http://arxiv.org/abs/2302.06885v1
- Date: Tue, 14 Feb 2023 08:14:30 GMT
- Title: Improving Interpretability of Deep Sequential Knowledge Tracing Models
with Question-centric Cognitive Representations
- Authors: Jiahao Chen, Zitao Liu, Shuyan Huang, Qiongqiong Liu, Weiqi Luo
- Abstract summary: We present QIKT, a question-centric interpretable KT model to address the above challenges.
The proposed QIKT approach explicitly models students' knowledge state variations at a fine-grained level.
It outperforms a wide range of deep learning based KT models in terms of prediction accuracy with better model interpretability.
- Score: 22.055683237994696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge tracing (KT) is a crucial technique to predict students' future
performance by observing their historical learning processes. Due to the
powerful representation ability of deep neural networks, remarkable progress
has been made by using deep learning techniques to solve the KT problem. The
majority of existing approaches rely on the \emph{homogeneous question}
assumption that questions have equivalent contributions if they share the same
set of knowledge components. Unfortunately, this assumption is inaccurate in
real-world educational scenarios. Furthermore, it is very challenging to
interpret the prediction results from the existing deep learning based KT
models. Therefore, in this paper, we present QIKT, a question-centric
interpretable KT model to address the above challenges. The proposed QIKT
approach explicitly models students' knowledge state variations at a
fine-grained level with question-sensitive cognitive representations that are
jointly learned from a question-centric knowledge acquisition module and a
question-centric problem solving module. Meanwhile, the QIKT utilizes an item
response theory based prediction layer to generate interpretable prediction
results. The proposed QIKT model is evaluated on three public real-world
educational datasets. The results demonstrate that our approach is superior on
the KT prediction task, and it outperforms a wide range of deep learning based
KT models in terms of prediction accuracy with better model interpretability.
To encourage reproducible results, we have provided all the datasets and code
at \url{https://pykt.org/}.
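The abstract's item response theory based prediction layer can be illustrated with a minimal Rasch (1PL) sketch. The function and parameter names below are illustrative, not taken from the paper; QIKT's actual layer combines learned knowledge-state and question features, whereas this shows only the core IRT idea that success probability rises with ability and falls with difficulty.

```python
import math

def irt_predict(theta: float, beta: float) -> float:
    """1PL (Rasch) item response model: probability of a correct
    answer given student ability `theta` and question difficulty
    `beta`. The prediction is interpretable because `theta` and
    `beta` have direct cognitive meanings."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

# On the same question, a more able student has a higher
# predicted probability of answering correctly.
p_weak = irt_predict(theta=-1.0, beta=0.0)
p_strong = irt_predict(theta=1.0, beta=0.0)
```

In a deep KT model, `theta` would come from the learned knowledge state at each timestep and `beta` from a question embedding, so the sigmoid output remains readable as "ability minus difficulty".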
Related papers
- Automated Knowledge Concept Annotation and Question Representation Learning for Knowledge Tracing [59.480951050911436]
We present KCQRL, a framework for automated knowledge concept annotation and question representation learning.
We demonstrate the effectiveness of KCQRL across 15 KT algorithms on two large real-world Math learning datasets.
arXiv Detail & Related papers (2024-10-02T16:37:19Z)
- SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105]
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
The paper proposes a Structure-aware Inductive Knowledge Tracing model with a large language model (dubbed SINKT).
SINKT predicts the student's response to the target question by interacting with the student's knowledge state and the question representation.
arXiv Detail & Related papers (2024-07-01T12:44:52Z)
- A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models [26.294808618068146]
Knowledge tracing plays a crucial role in predicting students' future performance.
Deep neural networks (DNNs) have shown great potential in solving the KT problem.
However, there still exist some important challenges when applying deep learning techniques to model the KT process.
arXiv Detail & Related papers (2024-03-12T05:15:42Z)
- Enhancing Deep Knowledge Tracing with Auxiliary Tasks [24.780533765606922]
We propose AT-DKT to improve the prediction performance of the original deep knowledge tracing model.
We conduct comprehensive experiments on three real-world educational datasets and compare the proposed approach to both deep sequential KT models and non-sequential models.
arXiv Detail & Related papers (2023-02-14T08:21:37Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
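The idea of using kNN over learned representations to surface influential training examples can be sketched as follows. This is a generic illustration under assumed names (`train_reprs`, `k_nearest_training_examples`), not the paper's implementation, which operates on a fine-tuned model's hidden states.

```python
import numpy as np

def k_nearest_training_examples(train_reprs: np.ndarray,
                                test_repr: np.ndarray,
                                k: int = 3) -> np.ndarray:
    """Return indices of the k training examples whose hidden
    representations are closest (Euclidean distance) to the test
    example's representation; these are candidate explanations
    for the model's prediction."""
    dists = np.linalg.norm(train_reprs - test_repr, axis=1)
    return np.argsort(dists)[:k]

# Toy hidden states: examples 0 and 2 lie closest to the query.
train = np.array([[0.0, 0.1], [5.0, 5.0], [0.2, 0.0], [9.0, 9.0]])
query = np.array([0.0, 0.0])
neighbors = k_nearest_training_examples(train, query, k=2)
```

Inspecting the retrieved neighbors (e.g. whether they share a spurious surface feature with the query) is what makes this a diagnostic for learned spurious associations.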
arXiv Detail & Related papers (2020-10-18T16:55:25Z)
- Deep Knowledge Tracing with Learning Curves [0.9088303226909278]
We propose a Convolution-Augmented Knowledge Tracing (CAKT) model in this paper.
The model employs three-dimensional convolutional neural networks to explicitly learn a student's recent experience in applying the same knowledge concept as the one in the next question.
CAKT achieves new state-of-the-art performance in predicting students' responses compared with existing models.
arXiv Detail & Related papers (2020-07-26T15:24:51Z)
- Context-Aware Attentive Knowledge Tracing [21.397976659857793]
We propose attentive knowledge tracing, which couples flexible attention-based neural network models with a series of novel, interpretable model components.
AKT uses a novel monotonic attention mechanism that relates a learner's future responses to assessment questions to their past responses.
We show that AKT outperforms existing KT methods (by up to 6% in AUC in some cases) on predicting future learner responses.
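The monotonic attention idea, relating future responses to past ones while down-weighting older interactions, can be sketched with a simple exponential temporal decay. This is a simplified illustration under assumed names, not AKT's exact mechanism, which learns context-aware distance measures.

```python
import numpy as np

def monotonic_attention(scores: np.ndarray, decay_rate: float = 0.5) -> np.ndarray:
    """Softmax attention over past interactions 0..t, with raw
    scores multiplied by an exponential decay in temporal distance
    from the current step t, so recent interactions dominate."""
    t = len(scores) - 1
    distances = t - np.arange(len(scores))       # steps back in time
    decayed = scores * np.exp(-decay_rate * distances)
    weights = np.exp(decayed - decayed.max())    # stable softmax
    return weights / weights.sum()

# With equal raw scores, weights rise monotonically toward
# the most recent interaction.
w = monotonic_attention(np.array([1.0, 1.0, 1.0]), decay_rate=0.7)
```

The monotonicity is what makes the component interpretable: an interaction can never matter more than an equally relevant, more recent one.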
arXiv Detail & Related papers (2020-07-24T02:45:43Z)
- qDKT: Question-centric Deep Knowledge Tracing [29.431121650577396]
We introduce qDKT, a variant of DKT that models every learner's success probability on individual questions over time.
qDKT incorporates graph Laplacian regularization to smooth predictions under each skill.
Experiments on several real-world datasets show that qDKT achieves state-of-the-art performance on predicting learner outcomes.
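The graph Laplacian regularizer mentioned above can be sketched as a smoothness penalty: for questions connected in a skill graph, it penalizes the squared difference of their predictions, which equals the quadratic form p^T L p with L = D - A. The names below are illustrative, not qDKT's code.

```python
import numpy as np

def laplacian_penalty(adjacency: np.ndarray, predictions: np.ndarray) -> float:
    """Graph Laplacian smoothness penalty: equals the sum of
    (p_i - p_j)^2 over connected question pairs (i, j), e.g.
    questions sharing a skill, so similar questions are pushed
    toward similar predicted success probabilities."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency           # L = D - A
    return float(predictions @ laplacian @ predictions)

# Questions 0 and 1 share a skill (connected); question 2 does not.
A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
smooth = laplacian_penalty(A, np.array([0.8, 0.8, 0.1]))  # similar preds
rough = laplacian_penalty(A, np.array([0.9, 0.1, 0.1]))   # dissimilar preds
```

Adding this penalty to the training loss is what "smooths predictions under each skill" means: the model pays a cost whenever two same-skill questions receive very different predictions.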
arXiv Detail & Related papers (2020-05-25T23:43:55Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt the post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret RNN-based DLKT model.
Experiment results show the feasibility using the LRP method for interpreting the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
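The layer-wise relevance propagation (LRP) method used in the entry above can be sketched for a single linear layer with the epsilon rule: output relevance is redistributed to inputs in proportion to each input's contribution to the pre-activation. This is a minimal one-layer illustration under assumed names, not the paper's full RNN-LRP procedure, which propagates relevance back through every timestep.

```python
import numpy as np

def lrp_linear(x: np.ndarray, w: np.ndarray,
               relevance_out: float, eps: float = 1e-6) -> np.ndarray:
    """Epsilon-rule LRP for one linear unit z = w @ x: each input j
    receives relevance proportional to its contribution w_j * x_j,
    with `eps` stabilizing the division when z is near zero."""
    z = w @ x
    contributions = w * x                        # per-input contribution
    return contributions * relevance_out / (z + np.sign(z) * eps)

# Inputs 0 and 1 each contribute half of z; input 2 is inactive.
x = np.array([1.0, 2.0, 0.0])
w = np.array([0.5, 0.25, 3.0])
r = lrp_linear(x, w, relevance_out=1.0)
```

The key property is conservation: the input relevances sum (up to `eps`) to the output relevance, so each interaction in a student's history receives an attributable share of the prediction.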
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.