An Empirical Comparison of Deep Learning Models for Knowledge Tracing on Large-Scale Dataset
- URL: http://arxiv.org/abs/2101.06373v1
- Date: Sat, 16 Jan 2021 04:58:17 GMT
- Title: An Empirical Comparison of Deep Learning Models for Knowledge Tracing on Large-Scale Dataset
- Authors: Shalini Pandey, George Karypis, Jaideep Srivastava
- Abstract summary: Knowledge tracing is the problem of modeling each student's mastery of knowledge concepts.
The recent release of a large-scale student performance dataset \cite{choi2019ednet} motivates an analysis of the performance of deep learning approaches.
- Score: 10.329254031835953
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Knowledge tracing (KT) is the problem of modeling each student's mastery of knowledge concepts (KCs) as they engage with a sequence of learning activities. It is an active research area that helps provide learners with personalized feedback and materials. Various deep learning techniques have been proposed for solving KT. The recent release of a large-scale student performance dataset \cite{choi2019ednet} motivates an analysis of the performance of the deep learning approaches that have been proposed to solve KT. Our analysis can help practitioners understand which method to adopt when a large dataset of student performance is available. We also show that incorporating contextual information, such as relations between exercises and student forget behavior, further improves the performance of deep learning models.
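The KT setup described in the abstract can be illustrated with a toy recurrent model in the spirit of Deep Knowledge Tracing: a student's sequence of (knowledge concept, correctness) interactions is fed through an RNN, which emits a per-concept probability of answering correctly at each step. The architecture, sizes, and weights below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyDKT:
    """Toy DKT-style model: a plain tanh RNN over one-hot interaction inputs."""

    def __init__(self, n_kcs, hidden, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 2 * n_kcs  # one-hot over (KC, incorrect/correct) pairs
        self.Wx = rng.normal(0, 0.1, (hidden, in_dim))
        self.Wh = rng.normal(0, 0.1, (hidden, hidden))
        self.Wy = rng.normal(0, 0.1, (n_kcs, hidden))
        self.n_kcs, self.hidden = n_kcs, hidden

    def forward(self, interactions):
        """interactions: list of (kc_id, correct) pairs for one student."""
        h = np.zeros(self.hidden)
        preds = []
        for kc, correct in interactions:
            x = np.zeros(2 * self.n_kcs)
            x[kc + (self.n_kcs if correct else 0)] = 1.0
            h = np.tanh(self.Wx @ x + self.Wh @ h)  # recurrent state update
            preds.append(sigmoid(self.Wy @ h))      # P(correct) for every KC
        return np.array(preds)

model = TinyDKT(n_kcs=5, hidden=8)
probs = model.forward([(0, 1), (2, 0), (0, 1)])
print(probs.shape)  # one row of per-KC mastery estimates per interaction
```

In a real system the weights would be trained with a cross-entropy loss against the observed correctness of the next interaction; this sketch only shows the forward pass.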
Related papers
- KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [75.78948575957081]
Large language models (LLMs) usually rely on retrieval-augmented generation to exploit knowledge materials in an instant manner.
We propose KBAlign, an approach designed for efficient adaptation to downstream tasks involving knowledge bases.
Our method utilizes iterative training with self-annotated data such as Q&A pairs and revision suggestions, enabling the model to grasp the knowledge content efficiently.
arXiv Detail & Related papers (2024-11-22T08:21:03Z)
- Enhancing Deep Knowledge Tracing via Diffusion Models for Personalized Adaptive Learning [1.2248793682283963]
This study aims to tackle data shortage issues in student learning records to enhance DKT performance for personalized adaptive learning (PAL).
It employs TabDDPM, a diffusion model, to generate synthetic educational records that augment the training data for DKT.
The experimental results demonstrate that the data generated by TabDDPM significantly improves DKT performance.
arXiv Detail & Related papers (2024-04-25T00:23:20Z)
- A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models [26.294808618068146]
Knowledge tracing plays a crucial role in predicting students' future performance.
Deep neural networks (DNNs) have shown great potential in solving the KT problem.
However, there still exist some important challenges when applying deep learning techniques to model the KT process.
arXiv Detail & Related papers (2024-03-12T05:15:42Z)
- Advancing Deep Active Learning & Data Subset Selection: Unifying Principles with Information-Theory Intuitions [3.0539022029583953]
This thesis aims to enhance the practicality of deep learning by improving the label and training efficiency of deep learning models.
We investigate data subset selection techniques, specifically active learning and active sampling, grounded in information-theoretic principles.
arXiv Detail & Related papers (2024-01-09T01:41:36Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Interpreting Deep Knowledge Tracing Model on EdNet Dataset [67.81797777936868]
In this work, we perform similar tasks but on a large, newly available dataset called EdNet.
The preliminary experiment results show the effectiveness of the interpreting techniques.
arXiv Detail & Related papers (2021-10-31T07:18:59Z)
- Deep Graph Memory Networks for Forgetting-Robust Knowledge Tracing [5.648636668261282]
We propose a novel knowledge tracing model, the Deep Graph Memory Network (DGMN).
In this model, we incorporate a forget gating mechanism into an attention memory structure in order to capture forgetting behaviours.
This model has the capability of learning relationships between latent concepts from a dynamic latent concept graph.
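The forget-gating-over-attention-memory idea summarized above can be sketched as follows. This is an illustration only, not the DGMN implementation: the gate parameterization `Wf` and all sizes are hypothetical choices.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
n_slots, dim = 4, 6
memory = rng.normal(size=(n_slots, dim))  # memory slots for latent concepts
query = rng.normal(size=dim)              # embedding of the current exercise
Wf = rng.normal(0, 0.1, size=(dim, dim))  # hypothetical forget-gate weights

# 1) attention read: weight each memory slot by its match with the query
attn = softmax(memory @ query)
read = attn @ memory

# 2) forget gate: decay each slot's content based on the current exercise,
#    so knowledge that is not reinforced gradually fades from memory
forget = sigmoid(memory @ Wf @ query)  # per-slot retention factor in (0, 1)
memory = memory * forget[:, None]
```

The key design point is that forgetting is learned and input-dependent rather than a fixed exponential decay, which lets the model capture student-specific forgetting behaviours.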
arXiv Detail & Related papers (2021-08-18T12:04:10Z)
- On the Interpretability of Deep Learning Based Models for Knowledge Tracing [5.120837730908589]
Knowledge tracing allows Intelligent Tutoring Systems to infer which topics or skills a student has mastered.
Deep Learning based models like Deep Knowledge Tracing (DKT) and Dynamic Key-Value Memory Network (DKVMN) have achieved significant improvements.
However, these deep learning based models are not as interpretable as other models, because the decision-making process learned by deep neural networks is not wholly understood.
arXiv Detail & Related papers (2021-01-27T11:55:03Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques, partial uncertainty sampling and larger query sizes, that can speed up the active learning loop.
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt the post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret RNN-based DLKT model.
Experiment results show the feasibility using the LRP method for interpreting the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
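The layer-wise relevance propagation (LRP) method mentioned above redistributes a model's output score backwards through the network so each input receives a relevance value. The following is a minimal sketch of the LRP epsilon rule on a toy bias-free two-layer linear network, for intuition only; it is not the paper's RNN-specific procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # toy input features
W1 = rng.normal(size=(3, 4))    # first layer weights
W2 = rng.normal(size=(1, 3))    # second layer weights
eps = 1e-6

# forward pass (linear layers for simplicity)
a1 = W1 @ x
out = W2 @ a1

def lrp_linear(x_in, W, z_out, R_out, eps=1e-6):
    """LRP epsilon rule: redistribute output relevance R_out onto the inputs
    in proportion to each input's contribution x_in * W to the pre-activation."""
    s = R_out / (z_out + eps * np.sign(z_out))
    return x_in * (W.T @ s)

R1 = lrp_linear(a1, W2, out, out)  # relevance at the hidden layer
R0 = lrp_linear(x, W1, a1, R1)     # relevance at the input features

# for a bias-free network, total relevance is (approximately) conserved
print(np.allclose(R0.sum(), out.sum(), atol=1e-3))
```

The conservation property is what makes the resulting relevance scores interpretable as a decomposition of the prediction; in the DLKT setting, the inputs would be the student's past interactions rather than raw features.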
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.