BKT-LSTM: Efficient Student Modeling for knowledge tracing and student
performance prediction
- URL: http://arxiv.org/abs/2012.12218v2
- Date: Wed, 6 Jan 2021 03:46:09 GMT
- Title: BKT-LSTM: Efficient Student Modeling for knowledge tracing and student
performance prediction
- Authors: Sein Minn
- Abstract summary: We propose an efficient student model called BKT-LSTM.
It contains three meaningful components: individual \textit{skill mastery} assessed by BKT, \textit{ability profile} (learning transfer across skills) detected by k-means clustering, and \textit{problem difficulty}.
- Score: 0.24366811507669117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, we have seen a rapid rise in the usage of online
educational platforms. Personalized education has become crucially important
for future learning environments. Knowledge tracing (KT) refers to detecting
students' knowledge states and predicting their future performance from their
past outcomes, in order to provide adaptive instruction in Intelligent
Tutoring Systems (ITS). Bayesian Knowledge Tracing (BKT) captures the mastery
level of each skill with psychologically meaningful parameters and is widely
used in successful tutoring systems. However, because each skill model is
learned independently, BKT cannot detect learning transfer across skills, and
it shows lower performance in student performance prediction. Recent KT models
based on deep neural networks show impressive predictive power, but this comes
at a price: tens of thousands of neural-network parameters do not admit a
psychologically meaningful interpretation grounded in cognitive theory. In
this paper, we propose an efficient student model called BKT-LSTM. It contains
three meaningful components: individual \textit{skill mastery} assessed by BKT,
\textit{ability profile} (learning transfer across skills) detected by k-means
clustering, and \textit{problem difficulty}. All three components are taken
into account when predicting a student's future performance by leveraging the
predictive power of an LSTM. BKT-LSTM outperforms state-of-the-art student
models in student performance prediction by using these meaningful features
instead of the binary past-interaction values used in DKT. We also conduct
ablation studies on each BKT-LSTM component to examine its value; each
component contributes significantly to student performance prediction. Thus,
BKT-LSTM has the potential to provide adaptive and personalized instruction in
real-world educational systems.
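For context, standard BKT tracks a per-skill mastery probability with four
psychologically meaningful parameters: prior $P(L_0)$, learn rate $P(T)$,
guess $P(G)$, and slip $P(S)$. The abstract refers to this update without
spelling it out; the classic formulation is:

```latex
% Posterior mastery after observing the response at step t
P(L_t \mid \text{correct}) = \frac{P(L_t)\,\bigl(1 - P(S)\bigr)}
    {P(L_t)\,\bigl(1 - P(S)\bigr) + \bigl(1 - P(L_t)\bigr)\,P(G)}
P(L_t \mid \text{wrong}) = \frac{P(L_t)\,P(S)}
    {P(L_t)\,P(S) + \bigl(1 - P(L_t)\bigr)\,\bigl(1 - P(G)\bigr)}
% Learning transition to the next step
P(L_{t+1}) = P(L_t \mid \text{obs}) + \bigl(1 - P(L_t \mid \text{obs})\bigr)\,P(T)
```

And here is a minimal, hypothetical sketch of how the three features could be
computed and fed to an LSTM, assuming scikit-learn's KMeans for the ability
profile and a small PyTorch LSTM as the predictor. All names, parameter
values, and toy data are illustrative; this mirrors the data flow described
in the abstract, not the author's implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def bkt_mastery(responses, prior=0.3, learn=0.2, guess=0.2, slip=0.1):
    """Standard per-skill BKT: P(mastered) after each 0/1 response."""
    p, trace = prior, []
    for correct in responses:
        if correct:  # posterior given a correct answer
            post = p * (1 - slip) / (p * (1 - slip) + (1 - p) * guess)
        else:        # posterior given an incorrect answer
            post = p * slip / (p * slip + (1 - p) * (1 - guess))
        p = post + (1 - post) * learn  # learning transition P(T)
        trace.append(p)
    return trace

# Ability profile: cluster students by per-skill success rates; the cluster
# id acts as a coarse "learning transfer" feature (toy data below).
rng = np.random.default_rng(0)
success_rates = rng.random((100, 12))  # 100 students x 12 skills
ability = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(success_rates)

# Problem difficulty: in practice the empirical failure rate per item;
# random toy values for 50 items here.
difficulty = rng.random(50)

class BKTLSTM(nn.Module):
    """LSTM over per-step features [skill mastery, ability profile, difficulty]."""
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h)).squeeze(-1)  # P(correct) per step

# Toy forward pass: one student answering 5 items of a single skill.
mastery = bkt_mastery([1, 0, 1, 1, 0])
steps = [[m, float(ability[0]), float(difficulty[0])] for m in mastery]
feats = torch.tensor([steps], dtype=torch.float32)  # shape (1, 5, 3)
print(BKTLSTM()(feats))  # predicted P(correct) at each step
```

In the paper itself the BKT parameters are fit per skill from data, the
ability profile is re-estimated over time intervals, and difficulty comes
from item statistics; the sketch only mirrors the data flow (a one-hot
encoding of the cluster id would also be more typical than the raw integer
used here).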
Related papers
- Personalized Knowledge Tracing through Student Representation Reconstruction and Class Imbalance Mitigation [32.52262417461651]
Knowledge tracing is a technique that predicts students' future performance by analyzing their learning process.
Recent studies have achieved significant progress by leveraging powerful deep neural networks.
We propose PKT, a novel approach for personalized knowledge tracing.
arXiv Detail & Related papers (2024-09-10T07:02:46Z)
- SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105]
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
We propose a Structure-aware Inductive Knowledge Tracing model with a large language model (dubbed SINKT).
SINKT predicts the student's response to the target question by interacting with the student's knowledge state and the question representation.
arXiv Detail & Related papers (2024-07-01T12:44:52Z)
- What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights [67.72413262980272]
Severe data imbalance naturally exists among web-scale vision-language datasets.
We find that CLIP pre-trained on such data exhibits notable robustness to the imbalance compared to supervised learning.
The robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts.
arXiv Detail & Related papers (2024-05-31T17:57:24Z)
- DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning [75.68193159293425]
In-context learning (ICL) allows transformer-based language models to learn a specific task with a few "task demonstrations" without updating their parameters.
We propose an influence function-based attribution technique, DETAIL, that addresses the specific characteristics of ICL.
We experimentally demonstrate the wide applicability of DETAIL by showing that attribution scores obtained on white-box models transfer to black-box models and improve model performance.
arXiv Detail & Related papers (2024-05-22T15:52:52Z)
- A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models [26.294808618068146]
Knowledge tracing plays a crucial role in predicting students' future performance.
Deep neural networks (DNNs) have shown great potential in solving the KT problem.
However, there still exist some important challenges when applying deep learning techniques to model the KT process.
arXiv Detail & Related papers (2024-03-12T05:15:42Z)
- DKT-STDRL: Spatial and Temporal Representation Learning Enhanced Deep Knowledge Tracing for Learning Performance Prediction [11.75131482747055]
The DKT-STDRL model uses CNN to extract the spatial feature information of students' exercise sequences.
The BiLSTM part extracts the temporal features from the joint learning features to obtain the prediction information of whether the students answer correctly at the next time step.
Experiments on the public education datasets ASSISTment2009, ASSISTment2015, Synthetic-5, ASSISTchall, and Statics2011 show that DKT-STDRL achieves better prediction performance than DKT and CKT.
arXiv Detail & Related papers (2023-02-15T09:23:21Z)
- Interpretable Knowledge Tracing: Simple and Efficient Student Modeling with Causal Relations [21.74631969428855]
Interpretable Knowledge Tracing (IKT) is a simple model that relies on three meaningful latent features.
IKT predicts future student performance using a Tree-Augmented Naive Bayes (TAN) classifier.
IKT has great potential for providing adaptive and personalized instructions with causal reasoning in real-world educational systems.
arXiv Detail & Related papers (2021-12-15T19:05:48Z)
- Learning by Distillation: A Self-Supervised Learning Framework for Optical Flow Estimation [71.76008290101214]
DistillFlow is a knowledge distillation approach to learning optical flow.
It achieves state-of-the-art unsupervised learning performance on both KITTI and Sintel datasets.
Our models ranked 1st among all monocular methods on the KITTI 2015 benchmark, and outperform all published methods on the Sintel Final benchmark.
arXiv Detail & Related papers (2021-06-08T09:13:34Z)
- Context-Aware Attentive Knowledge Tracing [21.397976659857793]
We propose attentive knowledge tracing, which couples flexible attention-based neural network models with a series of novel, interpretable model components.
AKT uses a novel monotonic attention mechanism that relates a learner's future responses to assessment questions to their past responses.
We show that AKT outperforms existing KT methods (by up to 6% in AUC in some cases) on predicting future learner responses.
arXiv Detail & Related papers (2020-07-24T02:45:43Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt the post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model.
Experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
- Efficient Crowd Counting via Structured Knowledge Transfer [122.30417437707759]
Crowd counting is an application-oriented task and its inference efficiency is crucial for real-world applications.
We propose a novel Structured Knowledge Transfer framework to generate a lightweight but still highly effective student network.
Our models obtain at least a 6.5× speed-up on an Nvidia 1080 GPU and even achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-03-23T08:05:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.