HiTSKT: A Hierarchical Transformer Model for Session-Aware Knowledge
Tracing
- URL: http://arxiv.org/abs/2212.12139v3
- Date: Tue, 6 Jun 2023 13:05:01 GMT
- Title: HiTSKT: A Hierarchical Transformer Model for Session-Aware Knowledge
Tracing
- Authors: Fucai Ke, Weiqing Wang, Weicong Tan, Lan Du, Yuan Jin, Yujin Huang and
Hongzhi Yin
- Score: 35.02243127325724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge tracing (KT) aims to leverage students' learning histories to
estimate their mastery levels on a set of pre-defined skills, based on which
the corresponding future performance can be accurately predicted. As an
important way of providing personalized experience for online education, KT has
gained increased attention in recent years. In practice, a student's learning
history comprises answers to sets of massed questions, each known as a session,
rather than merely being a sequence of independent answers. Theoretically,
within and across these sessions, students' learning dynamics can be very
different. Therefore, how to effectively model the dynamics of students'
knowledge states within and across the sessions is crucial for handling the KT
problem. Most existing KT models treat a student's learning records as a single
continuous sequence, without capturing the sessional shift of students'
knowledge state. To address this issue, we propose a novel hierarchical
transformer model, named HiTSKT, which comprises an interaction(-level) encoder
to capture the knowledge a student acquires within a session and a
session(-level) encoder to summarise acquired knowledge across past
sessions. To predict an interaction in the current session, a knowledge
sessions. To predict an interaction in the current session, a knowledge
retriever integrates the summarised past-session knowledge with the previous
interactions' information into proper knowledge representations. These
representations are then used to compute the student's current knowledge state.
Additionally, to model the student's long-term forgetting behaviour across
sessions, a power-law-decay attention mechanism is designed and deployed in the
session encoder, allowing it to place greater emphasis on recent sessions.
Extensive experiments on three public datasets demonstrate that HiTSKT achieves
new state-of-the-art performance on all three datasets, outperforming six
state-of-the-art KT models.
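The power-law-decay attention described in the abstract can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's exact formulation: the decay exponent `alpha`, the distance definition, and the way the decay factor modulates the softmax are all assumptions for illustration.

```python
import numpy as np

def power_law_decay_attention(scores, alpha=0.5):
    """Re-weight session-level attention so that older sessions
    receive power-law-decayed weight (illustrative sketch).

    scores: (n_sessions,) raw attention logits for past sessions,
            ordered oldest -> newest.
    alpha:  hypothetical decay exponent; larger values forget faster.
    """
    n = scores.shape[0]
    # Distance of each past session from the current one:
    # the newest session has distance 1, the oldest has distance n.
    distance = np.arange(n, 0, -1, dtype=float)
    decay = distance ** (-alpha)        # power-law decay factor
    weights = np.exp(scores) * decay    # modulate the softmax numerator
    return weights / weights.sum()      # normalise to a distribution

# With uniform logits, recent sessions receive more attention mass.
w = power_law_decay_attention(np.zeros(4), alpha=1.0)
```

Under this sketch, equal logits over four past sessions yield strictly increasing weights toward the most recent session, which matches the stated goal of emphasising recent sessions while retaining a heavy-tailed (rather than exponentially vanishing) memory of older ones.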
Related papers
- Exploiting the Semantic Knowledge of Pre-trained Text-Encoders for Continual Learning [70.64617500380287]
Continual learning allows models to learn from new data while retaining previously learned knowledge.
The semantic knowledge available in the label information of the images, offers important semantic information that can be related with previously acquired knowledge of semantic classes.
We propose integrating semantic guidance within and across tasks by capturing semantic similarity using text embeddings.
arXiv Detail & Related papers (2024-08-02T07:51:44Z) - SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105]
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
The authors propose a Structure-aware Inductive Knowledge Tracing model with a large language model (dubbed SINKT).
SINKT predicts the student's response to the target question by interacting with the student's knowledge state and the question representation.
arXiv Detail & Related papers (2024-07-01T12:44:52Z) - A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models [26.294808618068146]
Knowledge tracing plays a crucial role in predicting students' future performance.
Deep neural networks (DNNs) have shown great potential in solving the KT problem.
However, there still exist some important challenges when applying deep learning techniques to model the KT process.
arXiv Detail & Related papers (2024-03-12T05:15:42Z) - CTP: Towards Vision-Language Continual Pretraining via Compatible
Momentum Contrast and Topology Preservation [128.00940554196976]
Vision-Language Continual Pretraining (VLCP) has shown impressive results on diverse downstream tasks by offline training on large-scale datasets.
To support the study of VLCP, we first contribute a comprehensive and unified benchmark dataset P9D.
The data from each industry as an independent task supports continual learning and conforms to the real-world long-tail nature to simulate pretraining on web data.
arXiv Detail & Related papers (2023-08-14T13:53:18Z) - Quiz-based Knowledge Tracing [61.9152637457605]
Knowledge tracing aims to assess individuals' evolving knowledge states according to their learning interactions.
QKT achieves state-of-the-art performance compared to existing methods.
arXiv Detail & Related papers (2023-04-05T12:48:42Z) - DKT-STDRL: Spatial and Temporal Representation Learning Enhanced Deep
Knowledge Tracing for Learning Performance Prediction [11.75131482747055]
The DKT-STDRL model uses CNN to extract the spatial feature information of students' exercise sequences.
The BiLSTM part extracts the temporal features from the joint learning features to obtain the prediction information of whether the students answer correctly at the next time step.
Experiments on the public education datasets ASSISTment2009, ASSISTment2015, Synthetic-5, ASSISTchall, and Statics2011 show that DKT-STDRL achieves better prediction performance than DKT and CKT.
arXiv Detail & Related papers (2023-02-15T09:23:21Z) - Transition-Aware Multi-Activity Knowledge Tracing [2.9778695679660188]
Knowledge tracing aims to model student knowledge state given the student's sequence of learning activities.
Current KT solutions are not well suited to modeling student learning from non-assessed learning activities.
We propose Transition-Aware Multi-activity Knowledge Tracing (TAMKOT)
arXiv Detail & Related papers (2023-01-26T21:49:24Z) - Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z) - LANA: Towards Personalized Deep Knowledge Tracing Through
Distinguishable Interactive Sequences [21.67751919579854]
We propose Leveled Attentive KNowledge TrAcing (LANA) to predict students' responses to future questions.
It uses a novel student-related features extractor (SRFE) to distill students' unique inherent properties from their respective interactive sequences.
With a pivot module that reconstructs the decoder for individual students and leveled learning that provides specialized encoders for groups of students, personalized DKT is achieved.
arXiv Detail & Related papers (2021-04-21T02:57:42Z)