Multi-Factors Aware Dual-Attentional Knowledge Tracing
- URL: http://arxiv.org/abs/2108.04741v1
- Date: Tue, 10 Aug 2021 15:03:28 GMT
- Title: Multi-Factors Aware Dual-Attentional Knowledge Tracing
- Authors: Moyu Zhang (1), Xinning Zhu (1), Chunhong Zhang (1), Yang Ji (1), Feng
Pan (1), Changchuan Yin (1) ((1) Beijing University of Posts and
Telecommunications)
- Abstract summary: We propose Multi-Factors Aware Dual-Attentional model (MF-DAKT) to model students' learning progress.
To enrich question representations, we use a pre-training method to incorporate two kinds of question information.
We also apply a dual-attentional mechanism to differentiate contributions of factors and factor interactions to final prediction.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing demand for personalized learning, knowledge tracing, which traces students' knowledge states based on their historical practice records, has become important. Factor analysis methods mainly use two kinds of factors, related to students and to questions respectively, to model students' knowledge states. These methods use students' total number of attempts to model learning progress and hardly highlight the impact of the most recent relevant practices. Besides, current factor analysis methods ignore the rich information contained in questions. In this paper, we propose the Multi-Factors Aware Dual-Attentional model (MF-DAKT), which enriches question representations and utilizes multiple factors to model students' learning progress based on a dual-attentional mechanism. More specifically, we propose a novel student-related factor that records students' most recent attempts on relevant concepts, highlighting the impact of recent exercises. To enrich question representations, we use a pre-training method to incorporate two kinds of question information: questions' relations and difficulty levels. We also add a regularization term on questions' difficulty level to constrain pre-trained question representations during fine-tuning for student performance prediction. Moreover, we apply a dual-attentional mechanism to differentiate the contributions of factors and factor interactions to the final prediction in different practice records. Finally, we conduct experiments on several real-world datasets, and the results show that MF-DAKT outperforms existing knowledge tracing methods. We also conduct several studies to validate the effect of each component of MF-DAKT.
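To make the dual-attentional idea concrete, below is a minimal PyTorch sketch under stated assumptions: one attention module weighs individual factor embeddings (student, skills, attempt counts, the recent-attempt factor), a second weighs their pairwise interactions, and a difficulty head regularizes the fine-tuned question embedding toward its pre-trained difficulty level. All class and variable names (`DualAttentionKT`, `FactorAttention`, `diff_head`, `lambda_reg`) are hypothetical illustrations, not the authors' code; the actual MF-DAKT architecture is specified in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorAttention(nn.Module):
    """Softmax attention pooling over a set of vectors (factors or interactions)."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim) -> attention-weighted sum: (batch, dim)
        w = torch.softmax(self.score(x), dim=1)
        return (w * x).sum(dim=1)

class DualAttentionKT(nn.Module):
    """Hypothetical two-level attention over factors and their interactions."""
    def __init__(self, dim: int):
        super().__init__()
        self.factor_attn = FactorAttention(dim)  # weighs individual factors
        self.inter_attn = FactorAttention(dim)   # weighs factor interactions
        self.out = nn.Linear(2 * dim, 1)
        self.diff_head = nn.Linear(dim, 1)       # question embedding -> difficulty

    def forward(self, factors, q_emb, pretrained_diff, lambda_reg=0.1):
        # factors: (batch, F, dim) factor embeddings, e.g. student, skills,
        # attempt counts, and the recent-attempt factor; q_emb: (batch, dim).
        b, f, d = factors.shape
        i, j = torch.triu_indices(f, f, offset=1)
        interactions = factors[:, i, :] * factors[:, j, :]  # AFM-style pairwise products
        pooled = torch.cat([self.factor_attn(factors),
                            self.inter_attn(interactions)], dim=-1)
        logit = self.out(pooled).squeeze(-1)                 # correctness logit
        # Keep the fine-tuned question embedding consistent with its
        # pre-trained difficulty level (the regularization term above).
        reg = lambda_reg * F.mse_loss(self.diff_head(q_emb).squeeze(-1), pretrained_diff)
        return logit, reg
```

In training, `logit` would feed a binary cross-entropy loss on the observed response, with `reg` added so that the difficulty information acquired in pre-training is not washed out during fine-tuning.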
Related papers
- SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105]
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
The paper introduces a Structure-aware Inductive Knowledge Tracing model with a large language model (dubbed SINKT).
SINKT predicts the student's response to the target question by interacting with the student's knowledge state and the question representation.
arXiv Detail & Related papers (2024-07-01T12:44:52Z) - A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models [26.294808618068146]
Knowledge tracing plays a crucial role in predicting students' future performance.
Deep neural networks (DNNs) have shown great potential in solving the KT problem.
However, there still exist some important challenges when applying deep learning techniques to model the KT process.
arXiv Detail & Related papers (2024-03-12T05:15:42Z) - Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning (CL) aims to overcome catastrophic forgetting of former knowledge when learning new knowledge.
This paper presents a comprehensive survey of the latest advancements in pre-trained model (PTM)-based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z) - Coarse-to-Fine Knowledge-Enhanced Multi-Interest Learning Framework for
Multi-Behavior Recommendation [52.89816309759537]
Multiple types of behaviors (e.g., clicking, adding to cart, purchasing) widely exist in most real-world recommendation scenarios.
State-of-the-art multi-behavior models learn behavior dependencies indiscriminately, taking all historical interactions as input.
We propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning framework to learn shared and behavior-specific interests for different behaviors.
arXiv Detail & Related papers (2022-08-03T05:28:14Z) - Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables the information bottleneck (IB) to grasp the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z) - Learning Disentangled Representations for Counterfactual Regression via
Mutual Information Minimization [25.864029391642422]
We propose Disentangled Representations for Counterfactual Regression via Mutual Information Minimization (MIM-DRCFR).
We use a multi-task learning framework to share information when learning the latent factors and incorporate MI minimization criteria to ensure the independence of these factors.
Experiments including public benchmarks and real-world industrial user growth datasets demonstrate that our method performs much better than state-of-the-art methods.
arXiv Detail & Related papers (2022-06-02T12:49:41Z) - On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z) - Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need
in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper explores for the first time Bayesian deep learning on learner-based text posts with two methods: Monte Carlo Dropout and Variational Inference.
arXiv Detail & Related papers (2021-04-26T15:12:13Z) - RKT : Relation-Aware Self-Attention for Knowledge Tracing [2.9778695679660188]
We propose a novel Relation-aware self-attention model for Knowledge Tracing (RKT).
We introduce a relation-aware self-attention layer that incorporates the contextual information.
Our model outperforms state-of-the-art knowledge tracing methods.
arXiv Detail & Related papers (2020-08-28T16:47:03Z) - Context-Aware Attentive Knowledge Tracing [21.397976659857793]
We propose attentive knowledge tracing, which couples flexible attention-based neural network models with a series of novel, interpretable model components.
AKT uses a novel monotonic attention mechanism that relates a learner's future responses to assessment questions to their past responses, down-weighting practices that lie further in the past (a simplified sketch of this decay follows this list).
We show that AKT outperforms existing KT methods (by up to 6% in AUC in some cases) on predicting future learner responses.
arXiv Detail & Related papers (2020-07-24T02:45:43Z)
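As a rough illustration of that monotonic decay (a sketch only: AKT's actual mechanism uses a learned, context-aware distance between practices rather than the raw position gap used here), the hypothetical `MonotonicAttention` module below decays attention weights exponentially with how far back a past interaction lies.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicAttention(nn.Module):
    """Single-head self-attention whose weights decay exponentially with
    temporal distance, so recent practices dominate (illustrative only)."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.theta = nn.Parameter(torch.zeros(1))  # decay rate, kept positive below

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim) embeddings of past question-response pairs.
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5  # (batch, seq, seq)
        t = torch.arange(x.shape[1], device=x.device)
        gap = (t[:, None] - t[None, :]).clamp(min=0).float()   # steps back in time
        # Subtracting theta*gap before softmax multiplies each attention
        # weight by exp(-theta * gap): an exponential decay with distance.
        scores = scores - F.softplus(self.theta) * gap
        causal = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(causal, float("-inf"))     # attend to the past only
        return torch.softmax(scores, dim=-1) @ v
```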
This list is automatically generated from the titles and abstracts of the papers on this site.