Disentangled Knowledge Tracing for Alleviating Cognitive Bias
- URL: http://arxiv.org/abs/2503.02539v1
- Date: Tue, 04 Mar 2025 12:04:13 GMT
- Title: Disentangled Knowledge Tracing for Alleviating Cognitive Bias
- Authors: Yiyun Zhou, Zheqi Lv, Shengyu Zhang, Jingyuan Chen
- Abstract summary: We propose a Disentangled Knowledge Tracing (DisKT) model, which models students' familiar and unfamiliar abilities based on causal effects. DisKT significantly alleviates cognitive bias and outperforms 16 baselines in evaluation accuracy.
- Score: 7.145106976584109
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the realm of Intelligent Tutoring Systems (ITS), the accurate assessment of students' knowledge states through Knowledge Tracing (KT) is crucial for personalized learning. However, due to data bias, i.e., the unbalanced distribution of question groups (e.g., concepts), conventional KT models are plagued by cognitive bias, which tends to result in cognitive underload for overperformers and cognitive overload for underperformers. More seriously, this bias is amplified by the exercise recommendations of the ITS. After delving into the causal relations in KT models, we identify the main cause as the confounder effect of students' historical correct-rate distribution over question groups on the student representation and prediction score. To this end, we propose a Disentangled Knowledge Tracing (DisKT) model, which separately models students' familiar and unfamiliar abilities based on causal effects and eliminates the impact of the confounder in the student representation within the model. Additionally, to shield the contradictory psychology (e.g., guessing and mistaking) in the students' biased data, DisKT introduces a contradiction attention mechanism. Furthermore, DisKT enhances the interpretability of the model predictions by integrating a variant of Item Response Theory. Experimental results on 11 benchmarks and 3 synthesized datasets with different bias strengths demonstrate that DisKT significantly alleviates cognitive bias and outperforms 16 baselines in evaluation accuracy.
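The abstract above gives no implementation details, so the following is only a minimal PyTorch sketch of the general idea it describes: model a student's familiar ability from correctly answered interactions and their unfamiliar ability from incorrectly answered ones, then combine them through an IRT-style head (sigmoid of ability minus difficulty). The class name `DisentangledKTSketch`, all dimensions, and the attention layout are illustrative assumptions, not the authors' DisKT architecture; the confounder removal and the contradiction attention mechanism mentioned in the abstract are not reproduced here.

```python
# Hypothetical sketch only: two attention branches over familiar/unfamiliar
# interactions plus an IRT-style output head. Not the DisKT implementation.
import torch
import torch.nn as nn


class DisentangledKTSketch(nn.Module):
    def __init__(self, n_questions: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.q_emb = nn.Embedding(n_questions, d_model)
        self.familiar_emb = nn.Embedding(n_questions, d_model)    # correct interactions
        self.unfamiliar_emb = nn.Embedding(n_questions, d_model)  # incorrect interactions
        self.familiar_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.unfamiliar_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.theta_head = nn.Linear(2 * d_model, 1)  # student ability
        self.beta_head = nn.Linear(d_model, 1)       # question difficulty

    def forward(self, q_ids: torch.Tensor, responses: torch.Tensor) -> torch.Tensor:
        # q_ids, responses: (batch, seq_len); responses are 0/1 correctness labels.
        q = self.q_emb(q_ids)
        answered_right = responses.unsqueeze(-1).bool()
        fam = torch.where(answered_right, self.familiar_emb(q_ids), torch.zeros_like(q))
        unf = torch.where(~answered_right, self.unfamiliar_emb(q_ids), torch.zeros_like(q))
        seq_len = q_ids.size(1)
        # Causal mask (True = blocked). NOTE: for brevity the current interaction is
        # still visible to itself; a real KT model would shift the history by one step.
        causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                       device=q_ids.device), diagonal=1)
        fam_state, _ = self.familiar_attn(q, fam, fam, attn_mask=causal)
        unf_state, _ = self.unfamiliar_attn(q, unf, unf, attn_mask=causal)
        theta = self.theta_head(torch.cat([fam_state, unf_state], dim=-1))
        beta = self.beta_head(q)
        # IRT-style prediction: probability of answering correctly.
        return torch.sigmoid(theta - beta).squeeze(-1)


model = DisentangledKTSketch(n_questions=100)
probs = model(torch.randint(0, 100, (2, 5)), torch.randint(0, 2, (2, 5)))  # (2, 5)
```

The IRT-style head is what gives this kind of model its interpretability: the prediction decomposes into an ability term minus a difficulty term, as the abstract notes.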
Related papers
- BiasConnect: Investigating Bias Interactions in Text-to-Image Models [73.76853483463836]
We introduce BiasConnect, a novel tool designed to analyze and quantify bias interactions in Text-to-Image models.
Our method provides empirical estimates that indicate how other bias dimensions shift toward or away from an ideal distribution when a given bias is modified.
We demonstrate the utility of BiasConnect for selecting optimal bias mitigation axes, comparing different TTI models on the dependencies they learn, and understanding the amplification of intersectional societal biases in TTI models.
arXiv Detail & Related papers (2025-03-12T19:01:41Z) - DASKT: A Dynamic Affect Simulation Method for Knowledge Tracing [51.665582274736785]
Knowledge Tracing (KT) predicts students' future performance from their historical interactions, and understanding students' affective states can enhance the effectiveness of KT.
We propose Affect Dynamic Knowledge Tracing (DASKT) to explore the impact of various student affective states on their knowledge states.
Our research highlights a promising avenue for future studies, focusing on achieving high interpretability and accuracy.
arXiv Detail & Related papers (2025-01-18T10:02:10Z) - Personalized Knowledge Tracing through Student Representation Reconstruction and Class Imbalance Mitigation [32.52262417461651]
Knowledge tracing is a technique that predicts students' future performance by analyzing their learning process.
Recent studies have achieved significant progress by leveraging powerful deep neural networks.
We propose PKT, a novel approach for personalized knowledge tracing.
arXiv Detail & Related papers (2024-09-10T07:02:46Z) - Do We Fully Understand Students' Knowledge States? Identifying and Mitigating Answer Bias in Knowledge Tracing [12.31363929361146]
Knowledge tracing aims to monitor students' evolving knowledge states through their learning interactions with concept-related questions.
There is a common phenomenon of answer bias, i.e., a highly unbalanced distribution of correct and incorrect answers for each question.
Existing models tend to memorize the answer bias as a shortcut for achieving high prediction performance in KT.
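To make the answer-bias phenomenon concrete (this snippet is not from the paper), the following Python fragment computes the per-question correct rate from a toy interaction log and flags questions whose answers are so unbalanced that a model could score well simply by memorizing the majority answer. The record layout and the 0.9 threshold are arbitrary choices.

```python
# Illustrative only: measure per-question answer bias in a small interaction log.
from collections import defaultdict

interactions = [  # (student_id, question_id, correct)
    ("s1", "q1", 1), ("s2", "q1", 1), ("s3", "q1", 1), ("s4", "q1", 1),
    ("s1", "q2", 0), ("s2", "q2", 1), ("s3", "q2", 0),
]

totals, corrects = defaultdict(int), defaultdict(int)
for _, q, y in interactions:
    totals[q] += 1
    corrects[q] += y

for q in sorted(totals):
    rate = corrects[q] / totals[q]
    # A rate near 0 or 1 lets a model score well by memorizing the majority
    # answer instead of actually tracking the student's knowledge state.
    flag = "  <- heavily unbalanced" if max(rate, 1 - rate) >= 0.9 else ""
    print(f"{q}: correct rate {rate:.2f}{flag}")
```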
arXiv Detail & Related papers (2023-08-15T13:56:29Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z) - Differentiating Student Feedbacks for Knowledge Tracing [28.669001606806525]
We propose a framework to reweight the contribution of different responses based on their discrimination in training.
We also introduce an adaptive predictive score fusion technique to maintain accuracy on less discriminative responses.
arXiv Detail & Related papers (2022-12-16T13:55:07Z) - General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z) - Interpretable Knowledge Tracing: Simple and Efficient Student Modeling with Causal Relations [21.74631969428855]
Interpretable Knowledge Tracing (IKT) is a simple model that relies on three meaningful latent features.
IKT's prediction of future student performance is made using a Tree-Augmented Naive Bayes (TAN) classifier.
IKT has great potential for providing adaptive and personalized instructions with causal reasoning in real-world educational systems.
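The summary names a Tree-Augmented Naive Bayes classifier but does not show how such a model scores a response, so here is a tiny hand-written TAN example in Python. The feature names, tree structure, and probability tables are invented for illustration and are not taken from IKT.

```python
# Toy TAN-style scoring sketch (illustrative; not the IKT implementation).
# In a Tree-Augmented Naive Bayes model each feature is conditioned on the
# class and on at most one other feature.

prior = {1: 0.6, 0: 0.4}  # P(correct)

# Root feature "mastery": parent is the class only -> P(mastery | correct).
p_mastery = {(1, "high"): 0.7, (1, "low"): 0.3,
             (0, "high"): 0.2, (0, "low"): 0.8}

# Child feature "difficulty": parents are the class and "mastery"
# -> P(difficulty | correct, mastery).
p_difficulty = {(1, "high", "easy"): 0.6, (1, "high", "hard"): 0.4,
                (1, "low", "easy"): 0.8, (1, "low", "hard"): 0.2,
                (0, "high", "easy"): 0.3, (0, "high", "hard"): 0.7,
                (0, "low", "easy"): 0.5, (0, "low", "hard"): 0.5}


def predict(mastery: str, difficulty: str) -> float:
    """Return P(correct = 1 | mastery, difficulty) under the toy TAN."""
    scores = {}
    for c in (0, 1):
        scores[c] = prior[c] * p_mastery[(c, mastery)] * p_difficulty[(c, mastery, difficulty)]
    return scores[1] / (scores[0] + scores[1])


print(predict("high", "hard"))  # ~0.75 for this toy table
```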
arXiv Detail & Related papers (2021-12-15T19:05:48Z) - Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
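The cross-sample neural mutual information estimator is not spelled out above; as a rough illustration of what a neural MI estimator looks like, here is a generic MINE/Donsker-Varadhan lower-bound sketch in PyTorch. The critic network shape and the shuffle-based marginal sampling are assumptions and are not claimed to match CSAD.

```python
# Generic MINE-style (Donsker-Varadhan) mutual information lower bound.
# Illustrative only; not claimed to be the CSAD estimator.
import torch
import torch.nn as nn


class MILowerBound(nn.Module):
    def __init__(self, dim_x: int, dim_z: int, hidden: int = 64):
        super().__init__()
        self.critic = nn.Sequential(
            nn.Linear(dim_x + dim_z, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Joint samples: paired (x_i, z_i); marginal samples: x_i paired with shuffled z.
        t_joint = self.critic(torch.cat([x, z], dim=-1)).squeeze(-1)
        z_shuffled = z[torch.randperm(z.size(0))]
        t_marginal = self.critic(torch.cat([x, z_shuffled], dim=-1)).squeeze(-1)
        n = torch.tensor(float(t_marginal.size(0)))
        # I(X; Z) >= E_joint[T] - log E_marginal[exp(T)]
        return t_joint.mean() - (torch.logsumexp(t_marginal, dim=0) - torch.log(n))


# Maximizing this bound trains the estimator; an adversarial debiasing setup would
# then minimize the estimated MI between task and bias representations.
est = MILowerBound(dim_x=8, dim_z=8)
mi_hat = est(torch.randn(32, 8), torch.randn(32, 8))
```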
arXiv Detail & Related papers (2021-08-11T21:17:02Z) - BKT-LSTM: Efficient Student Modeling for knowledge tracing and student performance prediction [0.24366811507669117]
We propose an efficient student model called BKT-LSTM.
It contains three meaningful components: individual skill mastery assessed by BKT, ability profile (learning transfer across skills) detected by k-means clustering, and problem difficulty.
arXiv Detail & Related papers (2020-12-22T18:05:36Z)
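For readers unfamiliar with the "skill mastery assessed by BKT" component mentioned above, the standard Bayesian Knowledge Tracing update can be written in a few lines; the slip, guess, and learn parameter values below are arbitrary examples, not values from the paper.

```python
# Textbook Bayesian Knowledge Tracing update; parameter values are arbitrary examples.
def bkt_update(p_mastery: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2, learn: float = 0.15) -> float:
    """Return P(skill mastered) after observing one response on that skill."""
    if correct:
        # Bayes rule with P(correct | mastered) = 1 - slip, P(correct | not mastered) = guess.
        posterior = (p_mastery * (1 - slip)) / (p_mastery * (1 - slip) + (1 - p_mastery) * guess)
    else:
        posterior = (p_mastery * slip) / (p_mastery * slip + (1 - p_mastery) * (1 - guess))
    # Learning transition: an unmastered skill may become mastered after this practice step.
    return posterior + (1 - posterior) * learn


p = 0.3  # prior probability the skill is already mastered
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
    print(f"P(mastered) = {p:.3f}")
```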
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.