Do We Fully Understand Students' Knowledge States? Identifying and
Mitigating Answer Bias in Knowledge Tracing
- URL: http://arxiv.org/abs/2308.07779v2
- Date: Sat, 9 Dec 2023 03:01:33 GMT
- Title: Do We Fully Understand Students' Knowledge States? Identifying and
Mitigating Answer Bias in Knowledge Tracing
- Authors: Chaoran Cui, Hebo Ma, Chen Zhang, Chunyun Zhang, Yumo Yao, Meng Chen,
Yuling Ma
- Abstract summary: Knowledge tracing aims to monitor students' evolving knowledge states through their learning interactions with concept-related questions.
There is a common phenomenon of answer bias, i.e., a highly unbalanced distribution of correct and incorrect answers for each question.
Existing models tend to memorize the answer bias as a shortcut for achieving high prediction performance in KT.
- Score: 12.31363929361146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge tracing (KT) aims to monitor students' evolving knowledge states
through their learning interactions with concept-related questions, and can be
indirectly evaluated by predicting how students will perform on future
questions. In this paper, we observe that there is a common phenomenon of
answer bias, i.e., a highly unbalanced distribution of correct and incorrect
answers for each question. Existing models tend to memorize the answer bias as
a shortcut for achieving high prediction performance in KT, thereby failing to
fully understand students' knowledge states. To address this issue, we approach
the KT task from a causality perspective. A causal graph of KT is first
established, from which we identify that the impact of answer bias lies in the
direct causal effect of questions on students' responses. A novel
COunterfactual REasoning (CORE) framework for KT is further proposed, which
separately captures the total causal effect and direct causal effect during
training, and mitigates answer bias by subtracting the latter from the former
in testing. The CORE framework is applicable to various existing KT models, and
we implement it based on the prevailing DKT, DKVMN, and AKT models,
respectively. Extensive experiments on three benchmark datasets demonstrate the
effectiveness of CORE in making debiased inferences for KT. We have released
our code at https://github.com/lucky7-code/CORE.
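The test-time subtraction described in the abstract can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not the released implementation: the two-branch setup (a full model producing the total-effect logit and a question-only branch capturing the answer-bias shortcut) and all toy values are assumptions.

```python
import numpy as np

def sigmoid(x):
    """Logistic function mapping logits to probabilities."""
    return 1.0 / (1.0 + np.exp(-x))

def core_debias(total_effect_logit, direct_effect_logit):
    """Counterfactual reasoning step: subtract the direct causal effect of
    the question (the answer-bias shortcut) from the total causal effect."""
    return total_effect_logit - direct_effect_logit

# Toy logits for two future questions with a strong correct-answer bias.
total = np.array([2.5, 1.0])    # full model: knowledge state + question
direct = np.array([2.0, 0.2])   # question-only branch: captures answer bias

p_naive = sigmoid(total)                          # biased prediction
p_debiased = sigmoid(core_debias(total, direct))  # bias-mitigated prediction
print(p_naive, p_debiased)
```

Because the question-only logits are positive for a correct-answer-biased question, the debiased probabilities are pulled back toward what the knowledge state alone supports.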
Related papers
- Automated Knowledge Concept Annotation and Question Representation Learning for Knowledge Tracing [59.480951050911436]
We present KCQRL, a framework for automated knowledge concept annotation and question representation learning.
We demonstrate the effectiveness of KCQRL across 15 KT algorithms on two large real-world Math learning datasets.
arXiv Detail & Related papers (2024-10-02T16:37:19Z)
- Enhancing Knowledge Tracing with Concept Map and Response Disentanglement
We propose the Concept map-driven Response disentanglement method for enhancing Knowledge Tracing (CRKT) model.
CRKT benefits KT by directly leveraging answer choices--beyond merely identifying correct or incorrect answers--to distinguish responses with different incorrect choices.
We further introduce the novel use of unchosen responses by employing disentangled representations to get insights from options not selected by students.
arXiv Detail & Related papers (2024-08-23T11:25:56Z)
- SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105]
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
We propose SINKT, a Structure-aware Inductive Knowledge Tracing model with a large language model.
SINKT predicts the student's response to the target question by interacting with the student's knowledge state and the question representation.
arXiv Detail & Related papers (2024-07-01T12:44:52Z)
- Forgetting-aware Linear Bias for Attentive Knowledge Tracing [7.87348193562399]
This paper proposes Forgetting-aware Linear Bias (FoLiBi) to reflect forgetting behavior as a linear bias.
FoLiBi plugged with several KT models yields a consistent improvement of up to 2.58% in AUC over state-of-the-art KT models on four benchmark datasets.
arXiv Detail & Related papers (2023-09-26T09:48:30Z)
- Towards Debiasing Frame Length Bias in Text-Video Retrieval via Causal Intervention [72.12974259966592]
We present a unique and systematic study of a temporal bias due to frame length discrepancy between training and test sets of trimmed video clips.
We propose a causal debiasing approach and perform extensive experiments and ablation studies on the Epic-Kitchens-100, YouCook2, and MSR-VTT datasets.
arXiv Detail & Related papers (2023-09-17T15:58:27Z)
- Quiz-based Knowledge Tracing [61.9152637457605]
Knowledge tracing aims to assess individuals' evolving knowledge states according to their learning interactions.
QKT achieves state-of-the-art performance compared to existing methods.
arXiv Detail & Related papers (2023-04-05T12:48:42Z)
- Enhancing Deep Knowledge Tracing with Auxiliary Tasks [24.780533765606922]
We propose AT-DKT to improve the prediction performance of the original deep knowledge tracing model.
We conduct comprehensive experiments on three real-world educational datasets and compare the proposed approach to both deep sequential KT models and non-sequential models.
arXiv Detail & Related papers (2023-02-14T08:21:37Z)
- On student-teacher deviations in distillation: does it pay to disobey? [54.908344098305804]
Knowledge distillation has been widely used to improve the test accuracy of a "student" network.
Despite being trained to fit the teacher's probabilities, the student may not only significantly deviate from the teacher probabilities, but may also outdo the teacher in performance.
arXiv Detail & Related papers (2023-01-30T14:25:02Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- GIKT: A Graph-based Interaction Model for Knowledge Tracing [36.07642261246016]
We propose a Graph-based Interaction model for Knowledge Tracing (GIKT) to tackle the above problems.
More specifically, GIKT utilizes a graph convolutional network (GCN) to incorporate question-skill correlations.
Experiments on three datasets demonstrate that GIKT achieves the new state-of-the-art performance, with at least 1% absolute AUC improvement.
arXiv Detail & Related papers (2020-09-13T12:50:32Z)
- qDKT: Question-centric Deep Knowledge Tracing [29.431121650577396]
We introduce qDKT, a variant of DKT that models every learner's success probability on individual questions over time.
qDKT incorporates graph Laplacian regularization to smooth predictions under each skill.
Experiments on several real-world datasets show that qDKT achieves state-of-the-art performance on predicting learner outcomes.
arXiv Detail & Related papers (2020-05-25T23:43:55Z)
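The graph Laplacian regularization mentioned for qDKT can be sketched as a smoothness penalty p^T L p over a question graph, with edges between questions that share a skill. This is a minimal NumPy illustration under stated assumptions; the toy graph and prediction values are not from the paper.

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A for a question-similarity graph, where
    questions are connected when they belong to the same skill."""
    degree = np.diag(adj.sum(axis=1))
    return degree - adj

def laplacian_penalty(preds, adj):
    """Smoothness regularizer p^T L p: equals the sum of squared
    differences of predictions across connected question pairs."""
    return float(preds @ laplacian(adj) @ preds)

# Toy graph: questions 0 and 1 share a skill; question 2 stands alone.
adj = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 0, 0]], dtype=float)

smooth = np.array([0.7, 0.7, 0.2])   # similar predictions within the skill
rough = np.array([0.9, 0.1, 0.2])    # divergent predictions within the skill
print(laplacian_penalty(smooth, adj), laplacian_penalty(rough, adj))
```

Adding this penalty to the training loss pushes success probabilities on same-skill questions toward each other, which is the smoothing effect the summary describes.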
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.