Differentiating Student Feedbacks for Knowledge Tracing
- URL: http://arxiv.org/abs/2212.14695v1
- Date: Fri, 16 Dec 2022 13:55:07 GMT
- Title: Differentiating Student Feedbacks for Knowledge Tracing
- Authors: Jiajun Cui, Wei Zhang
- Abstract summary: We propose DR4KT for knowledge tracing, which reweights the contribution of different responses according to their discrimination during training.
To retain high prediction accuracy on low-discrimination responses after reweighting, DR4KT also introduces a discrimination-aware score fusion technique.
- Score: 5.176190855174938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In computer-aided education and intelligent tutoring systems, knowledge
tracing (KT) has attracted attention with the development of data-driven learning
methods; it aims to predict students' future performance from their past
question-response sequences in order to trace their knowledge states. However, current
deep learning approaches focus only on enhancing prediction accuracy and
neglect the discrimination imbalance of responses. That is, a considerable
proportion of question responses are weak at discriminating students' knowledge
states, yet they are weighted equally with more discriminative responses, which
hurts the ability to trace students' personalized knowledge states. To
tackle this issue, we propose DR4KT for knowledge tracing, which reweights the
contribution of different responses according to their discrimination during
training. To retain high prediction accuracy on low-discrimination
responses after reweighting, DR4KT also introduces a discrimination-aware score
fusion technique that properly combines students' knowledge mastery with the
characteristics of the questions themselves. Comprehensive experimental results show that
DR4KT, applied to four mainstream KT methods, significantly improves their
performance on three widely-used datasets.
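The reweighting and score-fusion ideas described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the power-law weighting, the normalization, and the linear fusion gate are all assumptions made for the sketch, and the discrimination scores are taken as given inputs in (0, 1].

```python
import numpy as np

def discrimination_weights(disc, gamma=1.0):
    """Map per-response discrimination scores in (0, 1] to training weights.

    Assumption for this sketch: higher-discrimination responses should
    contribute more to the loss, so scores are raised to a power gamma and
    renormalized so the mean weight stays 1.
    """
    w = np.asarray(disc, dtype=float) ** gamma
    return w / w.mean()

def reweighted_bce(y_true, y_pred, weights, eps=1e-7):
    """Binary cross-entropy where each response's loss term is reweighted."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    per_resp = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return float(np.mean(weights * per_resp))

def fused_score(mastery, question_score, disc):
    """Discrimination-aware score fusion (illustrative linear gate):
    rely on the student's knowledge-mastery estimate when a response is
    highly discriminative, and fall back on question-level statistics
    otherwise.
    """
    return disc * mastery + (1 - disc) * question_score
```

With `disc = 1.0` the fused prediction reduces to the mastery estimate alone; with `disc = 0.0` it falls back entirely on the question-level score, which is the intuition behind retaining accuracy on low-discrimination responses.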
Related papers
- RILe: Reinforced Imitation Learning [60.63173816209543]
Adversarial variants of Imitation Learning and Inverse Reinforcement Learning offer an alternative by learning policies from expert demonstrations via a discriminator.
We propose RILe, a teacher-student system that achieves both robustness to imperfect data and efficiency.
arXiv Detail & Related papers (2024-06-12T17:56:31Z)
- Explainable Few-shot Knowledge Tracing [48.877979333221326]
We propose a cognition-guided framework that can track the student knowledge from a few student records while providing natural language explanations.
Experimental results from three widely used datasets show that LLMs can perform comparable or superior to competitive deep knowledge tracing methods.
arXiv Detail & Related papers (2024-05-23T10:07:21Z)
- Interpretable Knowledge Tracing via Response Influence-based Counterfactual Reasoning [10.80973695116047]
Knowledge tracing plays a crucial role in computer-aided education and intelligent tutoring systems.
Current approaches have explored psychological influences to achieve more explainable predictions.
We propose RCKT, a novel response influence-based counterfactual knowledge tracing framework.
arXiv Detail & Related papers (2023-12-01T11:27:08Z)
- Do We Fully Understand Students' Knowledge States? Identifying and Mitigating Answer Bias in Knowledge Tracing [12.31363929361146]
Knowledge tracing aims to monitor students' evolving knowledge states through their learning interactions with concept-related questions.
There is a common phenomenon of answer bias, i.e., a highly unbalanced distribution of correct and incorrect answers for each question.
Existing models tend to memorize the answer bias as a shortcut for achieving high prediction performance in KT.
arXiv Detail & Related papers (2023-08-15T13:56:29Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering [71.15403434929915]
We show that across 5 models and 4 datasets on the task of visual question answering, a wide variety of active learning approaches fail to outperform random selection.
We identify the problem as collective outliers -- groups of examples that active learning methods prefer to acquire but models fail to learn.
We show that active learning sample efficiency increases significantly as the number of collective outliers in the active learning pool decreases.
arXiv Detail & Related papers (2021-07-06T00:52:11Z)
- Option Tracing: Beyond Correctness Analysis in Knowledge Tracing [3.1798318618973362]
We extend existing knowledge tracing methods to predict the exact option students select in multiple choice questions.
We quantitatively evaluate the performance of our option tracing methods on two large-scale student response datasets.
arXiv Detail & Related papers (2021-04-19T04:28:34Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.