Enhancing Knowledge Tracing with Concept Map and Response Disentanglement
- URL: http://arxiv.org/abs/2408.12996v1
- Date: Fri, 23 Aug 2024 11:25:56 GMT
- Title: Enhancing Knowledge Tracing with Concept Map and Response Disentanglement
- Authors: Soonwook Park, Donghoon Lee, Hogun Park
- Abstract summary: We propose the Concept map-driven Response disentanglement method for enhancing Knowledge Tracing (CRKT) model.
CRKT benefits KT by directly leveraging answer choices--beyond merely identifying correct or incorrect answers--to distinguish responses with different incorrect choices.
We further introduce the novel use of unchosen responses by employing disentangled representations to get insights from options not selected by students.
- Score: 5.201585012263761
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the rapidly advancing realm of educational technology, it becomes critical to accurately trace and understand student knowledge states. Conventional Knowledge Tracing (KT) models have mainly focused on binary responses (i.e., correct and incorrect answers) to questions. Unfortunately, they largely overlook the essential information in students' actual answer choices, particularly for Multiple Choice Questions (MCQs), which could help reveal each learner's misconceptions or knowledge gaps. To tackle these challenges, we propose the Concept map-driven Response disentanglement method for enhancing Knowledge Tracing (CRKT) model. CRKT benefits KT by directly leveraging answer choices--beyond merely identifying correct or incorrect answers--to distinguish responses with different incorrect choices. We further introduce the novel use of unchosen responses by employing disentangled representations to get insights from options not selected by students. Additionally, CRKT tracks the student's knowledge state at the concept level and encodes the concept map, representing the relationships between them, to better predict unseen concepts. This approach is expected to provide actionable feedback, improving the learning experience. Our comprehensive experiments across multiple datasets demonstrate CRKT's effectiveness, achieving superior performance in prediction accuracy and interpretability over state-of-the-art models.
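The abstract describes three mechanisms: option-level response representations (so different incorrect MCQ choices are distinguished), a disentangled signal from unchosen options, and concept-level knowledge states propagated over an encoded concept map. The sketch below is a minimal, hypothetical illustration of those ideas in plain NumPy; the embeddings, update rule, and propagation scheme are all assumptions for illustration, not CRKT's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CONCEPTS = 4
NUM_OPTIONS = 4  # MCQ options A-D
DIM = 8

# Hypothetical parameters: one embedding per answer option, so two
# different incorrect choices yield two different response representations.
option_emb = rng.normal(size=(NUM_OPTIONS, DIM))

# Concept map as an adjacency matrix over knowledge concepts
# (edge i -> j means concept i is related to / a prerequisite of concept j).
concept_map = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
], dtype=float)

def propagate(knowledge, adj, alpha=0.5):
    """Spread each concept's mastery to its neighbours in the concept map,
    so an update on one concept also informs related, unseen concepts."""
    return knowledge + alpha * adj.T @ knowledge

def update(knowledge, concept_id, chosen, correct_option, lr=0.3):
    """Update the concept-level knowledge state from one MCQ interaction.

    The chosen option's embedding drives the update; the *unchosen* options
    contribute a small opposite-signed signal, a rough stand-in for CRKT's
    disentangled use of unselected responses."""
    chosen_signal = option_emb[chosen].mean()
    unchosen = [o for o in range(NUM_OPTIONS) if o != chosen]
    unchosen_signal = option_emb[unchosen].mean()
    reward = 1.0 if chosen == correct_option else -1.0
    knowledge = knowledge.copy()
    knowledge[concept_id] += lr * (reward + 0.1 * (chosen_signal - unchosen_signal))
    return propagate(knowledge, concept_map)

state = np.zeros(NUM_CONCEPTS)
state = update(state, concept_id=0, chosen=2, correct_option=2)  # correct answer
state = update(state, concept_id=1, chosen=0, correct_option=3)  # wrong choice A
print(state)
```

Note that choosing wrong option A versus wrong option B produces different states here, which is the key point the abstract makes against purely binary (correct/incorrect) KT models.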
Related papers
- Automated Knowledge Concept Annotation and Question Representation Learning for Knowledge Tracing [59.480951050911436]
We present KCQRL, a framework for automated knowledge concept annotation and question representation learning.
We demonstrate the effectiveness of KCQRL across 15 KT algorithms on two large real-world Math learning datasets.
arXiv Detail & Related papers (2024-10-02T16:37:19Z)
- SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105]
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
The paper proposes a Structure-aware Inductive Knowledge Tracing model with a large language model (dubbed SINKT).
SINKT predicts the student's response to the target question by interacting with the student's knowledge state and the question representation.
arXiv Detail & Related papers (2024-07-01T12:44:52Z)
- Interpretable Knowledge Tracing via Response Influence-based Counterfactual Reasoning [10.80973695116047]
Knowledge tracing plays a crucial role in computer-aided education and intelligent tutoring systems.
Current approaches have explored psychological influences to achieve more explainable predictions.
We propose RCKT, a novel response influence-based counterfactual knowledge tracing framework.
arXiv Detail & Related papers (2023-12-01T11:27:08Z)
- Do We Fully Understand Students' Knowledge States? Identifying and Mitigating Answer Bias in Knowledge Tracing [12.31363929361146]
Knowledge tracing aims to monitor students' evolving knowledge states through their learning interactions with concept-related questions.
There is a common phenomenon of answer bias, i.e., a highly unbalanced distribution of correct and incorrect answers for each question.
Existing models tend to memorize the answer bias as a shortcut for achieving high prediction performance in KT.
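The answer-bias shortcut described above can be made concrete with a tiny example: compute each question's correct rate, then predict the per-question majority outcome with no student model at all. The interaction log and numbers below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical interaction log: (student, question_id, is_correct).
log = [
    ("s1", "q1", 1), ("s2", "q1", 1), ("s3", "q1", 1), ("s4", "q1", 0),
    ("s1", "q2", 0), ("s2", "q2", 0), ("s3", "q2", 1), ("s4", "q2", 0),
]

# Per-question correct rate: the "answer bias" distribution.
counts = defaultdict(lambda: [0, 0])  # question -> [correct, total]
for _, q, y in log:
    counts[q][0] += y
    counts[q][1] += 1
bias = {q: c / n for q, (c, n) in counts.items()}
print(bias)  # {'q1': 0.75, 'q2': 0.25}

# A knowledge-free shortcut: always predict the majority outcome per question.
# Its accuracy shows how much KT performance answer bias alone can explain.
shortcut_hits = sum((1 if bias[q] >= 0.5 else 0) == y for _, q, y in log)
print(shortcut_hits / len(log))  # 0.75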
arXiv Detail & Related papers (2023-08-15T13:56:29Z)
- Distinguish Before Answer: Generating Contrastive Explanation as Knowledge for Commonsense Question Answering [61.53454387743701]
We propose CPACE, a concept-centric Prompt-bAsed Contrastive Explanation Generation model.
CPACE converts obtained symbolic knowledge into a contrastive explanation for better distinguishing the differences among given candidates.
We conduct a series of experiments on three widely-used question-answering datasets: CSQA, QASC, and OBQA.
arXiv Detail & Related papers (2023-05-14T12:12:24Z)
- Quiz-based Knowledge Tracing [61.9152637457605]
Knowledge tracing aims to assess individuals' evolving knowledge states according to their learning interactions.
QKT achieves state-of-the-art performance compared to existing methods.
arXiv Detail & Related papers (2023-04-05T12:48:42Z)
- Differentiating Student Feedbacks for Knowledge Tracing [5.176190855174938]
We propose DR4KT for Knowledge Tracing, which reweights the contribution of different responses according to their discrimination in training.
For retaining high prediction accuracy on low discriminative responses after reweighting, DR4KT also introduces a discrimination-aware score fusion technique.
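One classical way to quantify a response's discrimination, and one plausible reading of the reweighting and fusion steps above, is sketched below. The discrimination index (top-half vs. bottom-half correct-rate gap), the weighting floor, and the fusion formula are all assumptions for illustration; DR4KT's actual scheme is defined in the paper.

```python
import numpy as np

# Hypothetical responses: rows = students, columns = questions (1 = correct).
R = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
])

# Classical item discrimination: correct-rate gap between the top and bottom
# halves of students ranked by total score (one of several standard indices).
order = np.argsort(-R.sum(axis=1))
half = len(order) // 2
top, bottom = order[:half], order[half:]
discrimination = R[top].mean(axis=0) - R[bottom].mean(axis=0)
print(discrimination)  # [1. 0. 1.]

# Reweight per-response losses so discriminative questions contribute more
# (a rough stand-in for DR4KT's reweighting; the floor keeps low-
# discrimination responses from vanishing from training).
losses = np.array([0.4, 0.9, 0.6])     # e.g., per-question BCE losses
weights = 0.5 + 0.5 * discrimination
weighted_loss = (weights * losses).mean()

# Assumed form of discrimination-aware score fusion: lean on the model where
# discrimination is high, on the question's base correct-rate where it is low.
model_pred = np.array([0.8, 0.7, 0.6])
base_rate = R.mean(axis=0)
fused = discrimination * model_pred + (1 - discrimination) * base_rate
print(fused)  # [0.8 0.5 0.6]
```

In this toy data, the middle question has zero discrimination (high and low scorers answer it equally often), so the fused score falls back entirely to its base correct rate.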
arXiv Detail & Related papers (2022-12-16T13:55:07Z)
- Learning Knowledge Representation with Meta Knowledge Distillation for Single Image Super-Resolution [82.89021683451432]
We propose a model-agnostic meta knowledge distillation method under the teacher-student architecture for the single image super-resolution task.
Experiments on various single image super-resolution datasets demonstrate that the proposed method outperforms existing knowledge-representation-based distillation methods.
arXiv Detail & Related papers (2022-07-18T02:41:04Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- DISSECT: Disentangled Simultaneous Explanations via Concept Traversals [33.65478845353047]
DISSECT is a novel approach to explaining deep learning model inferences.
By training a generative model from a classifier's signal, DISSECT offers a way to discover a classifier's inherent "notion" of distinct concepts.
We show that DISSECT produces Concept Traversals (CTs) that disentangle several concepts and are coupled to the classifier's reasoning through joint training.
arXiv Detail & Related papers (2021-05-31T17:11:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.