Improving Question Embeddings with Cognitive Representation Optimization for Knowledge Tracing
- URL: http://arxiv.org/abs/2504.04121v2
- Date: Fri, 25 Jul 2025 05:26:24 GMT
- Title: Improving Question Embeddings with Cognitive Representation Optimization for Knowledge Tracing
- Authors: Lixiang Xu, Xianwei Ding, Xin Yuan, Zhanlong Wang, Lu Bai, Enhong Chen, Philip S. Yu, Yuanyan Tang
- Abstract summary: Current research on KT modeling focuses on predicting future student performance from existing, unupdated records of student learning interactions. We propose a Cognitive Representation Optimization for Knowledge Tracing (CRO-KT) model that uses a dynamic programming algorithm to optimize the structure of cognitive representations.
- Score: 77.14348157016518
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge tracing (KT) is designed to track changes in students' knowledge states and to predict their future answers from their historical answer records. Current research on KT modeling focuses on predicting future student performance from existing, unupdated records of student learning interactions. However, these methods ignore distractions in the response process (such as slipping and guessing) and overlook the fact that static cognitive representations are temporary and limited. Most of them assume that there are no distractions during answering and that the recorded representations fully reflect students' understanding of and proficiency in the knowledge, which leads to many dissonant and uncoordinated issues in the original records. We therefore propose a Cognitive Representation Optimization for Knowledge Tracing (CRO-KT) model that uses a dynamic programming algorithm to optimize the structure of the cognitive representations, ensuring that the structure matches students' cognitive patterns in terms of exercise difficulty. In addition, we use a synergistic optimization algorithm to optimize the cognitive representations of sub-target exercises from the overall picture of exercise responses, treating all exercises with synergistic relationships as a single objective. The CRO-KT model also combines, in a weighted manner, the relation embeddings learned from the bipartite graph with the optimized record representations, which strengthens the expression of students' cognition. Finally, experiments on three public datasets verify the effectiveness of the proposed cognitive representation optimization model.
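As a rough illustration of the weighted combination described above, the sketch below fuses relation embeddings learned on an exercise-concept bipartite graph with the optimized record representations through a learnable gate. The module name, dimensions, and sigmoid gate are assumptions made for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of the weighted fusion step; names and the sigmoid gate
# are illustrative assumptions, not taken from the CRO-KT paper.
import torch
import torch.nn as nn

class WeightedCognitiveFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # per-dimension gate deciding how much to trust graph structure
        # versus the optimized response record
        self.gate = nn.Parameter(torch.zeros(dim))

    def forward(self, relation_emb, optimized_record_emb):
        # both inputs: (batch, seq_len, dim)
        alpha = torch.sigmoid(self.gate)  # weights in (0, 1)
        return alpha * relation_emb + (1.0 - alpha) * optimized_record_emb

fusion = WeightedCognitiveFusion(dim=64)
q_rel = torch.randn(8, 20, 64)   # embeddings from the exercise-concept bipartite graph
q_opt = torch.randn(8, 20, 64)   # cognitive representations after DP-based optimization
fused = fusion(q_rel, q_opt)     # enhanced question embeddings for the KT backbone
```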
Related papers
- Efficient Machine Unlearning via Influence Approximation [75.31015485113993]
Influence-based unlearning has emerged as a prominent approach to estimating the impact of individual training samples on model parameters without retraining. This paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning), and introduces the Influence Approximation Unlearning algorithm for efficient machine unlearning from the incremental perspective.
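For context, the sketch below shows the classic influence-function approximation for removing one sample from an L2-regularised logistic regression without retraining. It only illustrates the general idea of influence-based unlearning; it is not the Influence Approximation Unlearning algorithm proposed in that paper.

```python
# Generic influence-function unlearning sketch (not the paper's algorithm):
# approximate the parameters retrained without sample `idx` by a single
# Newton-style correction around the trained solution.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unlearn_one(theta, X, y, idx, lam=1e-2):
    n, d = X.shape
    p = sigmoid(X @ theta)
    # Hessian of the average regularised logistic loss at theta
    H = (X.T * (p * (1.0 - p))) @ X / n + lam * np.eye(d)
    # gradient of the removed sample's loss
    g = (p[idx] - y[idx]) * X[idx]
    # theta_without_idx ~= theta + (1/n) * H^{-1} * g
    return theta + np.linalg.solve(H, g) / n

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

# quick gradient-descent fit so theta is an (approximate) optimum
theta = np.zeros(5)
for _ in range(500):
    p = sigmoid(X @ theta)
    theta -= 0.5 * (X.T @ (p - y) / len(y) + 1e-2 * theta)

theta_without_17 = unlearn_one(theta, X, y, idx=17)
```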
arXiv Detail & Related papers (2025-07-31T05:34:27Z)
- Dynamic Programming Techniques for Enhancing Cognitive Representation in Knowledge Tracing [125.75923987618977]
We propose the Cognitive Representation Dynamic Programming based Knowledge Tracing (CRDP-KT) model. It applies a dynamic programming algorithm to optimize cognitive representations based on the difficulty of the questions and the performance intervals between them, providing more accurate and systematic input features for subsequent model training and thereby minimizing distortion in the simulation of cognitive states.
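As a purely illustrative toy example of making a record consistent with question difficulty via dynamic programming, the sketch below keeps the longest sub-sequence of interactions whose signed-difficulty signal is non-decreasing and treats the rest as candidate slips or guesses. The scoring rule is an assumption for illustration and is not the CRDP-KT algorithm.

```python
# Purely illustrative DP (not the CRDP-KT algorithm): keep the longest
# sub-sequence of interactions whose signed-difficulty signal is
# non-decreasing, and flag everything outside it as a candidate slip/guess.
def difficulty_consistent_core(difficulty, correct):
    # correct answers contribute +difficulty, wrong answers -difficulty
    signal = [d if c else -d for d, c in zip(difficulty, correct)]
    n = len(signal)
    best = [1] * n            # best[i]: longest consistent run ending at i
    prev = [-1] * n
    for i in range(n):
        for j in range(i):
            if signal[j] <= signal[i] and best[j] + 1 > best[i]:
                best[i], prev[i] = best[j] + 1, j
    i = max(range(n), key=best.__getitem__)   # endpoint of the best run
    keep = []
    while i != -1:
        keep.append(i)
        i = prev[i]
    return sorted(keep)       # indices of interactions kept as "reliable"

# difficulties in [0, 1]; 1 = answered correctly, 0 = answered incorrectly
print(difficulty_consistent_core([0.2, 0.8, 0.4, 0.9], [1, 0, 1, 1]))  # [0, 2, 3]
```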
arXiv Detail & Related papers (2025-06-03T14:44:48Z)
- AAKT: Enhancing Knowledge Tracing with Alternate Autoregressive Modeling [23.247238358162157]
Knowledge Tracing aims to predict students' future performance based on their past exercises and additional information in educational settings. One of the primary challenges in autoregressive modeling for Knowledge Tracing is effectively representing the anterior (pre-response) and posterior (post-response) states of learners across exercises. We propose a novel perspective on the knowledge tracing task, treating it as a generative process consistent with the principles of autoregressive models.
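A minimal sketch of one way such an alternating sequence could be built is shown below: question tokens carry the anterior (pre-response) state and response tokens the posterior (post-response) state, so a causal language model predicts each response from everything before it. The token-id layout is an assumption for illustration, not AAKT's exact scheme.

```python
# Hedged sketch: interleave question and response tokens for autoregressive KT.
# The token-id layout (responses offset by num_questions) is an illustrative choice.
def build_alternating_sequence(question_ids, responses, num_questions):
    seq = []
    for q, r in zip(question_ids, responses):
        seq.append(q)                                # anterior: which question is posed
        seq.append(num_questions + 2 * q + int(r))   # posterior: how it was answered
    return seq

# questions 12, 7, 12 answered correct, wrong, correct
history = build_alternating_sequence([12, 7, 12], [1, 0, 1], num_questions=100)
# -> [12, 125, 7, 114, 12, 125]; a causal transformer would be trained to
#    predict the response tokens (odd positions) given each prefix.
```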
arXiv Detail & Related papers (2025-02-17T14:09:51Z)
- DASKT: A Dynamic Affect Simulation Method for Knowledge Tracing [51.665582274736785]
Knowledge Tracing (KT) predicts future performance from students' historical interactions, and understanding students' affective states can enhance the effectiveness of KT. We propose Affect Dynamic Knowledge Tracing (DASKT) to explore the impact of various student affective states on their knowledge states. Our research highlights a promising avenue for future studies focused on achieving both high interpretability and accuracy.
arXiv Detail & Related papers (2025-01-18T10:02:10Z)
- Temporal Graph Memory Networks For Knowledge Tracing [0.40964539027092906]
We propose a novel method that jointly models the relational and temporal dynamics of the knowledge state using a deep temporal graph memory network.
We also propose a generic technique for representing a student's forgetting behavior using temporal decay constraints on the graph memory module.
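A minimal sketch of a temporal-decay constraint on a memory read is given below; each slot's contribution fades exponentially with the time since it was last reinforced. The exponential form and shared decay rate are assumptions for illustration, not the paper's exact formulation.

```python
# Generic forgetting sketch: scale each memory slot by exp(-rate * elapsed time).
import numpy as np

def decayed_read(memory, last_update_t, now_t, decay_rate=0.05):
    # memory: (num_slots, dim); last_update_t: (num_slots,) timestamps
    elapsed = np.maximum(now_t - last_update_t, 0.0)
    weights = np.exp(-decay_rate * elapsed)          # older memories fade
    return (weights[:, None] * memory).sum(axis=0) / (weights.sum() + 1e-8)

memory = np.random.randn(4, 16)            # 4 concept slots
last_t = np.array([0.0, 5.0, 9.0, 9.5])    # when each slot was last reinforced
state = decayed_read(memory, last_t, now_t=10.0)   # decay-weighted knowledge state
```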
arXiv Detail & Related papers (2024-09-23T07:47:02Z)
- Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation [84.45144851024257]
We propose a novel framework that aims to enhance graph contrastive learning by constructing contrastive views with stronger collaborative information via discrete codes. The core idea is to map users and items into discrete codes rich in collaborative information for reliable and informative contrastive view generation.
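The sketch below illustrates the general idea of mapping item embeddings to discrete codes with a nearest-centroid codebook; items sharing a code can then serve as positives when building contrastive views. The quantisation scheme is an assumption for illustration and does not reproduce the paper's code-learning procedure.

```python
# Hedged sketch: nearest-centroid vector quantisation as a stand-in for
# learning discrete, collaboration-aware codes.
import numpy as np

def assign_codes(embeddings, codebook):
    # embeddings: (n, d), codebook: (k, d) -> one code index per embedding
    d2 = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
item_emb = rng.normal(size=(1000, 32))   # e.g. embeddings from a collaborative encoder
codebook = rng.normal(size=(64, 32))     # 64 discrete codes (would normally be learned)
codes = assign_codes(item_emb, codebook)
# items sharing a code can be treated as positives when building contrastive views
```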
arXiv Detail & Related papers (2024-09-09T14:04:17Z)
- Enhancing Knowledge Tracing with Concept Map and Response Disentanglement [5.201585012263761]
We propose the Concept map-driven Response disentanglement for enhancing Knowledge Tracing (CRKT) model.
CRKT benefits KT by directly leveraging answer choices--beyond merely identifying correct or incorrect answers--to distinguish responses with different incorrect choices.
We further introduce the novel use of unchosen responses by employing disentangled representations to get insights from options not selected by students.
arXiv Detail & Related papers (2024-08-23T11:25:56Z)
- Differentiating Student Feedbacks for Knowledge Tracing [28.669001606806525]
We propose a framework to reweight the contribution of different responses based on their discrimination in training. We also introduce an adaptive predictive score fusion technique to maintain accuracy on less discriminative responses.
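As a rough illustration of reweighting responses by their discrimination, the sketch below scales the per-response binary cross-entropy by a softmax-normalised discrimination score. The weighting function and temperature are assumptions for illustration, not the paper's exact scheme, and the adaptive score fusion step is omitted.

```python
# Hedged sketch: discrimination-weighted BCE over a batch of response sequences.
import torch
import torch.nn.functional as F

def reweighted_bce(pred, target, discrimination, temperature=1.0):
    # pred, target, discrimination: (batch, seq_len); larger discrimination -> larger weight
    weights = torch.softmax(discrimination / temperature, dim=-1) * discrimination.size(-1)
    loss = F.binary_cross_entropy(pred, target, reduction="none")
    return (weights * loss).mean()

pred = torch.rand(4, 10)                         # predicted probability of a correct answer
target = torch.randint(0, 2, (4, 10)).float()    # observed correctness
disc = torch.rand(4, 10)                         # per-response discrimination scores
loss = reweighted_bce(pred, target, disc)
```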
arXiv Detail & Related papers (2022-12-16T13:55:07Z)
- Understanding Self-Predictive Learning for Reinforcement Learning [61.62067048348786]
We study the learning dynamics of self-predictive learning for reinforcement learning.
We propose a novel self-predictive algorithm that learns two representations simultaneously.
arXiv Detail & Related papers (2022-12-06T20:43:37Z)
- DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph [59.583555454424]
We propose a novel disenTangled representation learning framework for discrete-time Dynamic graphs, namely DyTed.
We specially design a temporal-clips contrastive learning task together with a structure contrastive learning to effectively identify the time-invariant and time-varying representations respectively.
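For reference, a generic InfoNCE-style contrastive loss is sketched below; a temporal-clips contrastive task could, for example, treat two clips of the same node's history as a positive pair. This is only a standard contrastive objective, not DyTed's specific losses.

```python
# Standard InfoNCE sketch: i-th rows of anchor/positive form a positive pair,
# all other rows in the batch act as negatives.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    # anchor, positive: (batch, dim)
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                     # (batch, batch) similarities
    labels = torch.arange(a.size(0), device=a.device)    # diagonal entries are positives
    return F.cross_entropy(logits, labels)

clip_a = torch.randn(32, 128)   # e.g. representation of one temporal clip per node
clip_b = torch.randn(32, 128)   # another clip of the same node's history
loss = info_nce(clip_a, clip_b)
```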
arXiv Detail & Related papers (2022-10-19T14:34:12Z)
- Self-Regulated Learning for Egocentric Video Activity Anticipation [147.9783215348252]
Self-Regulated Learning (SRL) aims to regulate the intermediate representation consecutively to produce a representation that emphasizes the novel information in the frame at the current time stamp.
SRL sharply outperforms the existing state of the art in most cases on two egocentric video datasets and two third-person video datasets.
arXiv Detail & Related papers (2021-11-23T03:29:18Z)
- An Empirical Comparison of Deep Learning Models for Knowledge Tracing on Large-Scale Dataset [10.329254031835953]
Knowledge tracing is the problem of modeling each student's mastery of knowledge concepts.
The recent release of the large-scale EdNet student performance dataset [choi2019ednet] motivates an analysis of the performance of deep learning approaches.
arXiv Detail & Related papers (2021-01-16T04:58:17Z)
- Memory-augmented Dense Predictive Coding for Video Representation Learning [103.69904379356413]
We propose a new architecture and learning framework, Memory-augmented Dense Predictive Coding (MemDPC), for the task.
We investigate visual-only self-supervised video representation learning from RGB frames, or from unsupervised optical flow, or both.
In all cases, we demonstrate state-of-the-art or comparable performance over other approaches with orders of magnitude fewer training data.
arXiv Detail & Related papers (2020-08-03T17:57:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.