CIKT: A Collaborative and Iterative Knowledge Tracing Framework with Large Language Models
- URL: http://arxiv.org/abs/2505.17705v1
- Date: Fri, 23 May 2025 10:16:16 GMT
- Title: CIKT: A Collaborative and Iterative Knowledge Tracing Framework with Large Language Models
- Authors: Runze Li, Siyu Wu, Jun Wang, Wei Zhang
- Abstract summary: Knowledge Tracing aims to model a student's learning state over time and predict their future performance. Traditional KT methods often face challenges in explainability, scalability, and effective modeling of complex knowledge dependencies. We propose Collaborative Iterative Knowledge Tracing (CIKT), a framework that harnesses Large Language Models to enhance both prediction accuracy and explainability.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Tracing (KT) aims to model a student's learning state over time and predict their future performance. However, traditional KT methods often face challenges in explainability, scalability, and effective modeling of complex knowledge dependencies. While Large Language Models (LLMs) present new avenues for KT, their direct application often struggles with generating structured, explainable student representations and lacks mechanisms for continuous, task-specific refinement. To address these gaps, we propose Collaborative Iterative Knowledge Tracing (CIKT), a framework that harnesses LLMs to enhance both prediction accuracy and explainability. CIKT employs a dual-component architecture: an Analyst generates dynamic, explainable user profiles from student historical responses, and a Predictor utilizes these profiles to forecast future performance. The core of CIKT is a synergistic optimization loop. In this loop, the Analyst is iteratively refined based on the predictive accuracy of the Predictor, which conditions on the generated profiles, and the Predictor is subsequently retrained using these enhanced profiles. Evaluated on multiple educational datasets, CIKT demonstrates significant improvements in prediction accuracy, offers enhanced explainability through its dynamically updated user profiles, and exhibits improved scalability. Our work presents a robust and explainable solution for advancing knowledge tracing systems, effectively bridging the gap between predictive performance and model transparency.
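A minimal sketch of the collaborative loop described in the abstract, assuming hypothetical `Analyst` and `Predictor` interfaces (the class names, method signatures, and refinement signal are our assumptions, not the paper's implementation):

```python
# Sketch of a CIKT-style Analyst/Predictor loop. All names and the
# refinement heuristic are assumptions based on the abstract, not the
# authors' released implementation.

def collaborative_loop(analyst, predictor, students, rounds=3):
    """Alternately refine the Analyst and retrain the Predictor."""
    for _ in range(rounds):
        # 1. The Analyst turns each response history into an explainable profile.
        profiles = {s["id"]: analyst.summarize(s["history"]) for s in students}

        # 2. The Predictor, conditioned on a profile, forecasts the next response.
        outcomes = []
        for s in students:
            p = profiles[s["id"]]
            hit = predictor.predict(p, s["next_question"]) == s["next_answer"]
            outcomes.append((s, p, hit))

        # 3. Profiles that led to correct forecasts become positive examples
        #    for refining the Analyst (the paper's exact objective may differ).
        analyst.refine([(s["history"], p) for s, p, hit in outcomes if hit])

        # 4. The Predictor is retrained on the freshly generated profiles.
        predictor.retrain([(p, s["next_answer"]) for s, p, _ in outcomes])
```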
Related papers
- TRAIL: Joint Inference and Refinement of Knowledge Graphs with Large Language Models
TRAIL is a novel, unified framework for Thinking, Reasoning, And Incremental Learning. It couples joint inference and dynamic KG refinement with large language models. Extensive experiments on multiple benchmarks demonstrate that TRAIL outperforms existing KG-augmented and retrieval-augmented LLM baselines by 3% to 13%.
arXiv Detail & Related papers (2025-08-06T14:25:05Z)
- Dynamic Programming Techniques for Enhancing Cognitive Representation in Knowledge Tracing
We propose the Cognitive Representation Dynamic Programming based Knowledge Tracing (CRDP-KT) model. It uses a dynamic programming algorithm to optimize cognitive representations based on question difficulty and the performance intervals between questions. This provides more accurate and systematic input features for subsequent model training, thereby minimizing distortion in the simulation of cognitive states.
arXiv Detail & Related papers (2025-06-03T14:44:48Z)
- Can LLMs Assist Expert Elicitation for Probabilistic Causal Modeling?
This study investigates the potential of Large Language Models (LLMs) as an alternative to human expert elicitation for extracting structured causal knowledge. LLM-generated causal structures, specifically Bayesian networks (BNs), were benchmarked against traditional statistical methods. LLM-generated BNs demonstrated lower entropy than expert-elicited and statistically generated BNs, suggesting higher confidence and precision in predictions.
arXiv Detail & Related papers (2025-04-14T16:45:52Z)
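The entropy comparison in the entry above can be made concrete with a small sketch: average Shannon entropy over a network's conditional probability tables (the CPT layout is our assumption; the paper may define the metric differently):

```python
import numpy as np

def table_entropy(cpt):
    """Mean Shannon entropy (bits) of a conditional probability table,
    where each row is a distribution over a node's states."""
    cpt = np.clip(np.asarray(cpt, dtype=float), 1e-12, 1.0)
    return float(np.mean(-np.sum(cpt * np.log2(cpt), axis=-1)))

# Lower average entropy across CPTs means sharper, more confident
# conditional distributions -- the property reported for LLM-generated BNs.
network = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # CPT of one node
           np.array([[0.5, 0.5]])]               # CPT of another node
print(np.mean([table_entropy(t) for t in network]))
```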
- AdvKT: An Adversarial Multi-Step Training Framework for Knowledge Tracing
Knowledge Tracing (KT) monitors students' knowledge states and simulates their responses to question sequences. Existing KT models typically follow a single-step training paradigm, which leads to significant error accumulation. We propose a novel Adversarial Multi-Step Training Framework for Knowledge Tracing (AdvKT), which focuses on the multi-step KT task.
arXiv Detail & Related papers (2025-04-07T03:31:57Z)
- Improving Question Embeddings with Cognitive Representation Optimization for Knowledge Tracing
Knowledge Tracing (KT) aims to track changes in students' knowledge status and predict their future answers based on their historical answer records. Current research on KT modeling focuses on predicting students' future performance from existing, static records of student learning interactions. We propose a Cognitive Representation Optimization for Knowledge Tracing model, which utilizes a dynamic programming algorithm to optimize the structure of cognitive representations.
arXiv Detail & Related papers (2025-04-05T09:32:03Z)
- Self-Explaining Neural Networks for Business Process Monitoring
We introduce, to the best of our knowledge, the first *self-explaining neural network* architecture for predictive process monitoring. Our framework trains an LSTM model that not only provides predictions but also outputs a concise explanation for each prediction. We show that our method outperforms post-hoc approaches in the faithfulness of the generated explanations while delivering substantial improvements in efficiency.
arXiv Detail & Related papers (2025-03-23T13:28:34Z)
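One common way to realize a self-explaining recurrent predictor like the one in the entry above is to let attention weights over past events double as the explanation; a minimal PyTorch sketch under that assumption (the paper's actual architecture may differ):

```python
import torch
import torch.nn as nn

class SelfExplainingLSTM(nn.Module):
    """Sketch: an LSTM whose attention weights over past events double as
    the explanation for each prediction. An assumption-based illustration,
    not the paper's actual architecture."""

    def __init__(self, n_events, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_events, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each past step
        self.head = nn.Linear(hidden, 1)   # final outcome prediction

    def forward(self, events):                                    # (batch, seq)
        h, _ = self.lstm(self.embed(events))                      # (batch, seq, hidden)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # the "explanation"
        context = torch.bmm(weights.unsqueeze(1), h).squeeze(1)   # weighted summary
        return torch.sigmoid(self.head(context)).squeeze(-1), weights

model = SelfExplainingLSTM(n_events=50)
prob, why = model(torch.randint(0, 50, (2, 10)))
# why[i] shows how much each past event contributed to prediction i.
```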
- AAKT: Enhancing Knowledge Tracing with Alternate Autoregressive Modeling
Knowledge Tracing aims to predict students' future performance based on their former exercises and additional information in educational settings. One of the primary challenges in autoregressive modeling for Knowledge Tracing is effectively representing the anterior (pre-response) and posterior (post-response) states of learners across exercises. We propose a novel perspective on the knowledge tracing task by treating it as a generative process, consistent with the principles of autoregressive models.
arXiv Detail & Related papers (2025-02-17T14:09:51Z)
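A small sketch of one natural encoding of the anterior/posterior idea above: interleave question and response tokens so a causal model alternates between consuming a question and predicting its response (the token layout is our assumption, not necessarily AAKT's):

```python
def alternate_sequence(question_ids, correctness, n_questions):
    """Interleave question tokens (anterior state) with response tokens
    (posterior state): q_1, r_1, q_2, r_2, ... A causal model trained on
    this sequence predicts each response from everything before it.
    Token layout (our assumption): [0, n_questions) are question ids;
    n_questions + {0, 1} encode incorrect/correct responses."""
    seq = []
    for q, c in zip(question_ids, correctness):
        seq.append(q)                     # anterior: the question is posed
        seq.append(n_questions + int(c))  # posterior: the observed response
    return seq

print(alternate_sequence([3, 7, 3], [True, False, True], n_questions=100))
# -> [3, 101, 7, 100, 3, 101]
```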
- Deep Sparse Latent Feature Models for Knowledge Graph Completion
In this paper, we introduce a novel framework of sparse latent feature models for knowledge graphs.
Our approach not only effectively completes missing triples but also provides clear interpretability of the latent structures.
Our method significantly improves performance by revealing latent communities and producing interpretable representations.
arXiv Detail & Related papers (2024-11-24T03:17:37Z)
- Predictive, scalable and interpretable knowledge tracing on structured domains
PSI-KT is a hierarchical generative approach that explicitly models how both individual cognitive traits and the prerequisite structure of knowledge influence learning dynamics.
PSI-KT targets the real-world need for efficient personalization even with a growing body of learners and learning histories.
arXiv Detail & Related papers (2024-03-19T22:19:29Z)
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
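As a rough illustration of token-to-AST alignment, Python's built-in `ast` module can map a character offset to the innermost node that spans it (ASTxplainer targets LLM tokenizations and likely uses different parsing machinery):

```python
import ast

def node_at_offset(source, offset):
    """Return the innermost AST node whose source span contains `offset`.
    A toy stand-in for aligning model tokens with AST nodes."""
    starts = [0]
    for line in source.splitlines(keepends=True):
        starts.append(starts[-1] + len(line))   # char offset of each line start

    def span(node):
        if getattr(node, "end_lineno", None) is None:
            return None
        lo = starts[node.lineno - 1] + node.col_offset
        hi = starts[node.end_lineno - 1] + node.end_col_offset
        return lo, hi

    best = None
    for node in ast.walk(ast.parse(source)):
        s = span(node)
        if s and s[0] <= offset < s[1]:
            if best is None or s[1] - s[0] < best[1][1] - best[1][0]:
                best = (node, s)                 # keep the tightest span
    return best[0] if best else None

src = "total = price * quantity\n"
print(type(node_at_offset(src, 8)).__name__)     # -> Name (the 'price' token)
```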
- Continual Learners are Incremental Model Generalizers
This paper extensively studies the impact of Continual Learning (CL) models as pre-trainers.
We find that the transfer quality of the representation often increases gradually without noticeable degradation in fine-tuning performance.
We propose a new fine-tuning scheme, GLobal Attention Discretization (GLAD), that preserves rich task-generic representations while solving downstream tasks.
arXiv Detail & Related papers (2023-06-21T05:26:28Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weakly-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
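A minimal sketch of the retrieve-then-prompt pattern behind RAP, using token-overlap similarity over annotated examples (the similarity function and prompt template are our simplifications, not the paper's):

```python
def retrieve_as_prompt(sample, annotated):
    """Pick the annotated (text, triples) pair most similar to `sample`
    and prepend it as a reference prompt. Toy lexical retrieval; real
    systems use learned retrievers and schema constraints."""
    def overlap(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(1, len(wa | wb))   # Jaccard similarity

    ref_text, ref_triples = max(annotated, key=lambda pair: overlap(sample, pair[0]))
    return (f"Reference input: {ref_text}\n"
            f"Reference triples: {ref_triples}\n"
            f"Input: {sample}\nTriples:")

examples = [("Marie Curie won the Nobel Prize.",
             "(Marie Curie, award, Nobel Prize)")]
print(retrieve_as_prompt("Einstein won the Nobel Prize in 1921.", examples))
```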
- Towards Interpretable Deep Learning Models for Knowledge Tracing
We propose to adopt a post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model.
Experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
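The building block of such an LRP analysis is the epsilon rule for a single linear layer, which propagates output relevance back to inputs in proportion to their contributions; a minimal NumPy sketch (chaining it through LSTM gates requires the additional rules the paper discusses):

```python
import numpy as np

def lrp_epsilon(x, W, b, relevance_out, eps=1e-6):
    """LRP epsilon rule for a linear layer z = W @ x + b: redistribute
    each output's relevance to the inputs that produced it."""
    z = W @ x + b                                      # forward pass
    stabilizer = eps * np.where(z >= 0, 1.0, -1.0)     # keep denominators nonzero
    s = relevance_out / (z + stabilizer)
    return x * (W.T @ s)                               # relevance per input

x = np.array([1.0, 2.0])
W = np.array([[0.5, -1.0], [1.5, 0.25]])
b = np.zeros(2)
print(lrp_epsilon(x, W, b, relevance_out=np.ones(2)))  # sums to the total relevance
```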