Personalized Student Knowledge Modeling for Future Learning Resource Prediction
- URL: http://arxiv.org/abs/2505.14072v1
- Date: Tue, 20 May 2025 08:23:50 GMT
- Title: Personalized Student Knowledge Modeling for Future Learning Resource Prediction
- Authors: Soroush Hashemifar, Sherry Sahebi
- Abstract summary: We propose Knowledge Modeling and Material Prediction (KMaP) for personalized and simultaneous modeling of student knowledge and behavior. KMaP employs clustering-based student profiling to create personalized student representations, improving predictions of future learning resource preferences. Experiments on two real-world datasets confirm significant behavioral differences across student clusters.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite advances in deep learning for education, student knowledge tracing and behavior modeling face persistent challenges: limited personalization, inadequate modeling of diverse learning activities (especially non-assessed materials), and overlooking the interplay between knowledge acquisition and behavioral patterns. Practical limitations, such as fixed-size sequence segmentation, frequently lead to the loss of contextual information vital for personalized learning. Moreover, reliance on student performance on assessed materials limits the modeling scope, excluding non-assessed interactions like lectures. To overcome these shortcomings, we propose Knowledge Modeling and Material Prediction (KMaP), a stateful multi-task approach designed for personalized and simultaneous modeling of student knowledge and behavior. KMaP employs clustering-based student profiling to create personalized student representations, improving predictions of future learning resource preferences. Extensive experiments on two real-world datasets confirm significant behavioral differences across student clusters and validate the efficacy of the KMaP model.
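The abstract describes the clustering-based profiling step only at a high level. Below is a minimal sketch of one way such profiling could work, assuming k-means over per-student aggregate interaction features; the feature choices, cluster count, and profile construction are illustrative assumptions, not the KMaP implementation.

```python
# Hypothetical sketch of clustering-based student profiling (not the authors' code).
import numpy as np
from sklearn.cluster import KMeans

def build_student_profiles(interaction_features: np.ndarray, n_clusters: int = 4):
    """interaction_features: (n_students, n_features) summary of each student's
    history, e.g. counts of lecture views, quiz attempts, and mean quiz scores."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    cluster_ids = km.fit_predict(interaction_features)
    # One simple personalized representation: the raw behavioral features
    # concatenated with the centroid of the student's cluster.
    profiles = np.concatenate(
        [interaction_features, km.cluster_centers_[cluster_ids]], axis=1
    )
    return cluster_ids, profiles

# Example: 100 students described by 6 aggregate behavioral features.
rng = np.random.default_rng(0)
cluster_ids, profiles = build_student_profiles(rng.random((100, 6)))
```

In a KMaP-style pipeline, such profile vectors would condition the downstream knowledge-tracing and resource-preference predictions that the multi-task formulation targets.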
Related papers
- Dynamic Programming Techniques for Enhancing Cognitive Representation in Knowledge Tracing [125.75923987618977]
We propose the Cognitive Representation Dynamic Programming based Knowledge Tracing (CRDP-KT) model.
It uses a dynamic programming algorithm to optimize cognitive representations based on the difficulty of the questions and the performance intervals between them.
This provides more accurate and systematic input features for subsequent model training, thereby minimizing distortion in the simulation of cognitive states.
arXiv Detail & Related papers (2025-06-03T14:44:48Z) - Improving Question Embeddings with Cognitive Representation Optimization for Knowledge Tracing [77.14348157016518]
Knowledge Tracing (KT) aims to track changes in students' knowledge states and predict their future answers based on their historical answer records.
Current research on KT modeling focuses on predicting students' future performance based on existing, unupdated records of student learning interactions.
We propose a Cognitive Representation Optimization for Knowledge Tracing model, which utilizes a dynamic programming algorithm to optimize the structure of cognitive representations.
arXiv Detail & Related papers (2025-04-05T09:32:03Z) - Knowledge Graph Enhanced Generative Multi-modal Models for Class-Incremental Learning [51.0864247376786]
We introduce a Knowledge Graph Enhanced Generative Multi-modal model (KG-GMM) that builds an evolving knowledge graph throughout the learning process.
During testing, we propose a Knowledge Graph Augmented Inference method that locates specific categories by analyzing relationships within the generated text.
arXiv Detail & Related papers (2025-03-24T07:20:43Z) - DASKT: A Dynamic Affect Simulation Method for Knowledge Tracing [51.665582274736785]
Knowledge Tracing (KT) predicts future performance from students' historical learning records, and understanding students' affective states can enhance the effectiveness of KT.
We propose Affect Dynamic Knowledge Tracing (DASKT) to explore the impact of various student affective states on their knowledge states.
Our research highlights a promising avenue for future studies, focusing on achieving high interpretability and accuracy.
arXiv Detail & Related papers (2025-01-18T10:02:10Z) - Student Data Paradox and Curious Case of Single Student-Tutor Model: Regressive Side Effects of Training LLMs for Personalized Learning [25.90420385230675]
The pursuit of personalized education has led to the integration of Large Language Models (LLMs) in developing intelligent tutoring systems.
Our research uncovers a fundamental challenge in this approach: the "Student Data Paradox".
This paradox emerges when LLMs, trained on student data to understand learner behavior, inadvertently compromise their own factual knowledge and reasoning abilities.
arXiv Detail & Related papers (2024-04-23T15:57:55Z) - Augmenting Interpretable Knowledge Tracing by Ability Attribute and Attention Mechanism [0.0]
Knowledge tracing aims to model students' past answer sequences to track the change in their knowledge acquisition during exercise activities.
Most existing approaches ignore the fact that students' abilities are constantly changing or vary between individuals.
We propose a novel model based on ability attributes and an attention mechanism.
arXiv Detail & Related papers (2023-02-04T11:19:55Z) - Transition-Aware Multi-Activity Knowledge Tracing [2.9778695679660188]
Knowledge tracing aims to model student knowledge state given the student's sequence of learning activities.
Current KT solutions are not fit for modeling student learning from non-assessed learning activities.
We propose Transition-Aware Multi-activity Knowledge Tracing (TAMKOT).
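As a rough illustration of what "transition-aware, multi-activity" modeling can mean, the toy sketch below keeps one recurrent cell per activity type (e.g. assessed problem vs. non-assessed lecture) and selects the transition for each step by that step's type. This is a hypothetical simplification for intuition only, not the TAMKOT architecture; all names and dimensions are assumptions.

```python
# Toy "type-aware" knowledge tracing: the hidden knowledge state is updated by
# a recurrent cell chosen per step from the activity type. Not the TAMKOT model.
import torch
import torch.nn as nn

class TypeAwareKT(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int, n_types: int = 2):
        super().__init__()
        # One GRU cell per activity type stands in for type-specific transitions.
        self.cells = nn.ModuleList([nn.GRUCell(input_dim, hidden_dim) for _ in range(n_types)])
        self.predict = nn.Linear(hidden_dim, 1)  # next-answer correctness logit

    def forward(self, x_seq, type_seq):
        # x_seq: (seq_len, batch, input_dim); type_seq: (seq_len, batch) int64
        batch = x_seq.size(1)
        h = x_seq.new_zeros(batch, self.cells[0].hidden_size)
        preds = []
        for x_t, ty_t in zip(x_seq, type_seq):
            candidates = torch.stack([cell(x_t, h) for cell in self.cells])
            h = candidates[ty_t, torch.arange(batch)]  # pick each student's cell output
            preds.append(torch.sigmoid(self.predict(h)))
        return torch.stack(preds)  # (seq_len, batch, 1)

model = TypeAwareKT(input_dim=8, hidden_dim=16)
x = torch.randn(5, 3, 8)                # 5 steps, 3 students
types = torch.randint(0, 2, (5, 3))     # 0 = assessed, 1 = non-assessed
p_correct = model(x, types)
```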
arXiv Detail & Related papers (2023-01-26T21:49:24Z) - Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z) - Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z) - Predicting student performance using sequence classification with time-based windows [1.5836913530330787]
We show that accurate predictive models can be built based on sequential patterns derived from students' behavioral data.
We present a methodology for capturing temporal aspects in behavioral data and analyze its influence on the predictive performance of the models.
Our improved sequence classification technique predicts student performance with high accuracy, reaching 90 percent for course-specific models.
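A minimal sketch of the time-based-window idea discussed in this entry, under illustrative assumptions (daily windows, event-type counts as features, logistic regression as the classifier); the paper's actual feature engineering and models are not reproduced here.

```python
# Sketch: cut each student's event log into fixed-length time windows, summarize
# each window by event-type counts, and classify the flattened window sequence.
import numpy as np
from sklearn.linear_model import LogisticRegression

def windowed_features(timestamps, event_types, n_event_types, window_hours=24, n_windows=14):
    """timestamps in seconds since course start; returns one count vector per student."""
    feats = np.zeros((n_windows, n_event_types))
    for t, e in zip(timestamps, event_types):
        w = int(t // (window_hours * 3600))
        if w < n_windows:
            feats[w, e] += 1
    return feats.ravel()

# Toy example: two students (timestamps, event-type ids) with pass/fail labels.
students = [([3600, 90000, 200000], [0, 1, 0]),
            ([5000, 7000], [1, 1])]
X = np.stack([windowed_features(t, e, n_event_types=2) for t, e in students])
y = np.array([1, 0])
clf = LogisticRegression().fit(X, y)
```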
arXiv Detail & Related papers (2022-08-16T13:46:39Z) - Stimuli-Sensitive Hawkes Processes for Personalized Student Procrastination Modeling [1.6822770693792826]
Student procrastination and cramming for deadlines are major challenges in online learning environments.
Previous attempts at dynamic modeling of student procrastination suffer from major issues.
We introduce a new personalized stimuli-sensitive Hawkes process model (SSHP) to predict students' next activity times.
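For readers unfamiliar with Hawkes processes, the snippet below computes a generic exponential-kernel intensity, the building block such models share; SSHP's stimuli-sensitive, per-student parameterization is not reproduced, and the parameter values are purely illustrative.

```python
# Generic Hawkes intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
# The intensity jumps after each logged activity and decays back toward the base rate mu.
import numpy as np

def hawkes_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
    past = np.asarray([ti for ti in event_times if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

print(hawkes_intensity(2.5, event_times=[0.5, 1.0, 2.0]))
```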
arXiv Detail & Related papers (2021-01-29T22:07:07Z) - Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)