The Imperfect Learner: Incorporating Developmental Trajectories in Memory-based Student Simulation
- URL: http://arxiv.org/abs/2511.05903v1
- Date: Sat, 08 Nov 2025 08:05:43 GMT
- Title: The Imperfect Learner: Incorporating Developmental Trajectories in Memory-based Student Simulation
- Authors: Zhengyuan Liu, Stella Xin Yin, Bryan Chen Zhengyu Tan, Roy Ka-Wei Lee, Guimei Liu, Dion Hoe-Lian Goh, Wenya Wang, Nancy F. Chen
- Abstract summary: This paper introduces a novel framework for memory-based student simulation.
It incorporates developmental trajectories through a hierarchical memory mechanism with structured knowledge representation.
In practice, we implement a curriculum-aligned simulator grounded on the Next Generation Science Standards.
- Score: 55.722188569369656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User simulation is important for developing and evaluating human-centered AI, yet current student simulation in educational applications has significant limitations. Existing approaches focus on single learning experiences and do not account for students' gradual knowledge construction and evolving skill sets. Moreover, large language models are optimized to produce direct and accurate responses, which makes it challenging to represent the incomplete understanding and developmental constraints that characterize real learners. In this paper, we introduce a novel framework for memory-based student simulation that incorporates developmental trajectories through a hierarchical memory mechanism with structured knowledge representation. The framework also integrates metacognitive processes and personality traits to enrich individual learner profiling, through dynamic consolidation of both cognitive development and personal learning characteristics. In practice, we implement a curriculum-aligned simulator grounded in the Next Generation Science Standards. Experimental results show that our approach effectively reflects the gradual nature of knowledge development and the characteristic difficulties students face, providing a more accurate representation of learning processes.
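The abstract does not detail how the hierarchical memory mechanism works. The minimal Python sketch below illustrates one plausible reading: a working layer for the current session, a long-term layer filled by a consolidation step, and mastery-gated answering that reproduces an "imperfect learner" whose correctness depends on gradually accumulated knowledge. All names here (`StudentMemory`, `consolidate`, `can_answer`) and the diminishing-returns update rule are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class KnowledgeUnit:
    """One curriculum concept with a bounded mastery level."""
    concept: str
    mastery: float = 0.0  # 0.0 = unfamiliar, 1.0 = fully mastered

    def reinforce(self, gain: float) -> None:
        # Diminishing returns: learning slows as mastery approaches 1.0
        self.mastery = min(1.0, self.mastery + gain * (1.0 - self.mastery))


@dataclass
class StudentMemory:
    """Two-level memory: a working layer for the current session and a
    long-term layer that accumulates consolidated knowledge."""
    working: dict[str, KnowledgeUnit] = field(default_factory=dict)
    long_term: dict[str, KnowledgeUnit] = field(default_factory=dict)

    def observe(self, concept: str, gain: float) -> None:
        # A learning event strengthens the concept in working memory
        unit = self.working.setdefault(concept, KnowledgeUnit(concept))
        unit.reinforce(gain)

    def consolidate(self, threshold: float = 0.3) -> None:
        # Only sufficiently practiced concepts move into long-term memory
        for concept, unit in list(self.working.items()):
            if unit.mastery >= threshold:
                stored = self.long_term.setdefault(concept, KnowledgeUnit(concept))
                stored.reinforce(unit.mastery)
                del self.working[concept]

    def can_answer(self, concept: str, difficulty: float) -> bool:
        # The simulated student answers correctly only when consolidated
        # mastery exceeds the question's difficulty
        unit = self.long_term.get(concept)
        return unit is not None and unit.mastery >= difficulty


memory = StudentMemory()
for _ in range(3):
    memory.observe("photosynthesis", 0.4)
memory.consolidate()
print(memory.can_answer("photosynthesis", 0.5))  # True after three exposures
```

Because `reinforce` has diminishing returns and `can_answer` is gated by difficulty, the sketch naturally produces the characteristic failure pattern the paper targets: early or hard questions are answered incorrectly even after some exposure.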
Related papers
- Simulating Students with Large Language Models: A Review of Architecture, Mechanisms, and Role Modelling in Education with Generative AI [0.8703455323398351]
Review of studies using large language models (LLMs) to simulate student behaviour across educational environments.
We review current evidence on the capacity of LLM-based agents to emulate learner archetypes, respond to instructional inputs, and interact within multi-agent classroom scenarios.
We examine the implications of such systems for curriculum development, instructional evaluation, and teacher training.
arXiv Detail & Related papers (2025-11-08T17:23:13Z) - Evolution in Simulation: AI-Agent School with Dual Memory for High-Fidelity Educational Dynamics [10.185612854120627]
Large language model (LLM)-based agents are increasingly pivotal in simulating and understanding complex human systems and interactions.
We propose the AI-Agent School (AAS) system, built around a self-evolving mechanism that leverages agents to simulate complex educational dynamics.
arXiv Detail & Related papers (2025-10-13T11:27:53Z) - Cognitive Structure Generation: From Educational Priors to Policy Optimization [10.932994688742475]
This paper introduces a novel framework, Cognitive Structure Generation (CSG), to generate students' cognitive structures.
Experimental results on four popular real-world education datasets show that cognitive structures generated by CSG offer more comprehensive and effective representations for student modeling.
arXiv Detail & Related papers (2025-08-18T06:21:36Z) - Unveiling the Learning Mind of Language Models: A Cognitive Framework and Empirical Study [45.82081693725339]
Large language models (LLMs) have shown impressive capabilities across tasks such as mathematics, coding, and reasoning.
Yet their learning ability, which is crucial for adapting to dynamic environments and acquiring new knowledge, remains underexplored.
arXiv Detail & Related papers (2025-06-16T13:24:50Z) - Dynamic Programming Techniques for Enhancing Cognitive Representation in Knowledge Tracing [125.75923987618977]
We propose the Cognitive Representation Dynamic Programming based Knowledge Tracing (CRDP-KT) model.
It applies a dynamic programming algorithm to optimize cognitive representations based on question difficulty and the performance intervals between questions.
This provides more accurate and systematic input features for subsequent model training, thereby minimizing distortion in the simulation of cognitive states.
arXiv Detail & Related papers (2025-06-03T14:44:48Z) - Improving Question Embeddings with Cognitive Representation Optimization for Knowledge Tracing [77.14348157016518]
Research on KT modeling focuses on predicting future student performance from existing, non-updated records of student learning interactions.
We propose a cognitive representation optimization model for knowledge tracing (CRO-KT) that uses dynamic programming algorithms to optimize the structure of cognitive representations.
arXiv Detail & Related papers (2025-04-05T09:32:03Z) - Cognitive AI framework: advances in the simulation of human thought [0.0]
The Human Cognitive Simulation Framework represents a significant advancement in integrating human cognitive capabilities into artificial intelligence systems.
By merging short-term memory (conversation context), long-term memory (interaction context), advanced cognitive processing, and efficient knowledge management, it ensures contextual coherence and persistent data storage.
This framework lays the foundation for future research in continuous learning algorithms, sustainability, and multimodal adaptability, positioning Cognitive AI as a transformative model in emerging fields.
arXiv Detail & Related papers (2025-02-06T17:43:35Z) - In-Memory Learning: A Declarative Learning Framework for Large Language Models [56.62616975119192]
We propose a novel learning framework that allows agents to align with their environment without relying on human-labeled data.
This entire process transpires within the memory components and is implemented through natural language.
We demonstrate the effectiveness of our framework and provide insights into this problem.
arXiv Detail & Related papers (2024-03-05T08:25:11Z) - Leveraging generative artificial intelligence to simulate student learning behavior [13.171768256928509]
We explore the feasibility of using large language models (LLMs) to simulate student learning behaviors.
Unlike conventional machine learning based prediction, we leverage LLMs to instantiate virtual students with specific demographics.
Our objective is not merely to predict learning outcomes but to replicate learning behaviors and patterns of real students.
arXiv Detail & Related papers (2023-10-30T00:09:59Z) - CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible, computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z) - Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.