Let Me Try Again: Examining Replay Behavior by Tracing Students' Latent Problem-Solving Pathways
- URL: http://arxiv.org/abs/2601.11586v1
- Date: Sat, 03 Jan 2026 00:17:03 GMT
- Title: Let Me Try Again: Examining Replay Behavior by Tracing Students' Latent Problem-Solving Pathways
- Authors: Shan Zhang, Siddhartha Pradhan, Ji-Eun Lee, Ashish Gurung, Anthony F. Botelho,
- Abstract summary: Students' problem-solving pathways in game-based learning environments reflect conceptual understanding, procedural knowledge, and flexibility. Replay behaviors, in particular, may indicate productive struggle or broader exploration, which in turn foster deeper learning. This study addresses these gaps using Markov Chains and Hidden Markov Models on log data from 777 seventh graders playing the game-based learning platform From Here to There!
- Score: 4.802354861089266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prior research has shown that students' problem-solving pathways in game-based learning environments reflect their conceptual understanding, procedural knowledge, and flexibility. Replay behaviors, in particular, may indicate productive struggle or broader exploration, which in turn foster deeper learning. However, little is known about how these pathways unfold sequentially across problems or how the timing of replays and other problem-solving strategies relates to proximal and distal learning outcomes. This study addresses these gaps using Markov Chains and Hidden Markov Models (HMMs) on log data from 777 seventh graders playing the game-based learning platform From Here to There!. Results show that within problem sequences, students often persisted in states or engaged in immediate replay after successful completions, while across problems, strong self-transitions indicated stable strategic pathways. Four latent states emerged from the HMMs: Incomplete-dominant, Optimal-ending, Replay, and Mixed. Regression analyses revealed that engagement in replay-dominant and optimal-ending states predicted higher conceptual knowledge, flexibility, and performance compared with the Incomplete-dominant state. Immediate replay consistently supported learning outcomes, whereas delayed replay was weakly or negatively associated with them relative to Non-Replay. These findings suggest that replay in digital learning is not uniformly beneficial but depends on timing, with immediate replay supporting flexibility and more productive exploration.
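The within- and across-problem transition analysis described in the abstract can be sketched as a first-order Markov chain estimated from logged state sequences. The state labels and toy sequences below are illustrative assumptions, not the paper's actual coding scheme or data:

```python
# Hypothetical sketch: estimating a first-order Markov chain over
# problem-solving states from student log sequences. The state names
# and the two toy "student" sequences are illustrative only.
STATES = ["Optimal", "Suboptimal", "Incomplete", "Replay"]

def transition_matrix(sequences, states=STATES):
    """Count state-to-state transitions and row-normalize to probabilities."""
    idx = {s: i for i, s in enumerate(states)}
    counts = [[0.0] * len(states) for _ in states]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[idx[a]][idx[b]] += 1.0
    for row in counts:
        total = sum(row)
        if total:
            for j in range(len(row)):
                row[j] /= total
    return counts

logs = [
    ["Incomplete", "Suboptimal", "Optimal", "Replay", "Optimal"],
    ["Suboptimal", "Suboptimal", "Optimal", "Optimal"],
]
P = transition_matrix(logs)
```

Large diagonal entries of `P` would correspond to the "strong self-transitions" the abstract reports; fitting an HMM on top of such sequences would additionally recover latent strategy states.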
Related papers
- Understanding Gaming the System by Analyzing Self-Regulated Learning in Think-Aloud Protocols [8.578186551478067]
This study explores whether students are cognitively disengaged when gaming the system or whether they instead engage in different self-regulated learning strategies, a question that has remained largely unanswered. We found that gaming does not simply reflect a lack of cognitive effort; during gaming, students often produced longer utterances. With this understanding, future work can address gaming and its negative impacts by designing systems that target maladaptive self-regulation to promote better learning.
arXiv Detail & Related papers (2026-01-08T01:45:56Z) - Memory-enhanced Retrieval Augmentation for Long Video Understanding [91.7163732531159]
We introduce a novel memory-enhanced RAG-based approach called MemVid. Our approach operates in four basic steps: 1) memorizing holistic video information, 2) reasoning about the task's information needs based on memory, 3) retrieving critical moments based on the information needs, and 4) focusing on the retrieved moments to produce the final answer. MemVid demonstrates superior efficiency and effectiveness compared to both LVLMs and RAG methods.
arXiv Detail & Related papers (2025-03-12T08:23:32Z) - Layerwise Proximal Replay: A Proximal Point Method for Online Continual Learning [22.00843101957619]
In online continual learning, a neural network incrementally learns from a non-i.i.d. data stream.
Our work demonstrates a limitation of this approach: neural networks trained with experience replay tend to have unstable optimization trajectories.
We present Layerwise Proximal Replay (LPR), which balances learning from new and replay data while only allowing for gradual changes in the hidden activation of past data.
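The idea of allowing only gradual change can be illustrated with a generic proximal-point style update, where the new-data gradient step is regularized toward the previous parameters. Note this plain quadratic penalty on the parameters is a simplification for illustration, not LPR's actual layerwise treatment of hidden activations:

```python
# Hedged sketch of a proximal-point style update: take a gradient step
# on the new data while pulling parameters back toward their previous
# values. `lr` and `lam` are illustrative hyperparameters.
def proximal_step(w, grad, w_prev, lr=0.1, lam=1.0):
    """One update of w minimizing loss(w) + (lam/2) * ||w - w_prev||^2."""
    return [wi - lr * (gi + lam * (wi - pi))
            for wi, gi, pi in zip(w, grad, w_prev)]

w = [1.0, 2.0]
# With w_prev == w, the proximal penalty contributes nothing and the
# step reduces to plain gradient descent.
w_next = proximal_step(w, grad=[0.5, -0.5], w_prev=[1.0, 2.0])
```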
arXiv Detail & Related papers (2024-02-14T19:34:28Z) - Integrating Curricula with Replays: Its Effects on Continual Learning [3.2489082010225494]
Humans engage in learning and reviewing processes with curricula when acquiring new skills or knowledge.
The goal is to emulate the human learning process, thereby improving knowledge retention and facilitating learning transfer.
Existing replay methods in continual learning agents involve the random selection and ordering of data from previous tasks.
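The contrast between random replay and a curriculum-ordered variant can be sketched as two sampling policies over the same buffer. The example names and difficulty scores below are hypothetical:

```python
# Illustrative contrast: random replay vs. a curriculum that replays
# stored examples easiest-first. Buffer entries are (name, difficulty)
# pairs with made-up difficulty scores in [0, 1].
import random

buffer = list(zip(["ex0", "ex1", "ex2", "ex3", "ex4"],
                  [0.9, 0.1, 0.5, 0.3, 0.7]))

def random_replay(buf, k, seed=0):
    """Baseline: replay k uniformly sampled examples."""
    rng = random.Random(seed)
    return [name for name, _ in rng.sample(buf, k)]

def curriculum_replay(buf, k):
    """Curriculum variant: replay the k easiest examples first."""
    return [name for name, _ in sorted(buf, key=lambda t: t[1])[:k]]
```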
arXiv Detail & Related papers (2023-07-08T14:14:55Z) - PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning [16.67238259139417]
Existing replay-based methods effectively alleviate this issue by saving and replaying part of old data in a proxy-based or contrastive-based replay manner.
We propose a novel replay-based method called proxy-based contrastive replay (PCR).
arXiv Detail & Related papers (2023-04-10T06:35:19Z) - Adiabatic replay for continual learning [138.7878582237908]
Generative replay spends an increasing amount of time just re-learning what is already known.
We propose a replay-based CL strategy that we term adiabatic replay (AR).
We verify experimentally that AR is superior to state-of-the-art deep generative replay using VAEs.
arXiv Detail & Related papers (2023-03-23T10:18:06Z) - Practical Recommendations for Replay-based Continual Learning Methods [18.559132470835937]
Continual Learning requires the model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge.
Replay approaches have empirically proved to be the most effective ones.
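A common building block in the replay approaches surveyed here is a fixed-capacity memory filled by reservoir sampling, which keeps a uniform sample of the stream seen so far. This is a generic sketch, not a method from the cited paper:

```python
# Minimal replay memory using reservoir sampling: every item in the
# stream ends up stored with equal probability capacity/seen.
import random

class ReservoirBuffer:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0          # total items observed in the stream
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)          # buffer not yet full
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:           # replace with prob capacity/seen
                self.data[j] = item
```

In a continual-learning loop, each training batch would then mix fresh stream items with items drawn from `buffer.data`.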
arXiv Detail & Related papers (2022-03-19T12:44:44Z) - An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL) is a major challenge of machine learning (ML) and describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z) - Reducing Representation Drift in Online Continual Learning [87.71558506591937]
We study the online continual learning paradigm, where agents must learn from a changing distribution with constrained memory and compute.
In this work we instead focus on the change in representations of previously observed data due to the introduction of previously unobserved class samples in the incoming data stream.
arXiv Detail & Related papers (2021-04-11T15:19:30Z) - Disturbing Reinforcement Learning Agents with Corrupted Rewards [62.997667081978825]
We analyze the effects of different attack strategies based on reward perturbations on reinforcement learning algorithms.
We show that smoothly crafted adversarial rewards are able to mislead the learner, and that using low exploration probability values makes the learned policy more robust to corrupted rewards.
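A reward-perturbation attack of this general kind can be sketched as a wrapper that occasionally adds bounded noise to the true reward signal. The perturbation model below (probability `eps`, uniform noise of magnitude `scale`) is a generic assumption, not the paper's exact attack:

```python
# Illustrative reward-corruption sketch: with probability eps, each true
# reward receives bounded uniform noise. eps and scale are hypothetical
# attack parameters, not taken from the cited paper.
import random

def corrupt_rewards(rewards, eps=0.1, scale=1.0, seed=0):
    rng = random.Random(seed)
    return [r + (rng.uniform(-scale, scale) if rng.random() < eps else 0.0)
            for r in rewards]
```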
arXiv Detail & Related papers (2021-02-12T15:53:48Z) - Revisiting Fundamentals of Experience Replay [91.24213515992595]
We present a systematic and extensive analysis of experience replay in Q-learning methods.
We focus on two fundamental properties: the replay capacity and the ratio of learning updates to experience collected.
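The two knobs named above can be made concrete with a toy training loop: replay capacity bounds the buffer, and the replay ratio sets how many learning updates happen per collected transition. All values are illustrative:

```python
# Toy loop exposing the two properties studied: replay capacity
# (buffer size) and replay ratio (updates per collected transition).
def run(steps, capacity, replay_ratio):
    buffer, updates = [], 0
    for t in range(steps):
        buffer.append(t)            # collect one transition
        if len(buffer) > capacity:  # oldest-first eviction at capacity
            buffer.pop(0)
        updates += replay_ratio     # learning updates per transition
    return len(buffer), updates
```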
arXiv Detail & Related papers (2020-07-13T21:22:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.