Using Think-Aloud Data to Understand Relations between Self-Regulation
Cycle Characteristics and Student Performance in Intelligent Tutoring Systems
- URL: http://arxiv.org/abs/2312.05675v1
- Date: Sat, 9 Dec 2023 20:36:58 GMT
- Title: Using Think-Aloud Data to Understand Relations between Self-Regulation
Cycle Characteristics and Student Performance in Intelligent Tutoring Systems
- Authors: Conrad Borchers, Jiayi Zhang, Ryan S. Baker, Vincent Aleven
- Abstract summary: The present study investigates SRL behaviors in relation to learners' moment-by-moment performance.
We demonstrate the feasibility of labeling SRL behaviors based on AI-generated think-aloud transcripts.
Students' actions during earlier, process-heavy stages of SRL cycles exhibited lower moment-by-moment correctness during problem-solving than actions during later SRL cycle stages.
- Score: 15.239133633467672
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous studies demonstrate the importance of self-regulation during
learning through problem-solving. Recent work in learning analytics has largely
examined students' use of self-regulated learning (SRL) in relation to overall
learning gains. Limited research has related SRL to in-the-moment performance
differences among learners. The present study investigates SRL behaviors in
relation to learners' moment-by-moment performance while working with
intelligent tutoring systems for stoichiometry chemistry. We demonstrate the
feasibility of labeling SRL behaviors based on AI-generated think-aloud
transcripts, identifying the presence or absence of four SRL categories
(processing information, planning, enacting, and realizing errors) in each
utterance. Using the SRL codes, we conducted regression analyses to examine how
the use of SRL, in terms of presence, frequency, cyclical characteristics, and
recency, relates to student performance on subsequent steps in multi-step
problems. A model considering students' SRL cycle characteristics outperformed
a model using only in-the-moment SRL assessment. In line with theoretical
predictions, students' actions during earlier, process-heavy stages of SRL
cycles exhibited lower moment-by-moment correctness during problem-solving than
actions during later SRL cycle stages. We discuss opportunities to re-design
the system to add SRL support during stages of processing, as well as paths
forward for using machine learning to accelerate research that depends on
assessing SRL from transcriptions of think-aloud data.
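To make the utterance-coding step concrete, here is a minimal, hypothetical sketch of labeling think-aloud utterances with the four SRL categories. The keyword lists and function names are illustrative placeholders, not the authors' coding scheme (the paper derives labels from AI-generated transcripts):

```python
# Hypothetical sketch: keyword-based presence/absence labeling of the four SRL
# categories per utterance. Keyword lists are illustrative placeholders only.
SRL_KEYWORDS = {
    "processing_information": ["the problem says", "so this means", "given"],
    "planning":               ["i need to", "first i will", "next step"],
    "enacting":               ["multiply", "divide", "enter", "balance"],
    "realizing_errors":       ["wait", "that's wrong", "mistake"],
}

def label_utterance(utterance: str) -> dict:
    """Return presence (True/False) of each SRL category in one utterance."""
    text = utterance.lower()
    return {cat: any(kw in text for kw in kws)
            for cat, kws in SRL_KEYWORDS.items()}

print(label_utterance("Wait, that's wrong. I need to balance the equation first."))
# {'processing_information': False, 'planning': True,
#  'enacting': True, 'realizing_errors': True}
```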
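Similarly, a sketch of the regression step: a logistic regression relating per-step correctness to SRL-derived features such as presence, frequency, and recency. The data are simulated and the column names are assumptions; the paper's exact model specification may differ:

```python
# Hypothetical sketch: logistic regression of moment-by-moment correctness on
# SRL features (presence, frequency, recency). Data are simulated for
# illustration; see the paper for the actual model specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # one row per problem-solving step
df = pd.DataFrame({
    "planning_present": rng.integers(0, 2, n),   # SRL category present at this step?
    "processing_freq":  rng.poisson(2.0, n),     # how often processing occurred so far
    "error_recency":    rng.integers(1, 10, n),  # steps since last realized error
})
# Simulate correctness with a mild positive effect of planning (illustrative only).
logits = -0.2 + 0.8 * df["planning_present"] - 0.3 * df["processing_freq"]
df["correct"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

model = smf.logit("correct ~ planning_present + processing_freq + error_recency",
                  data=df).fit(disp=False)
print(model.summary())
```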
Related papers
- Overcoming Slow Decision Frequencies in Continuous Control: Model-Based Sequence Reinforcement Learning for Model-Free Control [1.104960878651584]
We introduce Sequence Reinforcement Learning (SRL), an RL algorithm designed to produce a sequence of actions for a given input state.
SRL addresses the challenges of learning action sequences by employing both a model and an actor-critic architecture operating at different temporal scales.
We evaluate SRL on a suite of continuous control tasks, demonstrating that it achieves performance comparable to state-of-the-art algorithms.
arXiv Detail & Related papers (2024-10-11T16:54:07Z) - Reinforcement Learning for Online Testing of Autonomous Driving Systems: a Replication and Extension Study [15.949975158039452]
In a recent study, Reinforcement Learning has been shown to outperform alternative techniques for online testing of Deep Neural Network-enabled systems.
This work is a replication and extension of that empirical study.
Results show that our new RL agent is able to converge to an effective policy that outperforms random testing.
arXiv Detail & Related papers (2024-03-20T16:39:17Z) - Reinforcement Learning-assisted Evolutionary Algorithm: A Survey and
Research Opportunities [63.258517066104446]
Reinforcement learning integrated as a component of evolutionary algorithms has demonstrated superior performance in recent years.
We discuss the RL-EA integration method, the RL-assisted strategies adopted by RL-EA, and its applications according to the existing literature.
In the section on RL-EA applications, we also demonstrate the strong performance of RL-EA on several benchmarks and a range of public datasets.
arXiv Detail & Related papers (2023-08-25T15:06:05Z) - A User Study on Explainable Online Reinforcement Learning for Adaptive
Systems [0.802904964931021]
Online reinforcement learning (RL) is increasingly used for realizing adaptive systems in the presence of design-time uncertainty.
With deep RL gaining interest, the learned knowledge is no longer explicitly represented but is instead encoded in a neural network.
XRL-DINE provides visual insights into why certain decisions were made at important time points.
arXiv Detail & Related papers (2023-07-09T05:12:42Z) - POAR: Efficient Policy Optimization via Online Abstract State
Representation Learning [6.171331561029968]
State Representation Learning (SRL) is proposed to learn to encode task-relevant features from complex sensory data into low-dimensional states.
We introduce a new SRL prior called domain resemblance that leverages expert demonstrations to improve SRL interpretations.
We empirically verify POAR to efficiently handle tasks in high dimensions and facilitate training real-life robots directly from scratch.
arXiv Detail & Related papers (2021-09-17T16:52:03Z) - Towards Standardizing Reinforcement Learning Approaches for Stochastic
Production Scheduling [77.34726150561087]
Reinforcement learning can be used to solve scheduling problems.
Existing studies rely on (sometimes) complex simulations for which the code is unavailable.
There is a vast array of RL designs to choose from.
Standardization of model descriptions (both production setup and RL design) and of validation schemes is a prerequisite.
arXiv Detail & Related papers (2021-04-16T16:07:10Z) - Combining Pessimism with Optimism for Robust and Efficient Model-Based
Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that are not present during training time.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z) - Towards Continual Reinforcement Learning: A Review and Perspectives [69.48324517535549]
We aim to provide a literature review of different formulations and approaches to continual reinforcement learning (RL).
While still in its early days, the study of continual RL has the promise to develop better incremental reinforcement learners.
These include applications such as those in the fields of healthcare, education, logistics, and robotics.
arXiv Detail & Related papers (2020-12-25T02:35:27Z) - Deep Reinforcement Learning using Cyclical Learning Rates [62.19441737665902]
One of the most influential parameters in optimization procedures based on stochastic gradient descent (SGD) is the learning rate.
We investigate cyclical learning and propose a method for defining a general cyclical learning rate for various DRL problems; a minimal sketch of a triangular cyclical schedule appears after this list.
Our experiments show that utilizing cyclical learning achieves similar or even better results than highly tuned fixed learning rates.
arXiv Detail & Related papers (2020-07-31T10:06:02Z) - RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning [108.9599280270704]
We propose a benchmark called RL Unplugged to evaluate and compare offline RL methods.
RL Unplugged includes data from a diverse range of domains including games and simulated motor control problems.
We will release data for all our tasks and open-source all algorithms presented in this paper.
arXiv Detail & Related papers (2020-06-24T17:14:51Z)
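As a companion to the cyclical learning rate entry above, here is a minimal sketch of a triangular cyclical schedule (in the spirit of Smith's CLR); the exact schedule proposed in that paper for DRL may differ:

```python
# Minimal sketch of a triangular cyclical learning rate; the schedule actually
# proposed in the paper above may differ.
import math

def cyclical_lr(step: int, base_lr: float = 1e-4, max_lr: float = 1e-3,
                step_size: int = 2000) -> float:
    """Learning rate ramps linearly base_lr -> max_lr -> base_lr; one full
    cycle lasts 2 * step_size steps."""
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

for s in (0, 1000, 2000, 3000, 4000):
    print(s, round(cyclical_lr(s), 6))  # 1e-4, 5.5e-4, 1e-3, 5.5e-4, 1e-4
```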