Designing Theory-Driven Analytics-Enhanced Self-Regulated Learning
Applications
- URL: http://arxiv.org/abs/2303.12388v1
- Date: Wed, 22 Mar 2023 08:52:54 GMT
- Title: Designing Theory-Driven Analytics-Enhanced Self-Regulated Learning
Applications
- Authors: Mohamed Amine Chatti, Volkan Yücepur, Arham Muslim, Mouadh Guesmi,
Shoeb Joarder
- Abstract summary: There is increased interest in the application of learning analytics (LA) to promote self-regulated learning (SRL).
This chapter explores the theoretical underpinnings of the design of LA-enhanced SRL applications.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: There is an increased interest in the application of learning analytics (LA)
to promote self-regulated learning (SRL). A variety of LA dashboards and
indicators were proposed to support different crucial SRL processes, such as
planning, awareness, self-reflection, self-monitoring, and feedback. However,
the design of these dashboards and indicators often proceeds without reference to
theories in learning science, human-computer interaction (HCI), and information
visualization (InfoVis). Moreover, there is a lack of theoretically sound
frameworks to guide the systematic design and development of LA dashboards and
indicators to scaffold SRL. This chapter seeks to explore theoretical
underpinnings of the design of LA-enhanced SRL applications, drawing from the
fields of learning science, HCI, and InfoVis. We first present the
Student-Centered Learning Analytics-enhanced Self-Regulated Learning (SCLA-SRL)
methodology for building theory-driven LA-enhanced SRL applications for and
with learners. We then put this methodology into practice by designing and
developing LA indicators to support novice programmers' SRL in a higher
education context.
Related papers
- Vintix: Action Model via In-Context Reinforcement Learning [72.65703565352769]
We present the first steps toward scaling ICRL by introducing a fixed, cross-domain model capable of learning behaviors through in-context reinforcement learning.
Our results demonstrate that Algorithm Distillation, a framework designed to facilitate ICRL, offers a compelling and competitive alternative to expert distillation to construct versatile action models.
arXiv Detail & Related papers (2025-01-31T18:57:08Z)
- The FLoRA Engine: Using Analytics to Measure and Facilitate Learners' own Regulation Activities [6.043195170209631]
The FLoRA engine is developed to assist students, workers, and professionals in improving their self-regulated learning (SRL) skills.
The engine tracks learners' SRL behaviours during a learning task and provides automated scaffolding to help learners effectively regulate their learning.
arXiv Detail & Related papers (2024-12-12T23:46:20Z)
- Enhancing LLM-Based Feedback: Insights from Intelligent Tutoring Systems and the Learning Sciences [0.0]
This work advocates careful and caring AIED research by reviewing previous research on feedback generation in ITSs.
The main contribution of this paper is a call for applying more cautious, theoretically grounded methods to feedback generation in the era of generative AI.
arXiv Detail & Related papers (2024-05-07T20:09:18Z)
- Survey on Large Language Model-Enhanced Reinforcement Learning: Concept, Taxonomy, and Methods [18.771658054884693]
Large language models (LLMs) emerge as a promising avenue to augment reinforcement learning (RL) in aspects such as multi-task learning, sample efficiency, and high-level task planning.
We propose a structured taxonomy to systematically categorize LLMs' functionalities in RL, including four roles: information processor, reward designer, decision-maker, and generator.
arXiv Detail & Related papers (2024-03-30T08:28:08Z)
- How Can LLM Guide RL? A Value-Based Approach [68.55316627400683]
Reinforcement learning (RL) has become the de facto standard for sequential decision-making problems, improving future policies through feedback.
Recent developments in large language models (LLMs) have showcased impressive capabilities in language understanding and generation, yet they fall short in exploration and self-improvement capabilities.
We develop an algorithm named LINVIT that incorporates LLM guidance as a regularization factor in value-based RL, leading to significant reductions in the amount of data needed for learning.
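The entry above describes incorporating LLM guidance as a regularization factor in value-based RL. One standard way to illustrate that general idea is KL-regularized value iteration, where a fixed reference policy (standing in here for LLM suggestions) softens the Bellman backup. This is a hedged sketch under simplified assumptions — the chain MDP, the `prior` policy, and the temperature `tau` are all invented for illustration — not the LINVIT algorithm itself.

```python
import numpy as np

# Toy chain MDP: states 0..4, actions 0 (left) / 1 (right); reward for
# being at the rightmost state.
n_states, n_actions, gamma, tau = 5, 2, 0.9, 0.5

def step(s, a):
    ns = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return ns, 1.0 if ns == n_states - 1 else 0.0

# Reference policy standing in for LLM guidance: prefers moving right.
prior = np.tile([0.2, 0.8], (n_states, 1))

# KL-regularized value iteration: the backup is a soft maximum weighted
# by the reference policy, V(s) = tau * log sum_a prior(a|s) * exp(Q(s,a)/tau).
V = np.zeros(n_states)
for _ in range(200):
    Q = np.zeros((n_states, n_actions))
    for s in range(n_states):
        for a in range(n_actions):
            ns, r = step(s, a)
            Q[s, a] = r + gamma * V[ns]
    V = tau * np.log((prior * np.exp(Q / tau)).sum(axis=1))

# The resulting policy interpolates between the guidance prior and
# greedy value-maximizing behavior.
policy = prior * np.exp(Q / tau)
policy /= policy.sum(axis=1, keepdims=True)
```

As `tau` shrinks the backup approaches the ordinary max and the prior's influence fades; as it grows the policy collapses onto the reference policy, which is the sense in which the guidance acts as a regularizer.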
arXiv Detail & Related papers (2024-02-25T20:07:13Z)
- Using Think-Aloud Data to Understand Relations between Self-Regulation Cycle Characteristics and Student Performance in Intelligent Tutoring Systems [15.239133633467672]
The present study investigates SRL behaviors in relationship to learners' moment-by-moment performance.
We demonstrate the feasibility of labeling SRL behaviors based on AI-generated think-aloud transcripts.
Students' actions during earlier, process-heavy stages of SRL cycles exhibited lower moment-by-moment correctness during problem-solving than actions during later SRL cycle stages.
arXiv Detail & Related papers (2023-12-09T20:36:58Z)
- Reinforcement Learning-assisted Evolutionary Algorithm: A Survey and Research Opportunities [63.258517066104446]
Reinforcement learning integrated as a component into evolutionary algorithms has demonstrated superior performance in recent years.
We discuss the RL-EA integration method, the RL-assisted strategy adopted by RL-EA, and its applications according to the existing literature.
In the applications of RL-EA section, we also demonstrate the excellent performance of RL-EA on several benchmarks and a range of public datasets.
arXiv Detail & Related papers (2023-08-25T15:06:05Z)
- Provable Reward-Agnostic Preference-Based Reinforcement Learning [61.39541986848391]
Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories.
We propose a theoretical reward-agnostic PbRL framework where exploratory trajectories that enable accurate learning of hidden reward functions are acquired.
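The pair-wise preference feedback described above is commonly modeled with a Bradley-Terry reward model: fit a reward function so that preferred trajectories score higher. The following is a generic sketch of that modeling step — the linear features and names such as `w_true` are assumptions for illustration — not the paper's reward-agnostic exploration scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each trajectory is summarized by a feature vector, and the
# hidden reward is linear in those features.
n_pairs, dim = 200, 4
w_true = np.array([1.0, -2.0, 0.5, 0.0])
traj_a = rng.normal(size=(n_pairs, dim))
traj_b = rng.normal(size=(n_pairs, dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Pairwise preferences drawn from the Bradley-Terry model:
# P(a preferred over b) = sigmoid(r(a) - r(b)).
prefs = rng.random(n_pairs) < sigmoid(traj_a @ w_true - traj_b @ w_true)

# Fit a linear reward by gradient ascent on the preference log-likelihood;
# the gradient per pair is (label - p) * (features_a - features_b).
w = np.zeros(dim)
lr = 0.5
for _ in range(500):
    p = sigmoid(traj_a @ w - traj_b @ w)
    w += lr * ((prefs - p)[:, None] * (traj_a - traj_b)).mean(axis=0)

# Check that the learned reward ranks held-out pairs like the hidden one.
test_a = rng.normal(size=(1000, dim))
test_b = rng.normal(size=(1000, dim))
agree = np.mean((test_a @ w > test_b @ w) == (test_a @ w_true > test_b @ w_true))
```

Note that only comparisons, never reward values, are observed — which is why acquiring informative trajectory pairs (the exploration problem the paper addresses) matters so much in this setting.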
arXiv Detail & Related papers (2023-05-29T15:00:09Z)
- Designing Reinforcement Learning Algorithms for Digital Interventions: Pre-implementation Guidelines [24.283342018185028]
Online reinforcement learning algorithms are increasingly used to personalize digital interventions in the fields of mobile health and online education.
Common challenges in designing and testing an RL algorithm in these settings include ensuring the RL algorithm can learn and run stably under real-time constraints.
We extend the PCS (Predictability, Computability, Stability) framework, a data science framework that incorporates best practices from machine learning and statistics in supervised learning.
arXiv Detail & Related papers (2022-06-08T15:05:28Z)
- INFOrmation Prioritization through EmPOWERment in Visual Model-Based RL [90.06845886194235]
We propose a modified objective for model-based reinforcement learning (RL).
We integrate a term inspired by variational empowerment into a state-space model based on mutual information.
We evaluate the approach on a suite of vision-based robot control tasks with natural video backgrounds.
arXiv Detail & Related papers (2022-04-18T23:09:23Z)
- Towards Continual Reinforcement Learning: A Review and Perspectives [69.48324517535549]
We aim to provide a literature review of different formulations of and approaches to continual reinforcement learning (RL).
While still in its early days, the study of continual RL holds the promise of developing better incremental reinforcement learners, with applications in fields such as healthcare, education, logistics, and robotics.
arXiv Detail & Related papers (2020-12-25T02:35:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.