The FLoRA Engine: Using Analytics to Measure and Facilitate Learners' own Regulation Activities
- URL: http://arxiv.org/abs/2412.09763v1
- Date: Thu, 12 Dec 2024 23:46:20 GMT
- Authors: Xinyu Li, Yizhou Fan, Tongguang Li, Mladen Rakovic, Shaveen Singh, Joep van der Graaf, Lyn Lim, Johanna Moore, Inge Molenaar, Maria Bannert, Dragan Gasevic
- Abstract summary: The FLoRA engine is developed to assist students, workers, and professionals in improving their self-regulated learning (SRL) skills. The engine tracks learners' SRL behaviours during a learning task and provides automated scaffolding to help learners effectively regulate their learning.
- Score: 6.043195170209631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The focus of education is increasingly set on learners' ability to regulate their own learning within technology-enhanced learning environments (TELs). Prior research has shown that self-regulated learning (SRL) leads to better learning performance. However, many learners struggle to self-regulate their learning productively, as they typically need to navigate a myriad of cognitive, metacognitive, and motivational processes that SRL demands. To address these challenges, the FLoRA engine is developed to assist students, workers, and professionals in improving their SRL skills and becoming productive lifelong learners. FLoRA incorporates several learning tools that are grounded in SRL theory and enhanced with learning analytics (LA), aimed at improving learners' mastery of different SRL skills. The engine tracks learners' SRL behaviours during a learning task and provides automated scaffolding to help learners effectively regulate their learning. The main contributions of FLoRA include (1) creating instrumentation tools that unobtrusively collect intensively sampled, fine-grained, and temporally ordered trace data about learners' learning actions, (2) building a trace parser that uses LA and related analytical techniques (e.g., process mining) to model and understand learners' SRL processes, and (3) providing a scaffolding module that presents analytics-based adaptive, personalised scaffolds based on students' learning progress. The architecture and implementation of the FLoRA engine are also discussed in this paper.
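To make the trace-parsing contribution more concrete, below is a minimal, hypothetical sketch of the kind of processing such an engine might perform: temporally ordered trace events are collected per learner and aggregated into a first-order process model (counts of consecutive action transitions), a basic building block of process mining. The event fields, action labels, and function names here are illustrative assumptions, not FLoRA's actual API.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class TraceEvent:
    """One fine-grained, timestamped learning action (hypothetical schema)."""
    timestamp: float   # seconds since task start
    learner_id: str
    action: str        # e.g. "PLAN", "READ", "ANNOTATE"

def transition_counts(events):
    """Build a first-order process model: count consecutive
    action pairs within each learner's temporally ordered trace."""
    by_learner = {}
    for e in sorted(events, key=lambda e: (e.learner_id, e.timestamp)):
        by_learner.setdefault(e.learner_id, []).append(e.action)
    counts = Counter()
    for actions in by_learner.values():
        counts.update(zip(actions, actions[1:]))  # adjacent pairs
    return counts

# Illustrative trace data for two learners
events = [
    TraceEvent(0.0, "s1", "PLAN"),
    TraceEvent(5.2, "s1", "READ"),
    TraceEvent(9.8, "s1", "ANNOTATE"),
    TraceEvent(1.1, "s2", "READ"),
    TraceEvent(4.0, "s2", "READ"),
]
print(transition_counts(events))
```

In a full system, transition models like this could feed the scaffolding module, e.g. triggering a metacognitive prompt when a learner's trace shows repeated reading without planning or annotation.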
Related papers
- ToolRL: Reward is All Tool Learning Needs [54.16305891389931]
Large Language Models (LLMs) often undergo supervised fine-tuning (SFT) to acquire tool use capabilities.
Recent advancements in reinforcement learning (RL) have demonstrated promising reasoning and generalization abilities.
We present the first comprehensive study on reward design for tool selection and application tasks within the RL paradigm.
arXiv Detail & Related papers (2025-04-16T21:45:32Z)
- R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning [87.30285670315334]
R1-Searcher is a novel two-stage outcome-based RL approach designed to enhance the search capabilities of Large Language Models.
Our framework relies exclusively on RL, without requiring process rewards or distillation for a cold start.
Our experiments demonstrate that our method significantly outperforms previous strong RAG methods, even when compared to the closed-source GPT-4o-mini.
arXiv Detail & Related papers (2025-03-07T17:14:44Z)
- LLM-powered Multi-agent Framework for Goal-oriented Learning in Intelligent Tutoring System [54.71619734800526]
GenMentor is a multi-agent framework designed to deliver goal-oriented, personalized learning within ITS.
It maps learners' goals to required skills using a fine-tuned LLM trained on a custom goal-to-skill dataset.
GenMentor tailors learning content with an exploration-drafting-integration mechanism to align with individual learner needs.
arXiv Detail & Related papers (2025-01-27T03:29:44Z)
- Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching [67.11497198002165]
Large language models (LLMs) often struggle to provide up-to-date information due to their one-time training.
Motivated by the remarkable success of the Feynman Technique in efficient human learning, we introduce Self-Tuning.
arXiv Detail & Related papers (2024-06-10T14:42:20Z)
- Tracking Control for a Spherical Pendulum via Curriculum Reinforcement Learning [27.73555826776087]
Reinforcement Learning (RL) allows learning non-trivial robot control laws purely from data.
In this paper, we pair a recent algorithm for automatically building curricula with RL on massively parallelized simulations.
We demonstrate the potential of curriculum RL to jointly learn state estimation and control for non-linear tracking tasks.
arXiv Detail & Related papers (2023-09-25T12:48:47Z)
- A User Study on Explainable Online Reinforcement Learning for Adaptive Systems [0.802904964931021]
Online reinforcement learning (RL) is increasingly used for realizing adaptive systems in the presence of design time uncertainty.
With deep RL gaining interest, the learned knowledge is no longer explicitly represented but is instead encoded in a neural network.
XRL-DINE provides visual insights into why certain decisions were made at important time points.
arXiv Detail & Related papers (2023-07-09T05:12:42Z)
- Visualizing Self-Regulated Learner Profiles in Dashboards: Design Insights from Teachers [9.227158301570787]
We design and implement FlippED, a dashboard for monitoring students' self-regulated learning (SRL) behavior.
We evaluate the usability and actionability of the tool in semi-structured interviews with ten university teachers.
arXiv Detail & Related papers (2023-05-26T12:03:11Z)
- Designing Theory-Driven Analytics-Enhanced Self-Regulated Learning Applications [0.0]
There is an increased interest in the application of learning analytics (LA) to promote self-regulated learning (SRL).
This chapter seeks to explore theoretical underpinnings of the design of LA-enhanced SRL applications.
arXiv Detail & Related papers (2023-03-22T08:52:54Z)
- Ensemble Reinforcement Learning: A Survey [43.17635633600716]
Reinforcement Learning (RL) has emerged as a highly effective technique for addressing various scientific and applied problems.
In response, ensemble reinforcement learning (ERL), a promising approach that combines the benefits of both RL and ensemble learning (EL), has gained widespread popularity.
ERL leverages multiple models or training algorithms to comprehensively explore the problem space and possesses strong generalization capabilities.
arXiv Detail & Related papers (2023-03-05T09:26:44Z)
- Learning to Optimize for Reinforcement Learning [58.01132862590378]
Reinforcement learning (RL) is essentially different from supervised learning, and in practice learned optimizers do not work well even in simple RL tasks.
The agent-gradient distribution is non-independent and identically distributed, leading to inefficient meta-training.
We show that, although only trained in toy tasks, our learned optimizer can generalize to unseen complex tasks in Brax.
arXiv Detail & Related papers (2023-02-03T00:11:02Z) - RvS: What is Essential for Offline RL via Supervised Learning? [77.91045677562802]
Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL.
In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward network is competitive.
These results also probe the limits of existing RvS methods, which are comparatively weak on random data.
arXiv Detail & Related papers (2021-12-20T18:55:16Z) - MOLAM: A Mobile Multimodal Learning Analytics Conceptual Framework to
Support Student Self-Regulated Learning [0.0]
This chapter introduces a Mobile Multimodal Learning Analytics approach (MOLAM).
I argue that the development of student Self-Regulated Learning would benefit from the adoption of this approach.
arXiv Detail & Related papers (2020-12-18T18:55:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.