The FLoRA Engine: Using Analytics to Measure and Facilitate Learners' own Regulation Activities
- URL: http://arxiv.org/abs/2412.09763v1
- Date: Thu, 12 Dec 2024 23:46:20 GMT
- Title: The FLoRA Engine: Using Analytics to Measure and Facilitate Learners' own Regulation Activities
- Authors: Xinyu Li, Yizhou Fan, Tongguang Li, Mladen Rakovic, Shaveen Singh, Joep van der Graaf, Lyn Lim, Johanna Moore, Inge Molenaar, Maria Bannert, Dragan Gasevic
- Abstract summary: The FLoRA engine is developed to assist students, workers, and professionals in improving their self-regulated learning (SRL) skills.
The engine tracks learners' SRL behaviours during a learning task and provides automated scaffolding to help learners effectively regulate their learning.
- Score: 6.043195170209631
- License:
- Abstract: The focus of education is increasingly set on learners' ability to regulate their own learning within technology-enhanced learning environments (TELs). Prior research has shown that self-regulated learning (SRL) leads to better learning performance. However, many learners struggle to self-regulate their learning productively, as they typically need to navigate a myriad of cognitive, metacognitive, and motivational processes that SRL demands. To address these challenges, the FLoRA engine is developed to assist students, workers, and professionals in improving their SRL skills and becoming productive lifelong learners. FLoRA incorporates several learning tools that are grounded in SRL theory and enhanced with learning analytics (LA), aimed at improving learners' mastery of different SRL skills. The engine tracks learners' SRL behaviours during a learning task and provides automated scaffolding to help learners effectively regulate their learning. The main contributions of FLoRA include (1) creating instrumentation tools that unobtrusively collect intensively sampled, fine-grained, and temporally ordered trace data about learners' learning actions, (2) building a trace parser that uses LA and related analytical techniques (e.g., process mining) to model and understand learners' SRL processes, and (3) providing a scaffolding module that presents analytics-based, adaptive, personalised scaffolds based on students' learning progress. The architecture and implementation of the FLoRA engine are also discussed in this paper.
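The three contributions suggest a simple pipeline: trace collection, trace parsing, and adaptive scaffolding. The following is a minimal, hypothetical Python sketch of that pipeline; it is not the actual FLoRA API, and all names, the action vocabulary, and the action-to-process mapping are invented for illustration.

```python
# Minimal sketch of the pipeline the abstract describes (trace collection ->
# trace parsing -> adaptive scaffolding). All names are illustrative
# assumptions, not the real FLoRA interfaces.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class TraceEvent:
    """One fine-grained, timestamped learning action (e.g., a click)."""
    learner_id: str
    action: str  # e.g., "plan_edit", "open_reading", "highlight"
    timestamp: datetime


# Toy stand-in for the trace parser's coding scheme; the real engine applies
# LA techniques such as process mining rather than a lookup table.
ACTION_TO_SRL_PROCESS = {
    "plan_edit": "planning",
    "open_reading": "cognition",
    "highlight": "cognition",
    "self_check": "monitoring",
}


def parse_traces(events: list[TraceEvent]) -> list[str]:
    """Order events in time and code each action as an SRL process."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    return [ACTION_TO_SRL_PROCESS.get(e.action, "other") for e in ordered]


def choose_scaffold(srl_sequence: list[str]) -> str | None:
    """Toy adaptivity rule: prompt planning if the learner never planned."""
    if "planning" not in srl_sequence:
        return "Consider writing a short plan before you continue reading."
    return None
```

In the real engine, the scaffolding module adapts to much richer models of learning progress than this single rule; the sketch only shows how coded trace data can drive a scaffolding decision.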
Related papers
- LLM-powered Multi-agent Framework for Goal-oriented Learning in Intelligent Tutoring System [54.71619734800526]
GenMentor is a multi-agent framework designed to deliver goal-oriented, personalized learning within ITS.
It maps learners' goals to required skills using a fine-tuned LLM trained on a custom goal-to-skill dataset.
GenMentor tailors learning content with an exploration-drafting-integration mechanism to align with individual learner needs.
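As a rough illustration of the goal-to-skill mapping step described above, the sketch below sends a single prompt to a generic chat-completion client. The `client.complete` interface, prompt wording, and output parsing are all assumptions; the paper instead uses an LLM fine-tuned on a custom goal-to-skill dataset.

```python
def map_goal_to_skills(client, goal: str) -> list[str]:
    """Map a learning goal to a list of required skills (illustrative only)."""
    prompt = (
        "List the concrete skills a learner must acquire to reach this "
        f"goal, one per line:\n{goal}"
    )
    reply = client.complete(prompt)  # hypothetical chat-completion client
    return [line.strip("- ").strip() for line in reply.splitlines()
            if line.strip()]
```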
arXiv Detail & Related papers (2025-01-27T03:29:44Z)
- Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching [67.11497198002165]
Large language models (LLMs) often struggle to provide up-to-date information.
Existing approaches typically involve continued pre-training on new documents.
Motivated by the success of the Feynman Technique in efficient human learning, we introduce Self-Tuning.
arXiv Detail & Related papers (2024-06-10T14:42:20Z)
- LLMs in the Imaginarium: Tool Learning through Simulated Trial and Error [54.954211216847135]
Existing large language models (LLMs) only reach a tool-use correctness rate in the range of 30% to 60%.
We propose a biologically inspired method for tool-augmented LLMs, simulated trial and error (STE).
STE orchestrates three key mechanisms for successful tool use behaviors in biological systems: trial and error, imagination, and memory.
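A hedged sketch of the loop this snippet describes: the model imagines plausible queries for a tool, tries the tool, and records each outcome in a memory that conditions later trials. The `llm` and `tool` interfaces are invented for illustration, not the paper's implementation.

```python
def simulated_trial_and_error(llm, tool, num_trials: int = 10) -> list[dict]:
    """Explore a tool via imagined queries, trials, and a growing memory."""
    memory: list[dict] = []  # long-term memory of past trials
    for _ in range(num_trials):
        # Imagination: synthesize a query the tool might answer, conditioned
        # on what earlier trials revealed.
        query = llm.imagine_query(tool.description, memory)
        call = llm.propose_call(query, tool.description, memory)
        try:
            result = tool.execute(call)  # trial
            memory.append({"query": query, "call": call,
                           "result": result, "ok": True})
        except Exception as err:  # error: remember failures as well
            memory.append({"query": query, "call": call,
                           "result": str(err), "ok": False})
    return memory
```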
arXiv Detail & Related papers (2024-03-07T18:50:51Z)
- Tracking Control for a Spherical Pendulum via Curriculum Reinforcement Learning [27.73555826776087]
Reinforcement Learning (RL) allows learning non-trivial robot control laws purely from data.
In this paper, we pair a recent algorithm for automatically building curricula with RL on massively parallelized simulations.
We demonstrate the potential of curriculum RL to jointly learn state estimation and control for non-linear tracking tasks.
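The following sketch shows the general shape of such a curriculum loop under stated assumptions: difficulty grows in stages, and each stage trains on massively parallel environments. `make_envs`, the difficulty parameterization, and `update_policy` are placeholders, not the authors' algorithm.

```python
def curriculum_train(policy, make_envs, update_policy,
                     n_stages: int = 10, n_envs: int = 4096,
                     steps_per_stage: int = 10_000):
    """Train with a staged curriculum over parallel simulations (sketch)."""
    for stage in range(n_stages):
        # Difficulty could parameterize, e.g., target speed or amplitude.
        difficulty = (stage + 1) / n_stages
        envs = make_envs(n_envs, difficulty)  # massively parallel simulation
        obs = envs.reset()
        for _ in range(steps_per_stage):
            actions = policy.act(obs)
            obs, rewards, dones, _ = envs.step(actions)
            update_policy(policy, obs, actions, rewards, dones)
    return policy
```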
arXiv Detail & Related papers (2023-09-25T12:48:47Z)
- A User Study on Explainable Online Reinforcement Learning for Adaptive Systems [0.802904964931021]
Online reinforcement learning (RL) is increasingly used for realizing adaptive systems in the presence of design-time uncertainty.
With deep RL gaining interest, the learned knowledge is no longer explicitly represented but is instead encoded in a neural network.
XRL-DINE provides visual insights into why certain decisions were made at important time points.
arXiv Detail & Related papers (2023-07-09T05:12:42Z)
- Visualizing Self-Regulated Learner Profiles in Dashboards: Design Insights from Teachers [9.227158301570787]
We design and implement FlippED, a dashboard for monitoring students' self-regulated learning (SRL) behavior.
We evaluate the usability and actionability of the tool in semi-structured interviews with ten university teachers.
arXiv Detail & Related papers (2023-05-26T12:03:11Z)
- Designing Theory-Driven Analytics-Enhanced Self-Regulated Learning Applications [0.0]
There is an increased interest in the application of learning analytics (LA) to promote self-regulated learning (SRL).
This chapter seeks to explore theoretical underpinnings of the design of LA-enhanced SRL applications.
arXiv Detail & Related papers (2023-03-22T08:52:54Z)
- Learning to Optimize for Reinforcement Learning [58.01132862590378]
Reinforcement learning (RL) is essentially different from supervised learning, and in practice, learned optimizers designed for supervised learning do not work well even in simple RL tasks.
The agent-gradient distribution is non-independent and non-identically distributed, leading to inefficient meta-training.
We show that, although trained only on toy tasks, our learned optimizer can generalize to unseen complex tasks in Brax.
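To make the idea concrete, here is a minimal PyTorch sketch of a learned optimizer: a small network maps per-parameter gradient features to updates, taking the place of a hand-designed rule such as Adam. The feature choice, scaling, and architecture are assumptions; in this line of work the optimizer network itself is meta-trained across tasks.

```python
import torch
import torch.nn as nn


class LearnedOptimizer(nn.Module):
    """A tiny update-rule network; the actual method meta-trains this net."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    @torch.no_grad()
    def step(self, params: list[torch.Tensor]) -> None:
        """Apply the learned update rule to each parameter in place."""
        for p in params:
            if p.grad is None:
                continue
            g = p.grad.reshape(-1, 1)
            # Simple hand-picked features; real systems use richer ones.
            feats = torch.cat([g, g.abs().log1p()], dim=1)
            update = self.net(feats).reshape(p.shape)
            p.add_(-0.01 * update)  # small scale for stability
```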
arXiv Detail & Related papers (2023-02-03T00:11:02Z)
- Implicit Offline Reinforcement Learning via Supervised Learning [83.8241505499762]
Offline Reinforcement Learning (RL) via Supervised Learning is a simple and effective way to learn robotic skills from a dataset collected by policies of different expertise levels.
We show how implicit models can leverage return information and match or outperform explicit algorithms to acquire robotic skills from fixed datasets.
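As a generic illustration of an implicit model (not the paper's exact formulation), the sketch below scores state-action pairs with an energy network and selects the best-scoring candidate action; return information could be appended to the state features.

```python
import torch
import torch.nn as nn


class EnergyPolicy(nn.Module):
    """Implicit policy: pick the action whose (state, action) energy is lowest."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.energy = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def act(self, state: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        """Score each candidate action against the state; return the best."""
        states = state.expand(candidates.shape[0], -1)
        scores = self.energy(torch.cat([states, candidates], dim=-1)).squeeze(-1)
        return candidates[scores.argmin()]
```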
arXiv Detail & Related papers (2022-10-21T21:59:42Z)
- RvS: What is Essential for Offline RL via Supervised Learning? [77.91045677562802]
Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL.
In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward MLP is competitive.
These results also probe the limits of existing RvS methods, which are comparatively weak on random data.
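A minimal sketch of the RvS recipe as the snippet states it: condition a two-layer feedforward network on the state plus an outcome variable (e.g., a target return or goal) and maximize the likelihood of dataset actions. The dimensions and the fixed-variance Gaussian likelihood (which reduces to mean-squared error) are assumptions.

```python
import torch
import torch.nn as nn


class RvSPolicy(nn.Module):
    """Outcome-conditioned behavior cloning with a two-layer MLP (sketch)."""

    def __init__(self, state_dim: int, cond_dim: int, action_dim: int,
                 hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, cond):
        return self.net(torch.cat([state, cond], dim=-1))


def nll_loss(policy, state, cond, action):
    """Fixed-variance Gaussian log-likelihood reduces to MSE on actions."""
    return ((policy(state, cond) - action) ** 2).mean()
```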
arXiv Detail & Related papers (2021-12-20T18:55:16Z)
- MOLAM: A Mobile Multimodal Learning Analytics Conceptual Framework to Support Student Self-Regulated Learning [0.0]
This chapter introduces a Mobile Multimodal Learning Analytics approach (MOLAM).
I argue that the development of student Self-Regulated Learning would benefit from the adoption of this approach.
arXiv Detail & Related papers (2020-12-18T18:55:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.