MOLAM: A Mobile Multimodal Learning Analytics Conceptual Framework to
Support Student Self-Regulated Learning
- URL: http://arxiv.org/abs/2012.14308v1
- Date: Fri, 18 Dec 2020 18:55:33 GMT
- Authors: Mohammad Khalil
- Abstract summary: This chapter introduces a Mobile Multimodal Learning Analytics approach (MOLAM).
I argue that the development of student Self-Regulated Learning would benefit from the adoption of this approach.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online distance learning is highly learner-centred, requiring different
skills and competences from learners, as well as alternative approaches for
instructional design, student support, and provision of resources. Learner
autonomy and self-regulated learning (SRL) in online learning settings are
considered key success factors that predict student performance. According to
Zimmerman, SRL comprises processes of planning, monitoring, action, and
reflection, and typically focuses on three key features of learners: (1) use of
SRL strategies, (2) responsiveness to self-oriented feedback about learning
effectiveness, and (3) motivational processes. SRL has been identified as
having a direct correlation with student success, including improvements in
grades and the development of relevant skills and strategies. Such skills and
strategies are needed to become a successful lifelong learner. This chapter
introduces a Mobile Multimodal Learning Analytics approach (MOLAM). I argue
that the development of student Self-Regulated Learning would benefit from the
adoption of this approach, and that its use would allow continuous measurement
and provision of in-time support of student SRL in online learning contexts.
Related papers
- Let Students Take the Wheel: Introducing Post-Quantum Cryptography with Active Learning [4.804847392457553]
Post-quantum cryptography (PQC) has been identified as the solution to secure existing software systems.
This research proposes a novel active learning approach and assesses the best practices for teaching PQC to undergraduate and graduate students.
arXiv Detail & Related papers (2024-10-17T01:52:03Z)
- Can We Delegate Learning to Automation?: A Comparative Study of LLM Chatbots, Search Engines, and Books [0.6776894728701932]
The transition from traditional resources like textbooks and web searches raises concerns among educators.
In this paper, we systematically uncover three main concerns from educators' perspectives.
Our results show that LLMs support comprehensive understanding of key concepts without promoting passive learning, though their effectiveness in knowledge retention was limited.
arXiv Detail & Related papers (2024-10-02T10:16:54Z)
- Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is structured into three interconnected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
- Unleash Model Potential: Bootstrapped Meta Self-supervised Learning [12.57396771974944]
A long-term goal of machine learning is to learn general visual representations from a small amount of data without supervision.
Self-supervised learning and meta-learning are two promising techniques for achieving this goal, but each captures only part of the potential advantages.
We propose a novel Bootstrapped Meta Self-Supervised Learning framework that aims to simulate the human learning process.
arXiv Detail & Related papers (2023-08-28T02:49:07Z)
- Visualizing Self-Regulated Learner Profiles in Dashboards: Design Insights from Teachers [9.227158301570787]
We design and implement FlippED, a dashboard for monitoring students' self-regulated learning (SRL) behavior.
We evaluate the usability and actionability of the tool in semi-structured interviews with ten university teachers.
arXiv Detail & Related papers (2023-05-26T12:03:11Z)
- Bridging Declarative, Procedural, and Conditional Metacognitive Knowledge Gap Using Deep Reinforcement Learning [7.253181280137071]
In deductive domains, the three metacognitive knowledge types, in ascending order, are declarative, procedural, and conditional knowledge.
This work leverages Deep Reinforcement Learning (DRL) in providing adaptive metacognitive interventions to bridge the gap between the three knowledge types.
Our results show that on both ITSs, DRL bridged the metacognitive knowledge gap among students and significantly improved their learning performance over their control peers.
arXiv Detail & Related papers (2023-04-23T20:07:07Z)
- Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning [92.18524491615548]
Contrastive self-supervised learning has been successfully integrated into the practice of (deep) reinforcement learning (RL).
We study how RL can be empowered by contrastive learning in a class of Markov decision processes (MDPs) and Markov games (MGs) with low-rank transitions.
Under the online setting, we propose novel upper confidence bound (UCB)-type algorithms that incorporate such a contrastive loss with online RL algorithms for MDPs or MGs.
arXiv Detail & Related papers (2022-07-29T17:29:08Z)
- Rethinking Learning Dynamics in RL using Adversarial Networks [79.56118674435844]
We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill embedding space.
The main contribution of our work is to formulate an adversarial training regime for reinforcement learning with the help of entropy-regularized policy gradient formulation.
arXiv Detail & Related papers (2022-01-27T19:51:09Z)
- Self-directed Machine Learning [86.3709575146414]
In education science, self-directed learning has been shown to be more effective than passive teacher-guided learning.
We introduce the principal concept of Self-directed Machine Learning (SDML) and propose a framework for SDML.
Our proposed SDML process benefits from self task selection, self data selection, self model selection, self optimization strategy selection and self evaluation metric selection.
arXiv Detail & Related papers (2022-01-04T18:32:06Z)
- RvS: What is Essential for Offline RL via Supervised Learning? [77.91045677562802]
Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL.
In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward network is competitive.
The experiments also probe the limits of existing RvS methods, which are comparatively weak on random data.
arXiv Detail & Related papers (2021-12-20T18:55:16Z)
- Knowledge Transfer in Multi-Task Deep Reinforcement Learning for Continuous Control [65.00425082663146]
We present a Knowledge Transfer based Multi-task Deep Reinforcement Learning framework (KTM-DRL) for continuous control.
In KTM-DRL, the multi-task agent first leverages an offline knowledge transfer algorithm to quickly learn a control policy from the experience of task-specific teachers.
The experimental results demonstrate the effectiveness of KTM-DRL and its knowledge transfer and online learning algorithms, as well as its superiority over the state of the art by a large margin.
arXiv Detail & Related papers (2020-10-15T03:26:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.