Bridging Declarative, Procedural, and Conditional Metacognitive
Knowledge Gap Using Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2304.11739v1
- Date: Sun, 23 Apr 2023 20:07:07 GMT
- Title: Bridging Declarative, Procedural, and Conditional Metacognitive
Knowledge Gap Using Deep Reinforcement Learning
- Authors: Mark Abdelshiheed, John Wesley Hostetter, Tiffany Barnes, Min Chi
- Abstract summary: In deductive domains, three metacognitive knowledge types in ascending order are declarative, procedural, and conditional learning.
This work leverages Deep Reinforcement Learning (DRL) in providing adaptive metacognitive interventions to bridge the gap between the three knowledge types.
Our results show that on both ITSs, DRL bridged the metacognitive knowledge gap between students and significantly improved their learning performance over their control peers.
- Score: 7.253181280137071
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In deductive domains, three metacognitive knowledge types in ascending order
are declarative, procedural, and conditional learning. This work leverages Deep
Reinforcement Learning (DRL) in providing adaptive metacognitive interventions
to bridge the gap between the three knowledge types and prepare students for
future learning across Intelligent Tutoring Systems (ITSs). These interventions
taught students how and when to use a backward-chaining (BC) strategy on a
logic tutor that supports a default forward-chaining strategy.
Six weeks later, we trained students on a probability tutor that only supports
BC without interventions. Our results show that on both ITSs, DRL bridged the
metacognitive knowledge gap between students and significantly improved their
learning performance over their control peers. Furthermore, the DRL policy
adapted to the metacognitive development on the logic tutor across declarative,
procedural, and conditional students, causing their strategic decisions to be
more autonomous.
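The abstract contrasts forward chaining (reasoning from known facts toward new conclusions) with backward chaining (reasoning from a goal back to known facts). A minimal sketch of the two strategies over propositional Horn rules follows; the rule base, fact names, and goal are illustrative assumptions, not taken from the tutors in the paper.

```python
# Toy Horn-rule base: each rule is (set of premises, conclusion).
# Illustrative only -- not the actual content of the logic tutor.
RULES = [
    ({"A", "B"}, "C"),   # A and B imply C
    ({"C"}, "D"),        # C implies D
    ({"D", "E"}, "F"),   # D and E imply F
]

def forward_chain(facts, rules):
    """Data-driven: repeatedly fire any rule whose premises hold,
    until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven: a goal holds if it is a known fact, or if some rule
    concludes it and all of that rule's premises hold recursively.
    (Assumes an acyclic rule base, as here.)"""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(p, facts, rules) for p in premises
        ):
            return True
    return False

print(sorted(forward_chain({"A", "B", "E"}, RULES)))  # ['A', 'B', 'C', 'D', 'E', 'F']
print(backward_chain("F", {"A", "B", "E"}, RULES))    # True
```

Forward chaining derives everything reachable from the givens, while backward chaining explores only the sub-goals needed for the target, which is why knowing *when* to switch between them is the conditional skill the interventions target.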
Related papers
- Chaos-based reinforcement learning with TD3 [3.04503073434724]
Chaos-based reinforcement learning (CBRL) is a method in which the agent's internal chaotic dynamics drives exploration.
This study introduced Twin Delayed Deep Deterministic Policy Gradients (TD3), which is one of the state-of-the-art deep reinforcement learning algorithms.
arXiv Detail & Related papers (2024-05-15T04:47:31Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over the potential suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z)
- Generative AI in the Classroom: Can Students Remain Active Learners? [23.487653534242092]
Generative Artificial Intelligence (GAI) can be seen as a double-edged sword in education.
This article focuses on the effects on students' active learning strategies and related metacognitive skills.
We present a framework for introducing pedagogical transparency in GAI-based educational applications.
arXiv Detail & Related papers (2023-10-04T22:33:46Z)
- Leveraging Deep Reinforcement Learning for Metacognitive Interventions across Intelligent Tutoring Systems [7.253181280137071]
This work compares two approaches to providing metacognitive interventions across Intelligent Tutoring Systems (ITSs).
In two consecutive semesters, we conducted two classroom experiments: Exp. 1 used a classic artificial intelligence approach to classify students into different metacognitive groups and provide static interventions based on their classified groups.
In Exp. 2, we leveraged Deep Reinforcement Learning (DRL) to provide adaptive interventions that consider the dynamic changes in the student's metacognitive levels.
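The contrast between the two experiments can be sketched as a one-shot static assignment versus a policy that re-decides as the student's metacognitive state changes. Everything below is a hypothetical illustration: the state labels, action names, and the toy Q-table standing in for the DRL policy network are assumptions, not the study's actual models.

```python
# Hypothetical metacognitive states and intervention actions (illustrative).
STATES = ["declarative", "procedural", "conditional"]
ACTIONS = ["teach_how", "teach_when", "no_intervention"]

def static_intervention(initial_state):
    """Exp. 1 style: classify the student once, then apply the same
    fixed intervention for the rest of training."""
    mapping = {
        "declarative": "teach_how",
        "procedural": "teach_when",
        "conditional": "no_intervention",
    }
    return mapping[initial_state]

# A toy learned value table standing in for the DRL policy network.
Q = {
    ("declarative", "teach_how"): 1.0,
    ("declarative", "teach_when"): 0.2,
    ("declarative", "no_intervention"): 0.0,
    ("procedural", "teach_how"): 0.3,
    ("procedural", "teach_when"): 0.9,
    ("procedural", "no_intervention"): 0.1,
    ("conditional", "teach_how"): 0.1,
    ("conditional", "teach_when"): 0.2,
    ("conditional", "no_intervention"): 0.8,
}

def adaptive_intervention(state):
    """Exp. 2 style: re-evaluate the current state at each decision
    point and pick the highest-value action."""
    return max(ACTIONS, key=lambda a: Q[(state, a)])

# As the student develops, the adaptive policy changes its decision,
# while the static assignment would not:
trajectory = ["declarative", "procedural", "conditional"]
decisions = [adaptive_intervention(s) for s in trajectory]
```

The design point is that the static classifier commits to one intervention from the initial classification, whereas the adaptive policy conditions each decision on the student's current metacognitive level.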
arXiv Detail & Related papers (2023-04-17T12:10:50Z)
- Mixing Backward- with Forward-Chaining for Metacognitive Skill Acquisition and Transfer [9.702049806385011]
Students were trained on a logic tutor that supports a default forward-chaining (FC) and a backward-chaining (BC) strategy.
We investigated the impact of mixing BC with FC on teaching strategy- and time-awareness for nonStrTime students.
arXiv Detail & Related papers (2023-03-18T16:44:10Z)
- Implicit Offline Reinforcement Learning via Supervised Learning [83.8241505499762]
Offline Reinforcement Learning (RL) via Supervised Learning is a simple and effective way to learn robotic skills from a dataset collected by policies of different expertise levels.
We show how implicit models can leverage return information and match or outperform explicit algorithms to acquire robotic skills from fixed datasets.
arXiv Detail & Related papers (2022-10-21T21:59:42Z)
- Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning [92.18524491615548]
Contrastive self-supervised learning has been successfully integrated into the practice of (deep) reinforcement learning (RL).
We study how RL can be empowered by contrastive learning in a class of Markov decision processes (MDPs) and Markov games (MGs) with low-rank transitions.
Under the online setting, we propose novel upper confidence bound (UCB)-type algorithms that incorporate such a contrastive loss with online RL algorithms for MDPs or MGs.
arXiv Detail & Related papers (2022-07-29T17:29:08Z)
- Rethinking Learning Dynamics in RL using Adversarial Networks [79.56118674435844]
We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill embedding space.
The main contribution of our work is to formulate an adversarial training regime for reinforcement learning with the help of entropy-regularized policy gradient formulation.
arXiv Detail & Related papers (2022-01-27T19:51:09Z)
- MOLAM: A Mobile Multimodal Learning Analytics Conceptual Framework to Support Student Self-Regulated Learning [0.0]
This chapter introduces a Mobile Multimodal Learning Analytics approach (MOLAM).
I argue that the development of student Self-Regulated Learning would benefit from the adoption of this approach.
arXiv Detail & Related papers (2020-12-18T18:55:33Z)
- Transfer Learning in Deep Reinforcement Learning: A Survey [64.36174156782333]
Reinforcement learning is a learning paradigm for solving sequential decision-making problems.
Recent years have witnessed remarkable progress in reinforcement learning upon the fast development of deep neural networks.
Transfer learning has arisen to tackle various challenges faced by reinforcement learning.
arXiv Detail & Related papers (2020-09-16T18:38:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.