Mixing Backward- with Forward-Chaining for Metacognitive Skill Acquisition and Transfer
- URL: http://arxiv.org/abs/2303.12223v1
- Date: Sat, 18 Mar 2023 16:44:10 GMT
- Title: Mixing Backward- with Forward-Chaining for Metacognitive Skill Acquisition and Transfer
- Authors: Mark Abdelshiheed, John Wesley Hostetter, Xi Yang, Tiffany Barnes, Min Chi
- Abstract summary: Students were trained on a logic tutor that supports a default forward-chaining (FC) and a backward-chaining (BC) strategy.
We investigated the impact of mixing BC with FC on teaching strategy- and time-awareness for nonStrTime students.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Metacognitive skills have been commonly associated with preparation for
future learning in deductive domains. Many researchers have regarded strategy-
and time-awareness as two metacognitive skills that address how and when to use
a problem-solving strategy, respectively. It was shown that students who are
both strategy- and time-aware (StrTime) outperformed their nonStrTime peers
across deductive domains. In this work, students were trained on a logic tutor
that supports a default forward-chaining (FC) and a backward-chaining (BC)
strategy. We investigated the impact of mixing BC with FC on teaching strategy-
and time-awareness for nonStrTime students. During the logic instruction, the
experimental students (Exp) were provided with two BC worked examples and some
problems in BC to practice how and when to use BC. Meanwhile, their control
(Ctrl) and StrTime peers received no such intervention. Six weeks later, all
students went through a probability tutor that only supports BC to evaluate
whether the acquired metacognitive skills are transferred from logic. Our
results show that on both tutors, Exp outperformed Ctrl and caught up with
StrTime.
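For readers unfamiliar with the two strategies, the sketch below contrasts them on a toy propositional rule base. The rules, atom names, and functions are illustrative assumptions, not taken from the tutor: forward chaining fires rules from the given facts until the goal is derived, while backward chaining decomposes the goal into subgoals until every subgoal is a given fact.

    # A minimal sketch of the two inference strategies, assuming a toy
    # propositional Horn-clause rule base (illustrative, not the tutor's).
    RULES = [
        # (premises, conclusion): the premises jointly entail the conclusion
        ({"A", "B"}, "C"),
        ({"C"}, "D"),
        ({"A", "D"}, "E"),
    ]

    def forward_chain(givens, goal):
        """Forward chaining: fire rules from the given facts until the
        goal is derived or no rule adds a new fact."""
        facts, changed = set(givens), True
        while changed and goal not in facts:
            changed = False
            for premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return goal in facts

    def backward_chain(givens, goal):
        """Backward chaining: recursively reduce the goal to subgoals
        until every subgoal is a given fact (no cycle check, for brevity)."""
        if goal in givens:
            return True
        return any(all(backward_chain(givens, p) for p in premises)
                   for premises, conclusion in RULES if conclusion == goal)

    # Both strategies prove E from {A, B}: A,B => C => D, then A,D => E.
    print(forward_chain({"A", "B"}, "E"))   # True
    print(backward_chain({"A", "B"}, "E"))  # True

In the tutor's terms, FC works from the givens toward the conclusion, while BC starts from the conclusion and works back to the givens; the probability tutor mentioned above supports only the latter.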
Related papers
- Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning
In-context learning (ICL) is potentially attributed to two major abilities: task recognition (TR) and task learning (TL).
We take the first step by examining the pre-training dynamics of the emergence of ICL.
We propose a simple yet effective method to better integrate these two abilities for ICL at inference time.
arXiv Detail & Related papers (2024-06-20T06:37:47Z)
- Chaos-based reinforcement learning with TD3
Chaos-based reinforcement learning (CBRL) is a method in which the agent's internal chaotic dynamics drives exploration.
This study introduced Twin Delayed Deep Deterministic Policy Gradients (TD3), one of the state-of-the-art deep reinforcement learning algorithms.
arXiv Detail & Related papers (2024-05-15T04:47:31Z)
- A Study of Forward-Forward Algorithm for Self-Supervised Learning
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, its transfer performance lags significantly behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z)
- Bridging Declarative, Procedural, and Conditional Metacognitive Knowledge Gap Using Deep Reinforcement Learning
In deductive domains, the three metacognitive knowledge types, in ascending order, are declarative, procedural, and conditional knowledge.
This work leverages Deep Reinforcement Learning (DRL) to provide adaptive metacognitive interventions that bridge the gap between the three knowledge types.
Our results show that on both ITSs, DRL bridged the metacognitive knowledge gap and significantly improved students' learning performance over their control peers.
arXiv Detail & Related papers (2023-04-23T20:07:07Z)
- Leveraging Deep Reinforcement Learning for Metacognitive Interventions across Intelligent Tutoring Systems
This work compares two approaches to providing metacognitive interventions across Intelligent Tutoring Systems (ITSs).
In two consecutive semesters, we conducted two classroom experiments: Exp. 1 used a classic artificial intelligence approach to classify students into different metacognitive groups and provide static interventions based on their classified groups.
In Exp. 2, we leveraged Deep Reinforcement Learning (DRL) to provide adaptive interventions that consider the dynamic changes in students' metacognitive levels.
arXiv Detail & Related papers (2023-04-17T12:10:50Z)
- The Power of Nudging: Exploring Three Interventions for Metacognitive Skills Instruction across Intelligent Tutoring Systems
Students were trained on a logic tutor that supports a default forward-chaining and a backward-chaining strategy.
We investigated three types of interventions for teaching the Default students how and when to use which strategy on the logic tutor.
arXiv Detail & Related papers (2023-03-18T16:27:51Z)
- Accelerating Self-Supervised Learning via Efficient Training Strategies
The time required to train self-supervised deep networks remains an order of magnitude larger than that of their supervised counterparts.
Motivated by these issues, this paper investigates reducing the training time of recent self-supervised methods.
arXiv Detail & Related papers (2022-12-11T21:49:39Z)
- Investigating the Impact of Backward Strategy Learning in a Logic Tutor: Aiding Subgoal Learning towards Improved Problem Solving
The training session involved backward worked examples (BWE) and backward problem solving (BPS) to help students learn the backward strategy.
Our results showed that, when new problems were given to solve without any tutor help, students trained with both BWE and BPS outperformed those who received only BWE or no treatment during training.
arXiv Detail & Related papers (2022-07-27T00:43:52Z)
- Coach-assisted Multi-Agent Reinforcement Learning Framework for Unexpected Crashed Agents
We present a formal formulation of a cooperative multi-agent reinforcement learning system with unexpected crashes.
We propose a coach-assisted multi-agent reinforcement learning framework that introduces a virtual coach agent to adjust the crash rate during training.
To the best of our knowledge, this work is the first to study unexpected crashes in multi-agent systems.
arXiv Detail & Related papers (2022-03-16T08:22:45Z)
- Online Meta-Critic Learning for Off-Policy Actor-Critic Methods
Off-Policy Actor-Critic (Off-PAC) methods have proven successful in a variety of continuous control tasks.
We introduce a novel and flexible meta-critic that observes the learning process and meta-learns an additional loss for the actor.
arXiv Detail & Related papers (2020-03-11T14:39:49Z)
- Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach
We propose a brand-new solution for reusing experiences and transferring value functions among multiple students via model distillation.
We also describe how to design an efficient communication protocol to exploit heterogeneous knowledge.
Our proposed framework, namely Learning and Teaching Categorical Reinforcement, shows promising performance in stabilizing and accelerating learning progress.
arXiv Detail & Related papers (2020-02-06T11:31:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.