Investigating the Impact of Backward Strategy Learning in a Logic Tutor:
Aiding Subgoal Learning towards Improved Problem Solving
- URL: http://arxiv.org/abs/2208.04696v1
- Date: Wed, 27 Jul 2022 00:43:52 GMT
- Authors: Preya Shabrina, Behrooz Mostafavi, Mark Abdelshiheed, Min Chi, Tiffany
Barnes
- Abstract summary: The training session involved backward worked examples (BWE) and backward problem solving (BPS) to help students learn the backward strategy.
Our results showed that, when new problems were given to solve without any tutor help, students who were trained with both BWE and BPS outperformed students who received no treatment or only BWE during training.
- Score: 6.639504127104268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to derive subgoals reduces the gap between experts and students and
prepares students for future problem solving. Researchers have explored
subgoal-labeled instructional materials with explanations in traditional
problem solving and within tutoring systems to help novices learn subgoaling.
However, little research has examined problem-solving strategies in
relation to subgoal learning. Also, these strategies are under-explored
within computer-based tutors and learning environments. The backward
problem-solving strategy is closely related to subgoaling, in which the
solver iteratively refines the goal into new subgoals to reduce
difficulty. In this paper, we explore a training strategy for backward strategy
learning within an intelligent logic tutor that teaches logic proof
construction. The training session involved backward worked examples (BWE) and
backward problem solving (BPS) to help students learn the backward strategy and
thereby improve their subgoaling and problem-solving skills. To evaluate the training
strategy, we analyzed students' 1) experience with and engagement in learning
backward strategy, 2) performance, and 3) proof construction approaches in new
problems that they solved independently without tutor help after each level of
training and in post-test. Our results showed that, when new problems were
given to solve without any tutor help, students who were trained with both BWE
and BPS outperformed students who received no treatment or only BWE
during training. Additionally, students trained with both BWE and BPS derived
subgoals during proof construction with significantly higher efficiency than
the other two groups.
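The backward strategy the abstract describes, iteratively refining a goal into subgoals until only known facts remain, can be sketched as a small backward-chaining routine. The rule set, fact set, and function names below are illustrative assumptions, not the tutor's actual implementation:

```python
# Minimal backward-chaining sketch: the goal is iteratively refined into
# subgoals until every subgoal is a known fact. The rule format and the
# example facts are hypothetical, chosen only to illustrate subgoaling.

RULES = {
    # conclusion: list of alternative premise sets (subgoals) that establish it
    "Q": [["P", "R"]],
    "R": [["S"]],
}

FACTS = {"P", "S"}

def prove(goal, depth=0):
    """Return True if `goal` follows from FACTS via RULES, printing the
    subgoal derivation as it recurses."""
    indent = "  " * depth
    if goal in FACTS:
        print(f"{indent}{goal}: given")
        return True
    for premises in RULES.get(goal, []):
        print(f"{indent}{goal} <- subgoals {premises}")
        if all(prove(p, depth + 1) for p in premises):
            return True
    return False

print(prove("Q"))  # → True: Q refines to subgoals P and R; R refines to S
```

This mirrors the subgoaling process in the paper only at a schematic level: the tutor teaches students to perform such goal refinement themselves during proof construction, rather than automating it.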
Related papers
- Towards the Pedagogical Steering of Large Language Models for Tutoring: A Case Study with Modeling Productive Failure [36.83786872708736]
One-to-one tutoring is one of the most efficient methods of teaching.
We create a prototype tutor for high school math following Productive Failure (PF), an advanced and effective learning design.
We quantitatively show that StratL succeeds in steering the LLM to follow a Productive Failure tutoring strategy.
arXiv Detail & Related papers (2024-10-03T16:15:41Z)
- Encouraging Responsible Use of Generative AI in Education: A Reward-Based Learning Approach [0.7889270818022226]
This research introduces an innovative mathematical learning approach that integrates generative AI to cultivate structured learning rather than quick solutions.
The goal is to transition students from seeking quick fixes to engaging actively in a comprehensive learning experience.
arXiv Detail & Related papers (2024-06-26T14:27:24Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with significant performance gains.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Causal Reinforcement Learning: A Survey [57.368108154871]
Reinforcement learning is an essential paradigm for solving sequential decision problems under uncertainty.
One of the main obstacles is that reinforcement learning agents lack a fundamental understanding of the world.
Causality offers a notable advantage as it can formalize knowledge in a systematic manner.
arXiv Detail & Related papers (2023-07-04T03:00:43Z)
- Bridging Declarative, Procedural, and Conditional Metacognitive Knowledge Gap Using Deep Reinforcement Learning [7.253181280137071]
In deductive domains, the three metacognitive knowledge types, in ascending order, are declarative, procedural, and conditional knowledge.
This work leverages Deep Reinforcement Learning (DRL) in providing adaptive metacognitive interventions to bridge the gap between the three knowledge types.
Our results show that on both ITSs, DRL bridged the metacognitive knowledge gap between students and significantly improved their learning performance over their control peers.
arXiv Detail & Related papers (2023-04-23T20:07:07Z)
- The Power of Nudging: Exploring Three Interventions for Metacognitive Skills Instruction across Intelligent Tutoring Systems [6.639504127104268]
Students were trained on a logic tutor that supports a default forward-chaining and a backward-chaining strategy.
We investigated three types of interventions on teaching the Default students how and when to use which strategy on the logic tutor.
arXiv Detail & Related papers (2023-03-18T16:27:51Z)
- Teacher-student curriculum learning for reinforcement learning [1.7259824817932292]
Reinforcement learning (RL) is a popular paradigm for sequential decision-making problems.
The sample inefficiency of deep reinforcement learning methods is a significant obstacle when applying RL to real-world problems.
We propose a teacher-student curriculum learning setting where we simultaneously train a teacher that selects tasks for the student while the student learns how to solve the selected task.
arXiv Detail & Related papers (2022-10-31T14:45:39Z)
- Teachable Reinforcement Learning via Advice Distillation [161.43457947665073]
We propose a new supervision paradigm for interactive learning based on "teachable" decision-making systems that learn from structured advice provided by an external teacher.
We show that agents that learn from advice can acquire new skills with significantly less human supervision than standard reinforcement learning algorithms.
arXiv Detail & Related papers (2022-03-19T03:22:57Z)
- Rethinking Learning Dynamics in RL using Adversarial Networks [79.56118674435844]
We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill embedding space.
The main contribution of our work is to formulate an adversarial training regime for reinforcement learning with the help of entropy-regularized policy gradient formulation.
arXiv Detail & Related papers (2022-01-27T19:51:09Z)
- Persistent Reinforcement Learning via Subgoal Curricula [114.83989499740193]
Value-accelerated Persistent Reinforcement Learning (VaPRL) generates a curriculum of initial states.
VaPRL reduces the interventions required by three orders of magnitude compared to episodic reinforcement learning.
arXiv Detail & Related papers (2021-07-27T16:39:45Z)
- Dual Policy Distillation [58.43610940026261]
Policy distillation, which transfers a teacher policy to a student policy, has achieved great success in challenging tasks of deep reinforcement learning.
In this work, we introduce dual policy distillation (DPD), a student-student framework in which two learners operate on the same environment to explore different perspectives of it.
The key challenge in developing this dual learning framework is to identify the beneficial knowledge from the peer learner for contemporary learning-based reinforcement learning algorithms.
arXiv Detail & Related papers (2020-06-07T06:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.