The Power of Nudging: Exploring Three Interventions for Metacognitive
Skills Instruction across Intelligent Tutoring Systems
- URL: http://arxiv.org/abs/2303.11965v1
- Date: Sat, 18 Mar 2023 16:27:51 GMT
- Title: The Power of Nudging: Exploring Three Interventions for Metacognitive
Skills Instruction across Intelligent Tutoring Systems
- Authors: Mark Abdelshiheed, John Wesley Hostetter, Preya Shabrina, Tiffany
Barnes, Min Chi
- Abstract summary: Students were trained on a logic tutor that supports a default forward-chaining and a backward-chaining strategy.
We investigated three types of interventions on teaching the Default students how and when to use which strategy on the logic tutor.
- Score: 6.639504127104268
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deductive domains are typical of many cognitive skills in that no single
problem-solving strategy is always optimal for solving all problems. It was
shown that students who know how and when to use each strategy (StrTime)
outperformed those who know neither and stick to the default strategy
(Default). In this work, students were trained on a logic tutor that supports a
default forward-chaining and a backward-chaining (BC) strategy, then a
probability tutor that only supports BC. We investigated three types of
interventions on teaching the Default students how and when to use which
strategy on the logic tutor: Example, Nudge and Presented. Meanwhile, StrTime
students received no interventions. Overall, our results show that Nudge
students outperformed their Default peers and caught up with their StrTime
counterparts on both tutors.
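To make the two strategies concrete, here is a minimal, hypothetical Python sketch of forward chaining (deriving new facts from known ones until the goal appears) versus backward chaining (reducing the goal to subgoals until known facts are reached). The rule base and function names are illustrative assumptions, not the logic tutor's actual implementation.

```python
# Illustrative Horn-clause rule base: (premises, conclusion).
# Hypothetical example, not the logic tutor's actual rules.
RULES = [
    ({"A", "B"}, "C"),
    ({"C"}, "D"),
    ({"D", "A"}, "E"),
]

def forward_chain(facts, goal):
    """Default (FC) strategy: fire applicable rules until the goal is
    derived or no new fact can be added."""
    derived = set(facts)
    changed = True
    while changed and goal not in derived:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return goal in derived

def backward_chain(facts, goal):
    """BC strategy: reduce the goal to subgoals until known facts are
    reached (assumes an acyclic rule base)."""
    if goal in facts:
        return True
    return any(all(backward_chain(facts, p) for p in premises)
               for premises, conclusion in RULES if conclusion == goal)

print(forward_chain({"A", "B"}, "E"))   # True: A,B => C => D => E
print(backward_chain({"A", "B"}, "E"))  # True: E <= {D,A}, D <= {C}, C <= {A,B}
```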
Related papers
- A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, the transfer performance is significantly lagging behind in all the studied settings.
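For reference, below is a minimal sketch of a single forward-forward layer in the style of Hinton's original formulation, where each layer is trained locally to give high "goodness" (sum of squared activations) to positive data and low goodness to negative data. The dimensions, threshold, and optimizer settings are assumptions for illustration, not this paper's exact setup.

```python
# Minimal forward-forward layer sketch (assumes Hinton's goodness objective).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so a layer cannot reuse the previous layer's goodness.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Push goodness above the threshold for positives, below for negatives.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Pass detached activations to the next layer (no backpropagation).
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```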
arXiv Detail & Related papers (2023-09-21T10:14:53Z)
- Outlier Robust Adversarial Training [57.06824365801612]
We introduce Outlier Robust Adversarial Training (ORAT) in this work.
ORAT is based on a bi-level optimization formulation of adversarial training with a robust rank-based loss function.
We show that the learning objective of ORAT satisfies the $\mathcal{H}$-consistency in binary classification, which establishes it as a proper surrogate to adversarial 0/1 loss.
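The paper's exact loss is not reproduced here, but a rank-based robust loss in this general spirit can be sketched as an "average of a ranked range": drop the m largest per-sample adversarial losses as presumed outliers, then average the next k - m. Treat the following as an assumed illustration, not ORAT's actual formulation.

```python
import torch

def ranked_range_loss(per_sample_losses, k, m):
    """Average the losses ranked in (m, k]; the m largest are treated as
    outliers and excluded (requires k > m). Illustrative sketch only."""
    sorted_losses, _ = torch.sort(per_sample_losses, descending=True)
    return sorted_losses[m:k].mean()

# Usage: compute per-example adversarial losses, then aggregate robustly.
losses = torch.rand(32)
robust = ranked_range_loss(losses, k=16, m=4)
```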
arXiv Detail & Related papers (2023-09-10T21:36:38Z)
- Bridging Declarative, Procedural, and Conditional Metacognitive Knowledge Gap Using Deep Reinforcement Learning [7.253181280137071]
In deductive domains, the three metacognitive knowledge types, in ascending order, are declarative, procedural, and conditional knowledge.
This work leverages Deep Reinforcement Learning (DRL) in providing adaptive metacognitive interventions to bridge the gap between the three knowledge types.
Our results show that on both ITSs, DRL bridged the metacognitive knowledge gap between students and significantly improved their learning performance over their control peers.
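As a toy illustration of adaptive intervention selection (the paper uses deep RL with its own state features; the tabular states, actions, and rewards below are hypothetical):

```python
# Hypothetical tabular Q-learning policy over metacognitive interventions.
import random
from collections import defaultdict

STATES = ["declarative", "procedural", "conditional"]  # assumed states
ACTIONS = ["example", "nudge", "present", "none"]      # assumed actions

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_intervention(state, eps=0.1):
    """Epsilon-greedy choice among intervention types."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning update from an observed learning-gain reward."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```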
arXiv Detail & Related papers (2023-04-23T20:07:07Z)
- Leveraging Deep Reinforcement Learning for Metacognitive Interventions across Intelligent Tutoring Systems [7.253181280137071]
This work compares two approaches to provide metacognitive interventions across Intelligent Tutoring Systems (ITSs)
In two consecutive semesters, we conducted two classroom experiments: Exp. 1 used a classic artificial intelligence approach to classify students into different metacognitive groups and provide static interventions based on their classified groups.
In Exp. 2, we leveraged Deep Reinforcement Learning (DRL) to provide adaptive interventions that consider the dynamic changes in the student's metacognitive levels.
arXiv Detail & Related papers (2023-04-17T12:10:50Z)
- Mixing Backward- with Forward-Chaining for Metacognitive Skill Acquisition and Transfer [9.702049806385011]
Students were trained on a logic tutor that supports a default forward-chaining (FC) and a backward-chaining (BC) strategy.
We investigated the impact of mixing BC with FC on teaching strategy- and time-awareness for nonStrTime students.
arXiv Detail & Related papers (2023-03-18T16:44:10Z)
- Accelerating Self-Supervised Learning via Efficient Training Strategies [98.26556609110992]
The time required to train self-supervised deep networks remains an order of magnitude larger than that of their supervised counterparts.
Motivated by these issues, this paper investigates reducing the training time of recent self-supervised methods.
arXiv Detail & Related papers (2022-12-11T21:49:39Z)
- You Only Live Once: Single-Life Reinforcement Learning [124.1738675154651]
In many real-world situations, the goal might not be to learn a policy that can do the task repeatedly, but simply to perform a new task successfully once in a single trial.
We formalize this problem setting, where an agent must complete a task within a single episode without interventions.
We propose an algorithm, $Q$-weighted adversarial learning (QWALE), which employs a distribution matching strategy.
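The precise QWALE objective is not given here; the sketch below shows one plausible distribution-matching ingredient, a GAIL-style discriminator whose positives (prior-task states) are weighted by prior Q-values so the agent is pulled toward high-value, familiar states. All shapes and the weighting scheme are assumptions for illustration, not the paper's exact method.

```python
import torch
import torch.nn as nn

STATE_DIM = 8  # assumed state dimensionality
disc = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction="none")

def discriminator_step(prior_states, prior_q, online_states):
    """Train the discriminator; prior states count more when their Q is high."""
    logits_prior = disc(prior_states).squeeze(-1)
    logits_online = disc(online_states).squeeze(-1)
    w = torch.softmax(prior_q, dim=0) * len(prior_q)  # Q-derived weights (assumed)
    loss = (w * bce(logits_prior, torch.ones_like(logits_prior))).mean() + \
           bce(logits_online, torch.zeros_like(logits_online)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

def matching_reward(state):
    """Reward is higher when the current state resembles (high-Q) prior data."""
    return torch.log(torch.sigmoid(disc(state)) + 1e-8)
```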
arXiv Detail & Related papers (2022-10-17T09:00:11Z)
- Investigating the Impact of Backward Strategy Learning in a Logic Tutor: Aiding Subgoal Learning towards Improved Problem Solving [6.639504127104268]
The training session involved backward worked examples (BWE) and backward problem solving (BPS) to help students learn the backward strategy.
Our results showed that, when new problems were given to solve without any tutor help, students who were trained with both BWE and BPS outperformed students who received none of the treatment or only BWE during training.
arXiv Detail & Related papers (2022-07-27T00:43:52Z)
- Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z)
- A Survey on Cost Types, Interaction Schemes, and Annotator Performance Models in Selection Algorithms for Active Learning in Classification [1.539335655168078]
Pool-based active learning aims to optimize the annotation process.
An AL strategy queries annotations intelligently from annotators to train a high-performance classification model.
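A canonical example of such a query strategy is uncertainty sampling, sketched below; the survey compares many alternatives (cost-aware, annotator-performance-aware), so this is just one representative instance with assumed data.

```python
# Pool-based active learning with least-confidence uncertainty sampling.
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_most_uncertain(model, X_pool, n_queries=10):
    """Return pool indices where the classifier is least confident."""
    probs = model.predict_proba(X_pool)
    confidence = probs.max(axis=1)             # top-class probability
    return np.argsort(confidence)[:n_queries]  # least confident first

# Usage: fit on a small labeled seed, query annotators for the selected
# indices, then retrain with the newly labeled instances.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)
model = LogisticRegression().fit(X[:20], y[:20])
to_annotate = query_most_uncertain(model, X[20:])
```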
arXiv Detail & Related papers (2021-09-23T11:17:50Z)
- Disturbing Reinforcement Learning Agents with Corrupted Rewards [62.997667081978825]
We analyze the effects of different attack strategies based on reward perturbations on reinforcement learning algorithms.
We show that smoothly crafted adversarial rewards can mislead the learner, and that with low exploration probability values, the learned policy is more robust to corrupted rewards.
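As a minimal illustration of a smoothly crafted perturbation (the functional form here is an assumption for illustration; the paper evaluates its own family of attack strategies):

```python
import numpy as np

def corrupt_reward(reward, step, eps=0.5, period=50):
    """Add a smooth, low-frequency perturbation to the environment reward.
    Unlike i.i.d. noise, a smooth corruption is systematic, matching the
    abstract's observation that such rewards can mislead the learner."""
    return reward + eps * np.sin(2 * np.pi * step / period)
```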
arXiv Detail & Related papers (2021-02-12T15:53:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.