Analyzing Adaptive Scaffolds that Help Students Develop Self-Regulated
Learning Behaviors
- URL: http://arxiv.org/abs/2202.09698v2
- Date: Wed, 1 Jun 2022 05:17:09 GMT
- Title: Analyzing Adaptive Scaffolds that Help Students Develop Self-Regulated
Learning Behaviors
- Authors: Anabil Munshi, Gautam Biswas, Ryan Baker, Jaclyn Ocumpaugh, Stephen
Hutt, Luc Paquette
- Abstract summary: This paper presents a systematic framework for adaptive scaffolding in Betty's Brain.
Students construct a causal model to teach a virtual agent, generically named Betty.
We analyze the impact of adaptive scaffolds on students' learning behaviors and performance.
- Score: 6.075903612065429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Providing adaptive scaffolds to help learners develop self-regulated learning
(SRL) processes has been an important goal for intelligent learning
environments. Adaptive scaffolding is especially important in open-ended
learning environments (OELE), where novice learners often face difficulties in
completing their learning tasks. This paper presents a systematic framework for
adaptive scaffolding in Betty's Brain, a learning-by-teaching OELE for middle
school science, where students construct a causal model to teach a virtual
agent, generically named Betty. We evaluate the adaptive scaffolding framework
and discuss its implications on the development of more effective scaffolds for
SRL in OELEs. We detect key cognitive/metacognitive inflection points, i.e.,
instances where students' behaviors and performance change as they work on
their learning tasks. At such inflection points, Mr. Davis (a mentor agent) or
Betty (the teachable agent) provide conversational feedback, focused on
strategies to help students become productive learners. We conduct a classroom
study with 98 middle schoolers to analyze the impact of adaptive scaffolds on
students' learning behaviors and performance. Adaptive scaffolding produced
mixed results, with some scaffolds (viz., strategic hints that supported
debugging and assessment of causal models) being generally more useful to
students than others (viz., encouragement prompts). We also note differences in
learning behaviors of High and Low performers after receiving scaffolds.
Overall, our findings suggest how adaptive scaffolding in OELEs like Betty's
Brain can be further improved to narrow the gap between High and Low
performers.
Related papers
- Guiding Empowerment Model: Liberating Neurodiversity in Online Higher Education [2.703906279696349]
We address the equity gap for neurodivergent and situationally limited learners by identifying the spectrum of dynamic factors that impact learning and function.
We suggest that by applying the model through technology-enabled features such as customizable task management, guided varied content access, and guided multi-modal collaboration, major learning barriers will be removed.
arXiv Detail & Related papers (2024-10-24T16:05:38Z)
- Scaffolding Language Learning via Multi-modal Tutoring Systems with Pedagogical Instructions [34.760230622675365]
Intelligent tutoring systems (ITSs) imitate human tutors and aim to provide customized instructions or feedback to learners.
With the emergence of generative artificial intelligence, large language models (LLMs) enable these systems to engage in complex and coherent conversational interactions.
We investigate how pedagogical instructions facilitate the scaffolding in ITSs, by conducting a case study on guiding children to describe images for language learning.
arXiv Detail & Related papers (2024-04-04T13:22:28Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
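The instruction-optimization idea above can be illustrated with a minimal best-of-n loop: one model proposes instructional materials, a judge model scores them, and the highest-scoring candidate is kept. This is a hypothetical sketch; the `generate` and `judge` callables are stand-ins, not the paper's actual models or reward formulation.

```python
from typing import Callable, List


def optimize_instructions(generate: Callable[[], str],
                          judge: Callable[[str], float],
                          n_candidates: int = 8) -> str:
    """Best-of-n search: sample candidate materials from the generator
    and return the one the judge scores highest (judge score = reward)."""
    candidates: List[str] = [generate() for _ in range(n_candidates)]
    return max(candidates, key=judge)
```

With toy stand-ins (e.g., a judge that penalizes length), the loop simply returns the candidate the judge prefers; in the paper's setting, both roles are played by language models.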
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that fine-tuning LLaMA2 on data generated by YODA yields significant performance gains over standard SFT.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Understanding Revision Behavior in Adaptive Writing Support Systems for Education [10.080007569933331]
We present a novel pipeline with insights into the revision behavior of students at scale.
We show that the tool was effective in promoting revision among the learners.
Our research contributes a pipeline for measuring SRL behaviors at scale in writing tasks.
arXiv Detail & Related papers (2023-06-17T09:23:27Z)
- Leveraging Deep Reinforcement Learning for Metacognitive Interventions across Intelligent Tutoring Systems [7.253181280137071]
This work compares two approaches to providing metacognitive interventions across Intelligent Tutoring Systems (ITSs).
In two consecutive semesters, we conducted two classroom experiments: Exp. 1 used a classic artificial intelligence approach to classify students into different metacognitive groups and provide static interventions based on their classified groups.
In Exp. 2, we leveraged Deep Reinforcement Learning (DRL) to provide adaptive interventions that consider the dynamic changes in the student's metacognitive levels.
arXiv Detail & Related papers (2023-04-17T12:10:50Z)
- MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning [62.065503126104126]
We study how a principal can efficiently and effectively intervene on the rewards of a previously unseen learning agent in order to induce desirable outcomes.
This is relevant to many real-world settings like auctions or taxation, where the principal may not know the learning behavior nor the rewards of real people.
We introduce MERMAIDE, a model-based meta-learning framework to train a principal that can quickly adapt to out-of-distribution agents.
arXiv Detail & Related papers (2023-04-10T15:44:50Z)
- Rethinking Learning Dynamics in RL using Adversarial Networks [79.56118674435844]
We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill embedding space.
The main contribution of our work is to formulate an adversarial training regime for reinforcement learning with the help of entropy-regularized policy gradient formulation.
arXiv Detail & Related papers (2022-01-27T19:51:09Z)
- Searching to Learn with Instructional Scaffolding [7.159235937301605]
This paper investigates incorporating scaffolding into a search system through three strategies: AQE_SC, automatic expansion of user queries with relevant subtopics; CURATED_SC, presentation of a manually curated static list of relevant subtopics on the search engine result page; and FEEDBACK_SC, which projects real-time feedback about a user's exploration of the topic space on top of the CURATED_SC visualization.
arXiv Detail & Related papers (2021-11-29T15:15:02Z)
- Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering [71.15403434929915]
We show that across 5 models and 4 datasets on the task of visual question answering, a wide variety of active learning approaches fail to outperform random selection.
We identify the problem as collective outliers -- groups of examples that active learning methods prefer to acquire but models fail to learn.
We show that active learning sample efficiency increases significantly as the number of collective outliers in the active learning pool decreases.
arXiv Detail & Related papers (2021-07-06T00:52:11Z)
- Bridging the Imitation Gap by Adaptive Insubordination [88.35564081175642]
We show that when the teaching agent makes decisions with access to privileged information, this information is marginalized during imitation learning.
We propose 'Adaptive Insubordination' (ADVISOR) to address this gap.
ADVISOR dynamically weights imitation and reward-based reinforcement learning losses during training, enabling on-the-fly switching between imitation and exploration.
arXiv Detail & Related papers (2020-07-23T17:59:57Z)
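The dynamic loss weighting described for ADVISOR can be sketched in a few lines: blend an imitation loss and an RL loss with a per-state weight that shrinks when the teacher's privileged advice looks unreliable from the student's perspective. The names and the specific weighting rule here are illustrative assumptions, not ADVISOR's exact formulation.

```python
def advisor_style_loss(imitation_loss: float,
                       rl_loss: float,
                       teacher_student_disagreement: float) -> float:
    """Blend imitation and RL losses (hypothetical simplification of ADVISOR).

    High disagreement between the privileged teacher and the student's view
    shifts weight toward the RL loss (exploration); low disagreement shifts
    weight toward imitation.
    """
    if not 0.0 <= teacher_student_disagreement <= 1.0:
        raise ValueError("disagreement must be in [0, 1]")
    w = 1.0 - teacher_student_disagreement  # imitation weight
    return w * imitation_loss + (1.0 - w) * rl_loss
```

At full agreement the loss reduces to pure imitation; at full disagreement it reduces to pure RL, which is the on-the-fly switching behavior the abstract describes.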
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.