Understanding Revision Behavior in Adaptive Writing Support Systems for
Education
- URL: http://arxiv.org/abs/2306.10304v1
- Date: Sat, 17 Jun 2023 09:23:27 GMT
- Title: Understanding Revision Behavior in Adaptive Writing Support Systems for
Education
- Authors: Luca Mouchel, Thiemo Wambsganss, Paola Mejia-Domenzain and Tanja Käser
- Abstract summary: We present a novel pipeline with insights into the revision behavior of students at scale.
We show that the tool was effective in promoting revision among the learners.
Our research contributes a pipeline for measuring SRL behaviors at scale in writing tasks.
- Score: 10.080007569933331
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Revision behavior in adaptive writing support systems is an important and
relatively new area of research that can improve the design and effectiveness
of these tools, and promote students' self-regulated learning (SRL).
Understanding how these tools are used is key to improving them to better
support learners in their writing and learning processes. In this paper, we
present a novel pipeline with insights into the revision behavior of students
at scale. We leverage a data set of two groups using an adaptive writing
support tool in an educational setting. With our novel pipeline, we show that
the tool was effective in promoting revision among the learners. Depending on
the writing feedback, we analyzed the different strategies learners used when
revising their texts; we found that users of the exemplary case improved over
time and that female students tended to be more efficient. Our research contributes
a pipeline for measuring SRL behaviors at scale in writing tasks (i.e.,
engagement or revision behavior) and informs the design of future adaptive
writing support systems for education, with the goal of enhancing their
effectiveness in supporting student writing. The source code is available at
https://github.com/lucamouchel/Understanding-Revision-Behavior.
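The repository above contains the authors' implementation. As a rough, illustrative sketch only (not the paper's actual pipeline), the Python snippet below shows one way revision behavior between consecutive drafts could be quantified; the `revision_metrics` function and its similarity-based metric are hypothetical choices introduced here for demonstration, not taken from the paper or its code.

```python
# Illustrative sketch only: quantify how much a student's text changed
# between consecutive drafts. This is NOT the paper's pipeline; the
# function name and metric are assumptions made for demonstration.
from difflib import SequenceMatcher
from typing import Dict, List


def revision_metrics(drafts: List[str]) -> List[Dict[str, float]]:
    """Compare each draft with its predecessor and report how much changed."""
    metrics = []
    for previous, current in zip(drafts, drafts[1:]):
        similarity = SequenceMatcher(None, previous, current).ratio()
        metrics.append({
            "similarity": similarity,            # 1.0 means no change
            "revision_amount": 1.0 - similarity, # share of text that changed
            "length_delta": float(len(current) - len(previous)),
        })
    return metrics


if __name__ == "__main__":
    drafts = [
        "Self-regulated learning helps students plan their writing.",
        "Self-regulated learning (SRL) helps students plan, monitor, "
        "and revise their writing.",
    ]
    print(revision_metrics(drafts))
```

In practice, per-draft metrics like these could be aggregated per learner and per feedback condition to compare revision strategies across groups, which is the kind of analysis the abstract describes.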
Related papers
- Tool Learning with Large Language Models: A Survey [60.733557487886635]
Tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems.
Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization.
arXiv Detail & Related papers (2024-05-28T08:01:26Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- In-Memory Learning: A Declarative Learning Framework for Large Language Models [56.62616975119192]
We propose a novel learning framework that allows agents to align with their environment without relying on human-labeled data.
This entire process transpires within the memory components and is implemented through natural language.
We demonstrate the effectiveness of our framework and provide insights into this problem.
arXiv Detail & Related papers (2024-03-05T08:25:11Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
- Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is organized into three inter-connected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
- Generating Language Corrections for Teaching Physical Control Tasks [21.186109830294072]
CORGI is a model trained to generate language corrections for physical control tasks.
We show that CORGI can (i) generate valid feedback for novel student trajectories, (ii) outperform baselines on domains with novel control dynamics, and (iii) improve student learning in an interactive drawing task.
arXiv Detail & Related papers (2023-06-12T10:31:16Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves a 5.33x average improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- ArgRewrite V.2: an Annotated Argumentative Revisions Corpus [10.65107335326471]
ArgRewrite V.2 is a corpus of annotated argumentative revisions collected from two cycles of revisions to argumentative essays about self-driving cars.
The variety of revision unit scope and purpose granularity levels in ArgRewrite, along with the inclusion of new types of meta-data, can make it a useful resource for research and applications that involve revision analysis.
arXiv Detail & Related papers (2022-06-03T16:40:51Z)
- Analyzing Adaptive Scaffolds that Help Students Develop Self-Regulated Learning Behaviors [6.075903612065429]
This paper presents a systematic framework for adaptive scaffolding in Betty's Brain.
Students construct a causal model to teach a virtual agent, generically named Betty.
We analyze the impact of adaptive scaffolds on students' learning behaviors and performance.
arXiv Detail & Related papers (2022-02-20T00:02:31Z)
- Open Source Software for Efficient and Transparent Reviews [0.11179881480027788]
ASReview is an open-source, machine learning-aided pipeline that applies active learning.
We demonstrate by means of simulation studies that ASReview can yield far more efficient reviewing than manual reviewing.
arXiv Detail & Related papers (2020-06-22T11:57:10Z)