Predicting the Quality of Revisions in Argumentative Writing
- URL: http://arxiv.org/abs/2306.00667v1
- Date: Thu, 1 Jun 2023 13:39:33 GMT
- Title: Predicting the Quality of Revisions in Argumentative Writing
- Authors: Zhexiong Liu, Diane Litman, Elaine Wang, Lindsay Matsumura, Richard
Correnti
- Abstract summary: Chain-of-Thought prompts guide ChatGPT to generate argument contexts (ACs) for argument revision (AR) quality prediction.
Experiments on two corpora, our annotated elementary essays and an existing college essay benchmark, demonstrate the superiority of the proposed ACs over baselines.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to revise in response to feedback is critical to students'
writing success. In the case of argument writing in particular, identifying
whether an argument revision (AR) is successful or not is a complex problem
because AR quality depends on the overall content of an argument. For example,
adding the same evidence sentence could strengthen or weaken existing claims
in different argument contexts (ACs). To address this issue, we developed
Chain-of-Thought prompts to facilitate ChatGPT-generated ACs for AR quality
prediction. Experiments on two corpora, our annotated elementary school essays
and an existing college essay benchmark, demonstrate the superiority of the
proposed ACs over baselines.
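As a hedged illustration of the approach, the sketch below collapses AC generation and AR quality prediction into a single Chain-of-Thought prompt issued through the OpenAI chat API; the prompt wording, model name, and two-way label set are assumptions for illustration, not the authors' exact setup.

```python
# Illustrative Chain-of-Thought prompt: step 1 elicits the argument context
# (AC), steps 2-3 use it to judge the argument revision (AR). All prompt text
# and the model choice are assumptions, not the paper's exact configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COT_PROMPT = """You are grading an argumentative essay revision.
Original sentence: {original}
Revised sentence: {revised}

Step 1: Summarize the claims surrounding this sentence (the argument context).
Step 2: Explain whether the revision strengthens or weakens those claims.
Step 3: End your answer with exactly one word: successful or unsuccessful."""

def predict_revision_quality(original: str, revised: str) -> str:
    """Ask the model to reason step by step, then read off the final label."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": COT_PROMPT.format(original=original, revised=revised)}],
        temperature=0,
    )
    last_word = response.choices[0].message.content.lower().split()[-1].strip('".')
    return "unsuccessful" if last_word.startswith("unsuccessful") else "successful"
```

The intermediate context-summarization step is what would let the model judge the same revision differently in different ACs.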
Related papers
- A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality
We present a German corpus of 1,320 essays from school students of two age groups.
Each essay has been manually annotated for argumentative structure and quality on multiple levels of granularity.
We propose baseline approaches to argument mining and essay scoring, and we analyze interactions between both tasks.
arXiv Detail & Related papers (2024-04-03T07:31:53Z)
- Argument Quality Assessment in the Age of Instruction-Following Large Language Models
A critical task in computational argumentation applications is the assessment of an argument's quality.
We identify the diversity of quality notions and the subjectivity of their perception as the main hurdles to substantial progress on argument quality assessment.
We argue that the capabilities of instruction-following large language models (LLMs) to leverage knowledge across contexts enable a much more reliable assessment.
arXiv Detail & Related papers (2024-03-24T10:43:21Z)
- CASA: Causality-driven Argument Sufficiency Assessment
We propose CASA, a zero-shot causality-driven argument sufficiency assessment framework.
CASA estimates the probability of sufficiency (PS): how likely introducing the premise event would be to lead to the conclusion when both the premise and conclusion events are absent.
Experiments on two logical fallacy detection datasets demonstrate that CASA accurately identifies insufficient arguments.
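For reference, PS has a standard counterfactual definition in the causality literature; assuming a binary premise event X and conclusion event Y, it can be written as below (CASA's exact estimator may differ).

```latex
% Probability of sufficiency: among cases where neither the premise X nor the
% conclusion Y held, how likely is it that intervening to introduce the
% premise (X = 1) would have produced the conclusion?
\mathrm{PS} = P\!\left(Y_{X=1} = 1 \;\middle|\; X = 0,\; Y = 0\right)
```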
arXiv Detail & Related papers (2024-01-10T16:21:18Z)
- Argue with Me Tersely: Towards Sentence-Level Counter-Argument Generation
We present the ArgTersely benchmark for sentence-level counter-argument generation.
We also propose Arg-LlaMA for generating high-quality counter-arguments.
arXiv Detail & Related papers (2023-12-21T06:51:34Z)
- Exploring Jiu-Jitsu Argumentation for Writing Peer Review Rebuttals
In many domains of argumentation, people's arguments are driven by so-called attitude roots.
Recent work in psychology suggests that instead of directly countering surface-level reasoning, one should follow an argumentation style inspired by the Jiu-Jitsu 'soft' combat system.
We are the first to explore Jiu-Jitsu argumentation for peer review by proposing the novel task of attitude and theme-guided rebuttal generation.
arXiv Detail & Related papers (2023-11-07T13:54:01Z)
- To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support
We explore the main challenges to identifying argumentative claims in need of specific revisions.
We propose a new sampling strategy based on revision distance.
We provide evidence that using contextual information and domain knowledge can further improve prediction results.
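A rough sketch of what revision-distance-based sampling could look like, assuming each claim carries a distance score between its draft and final versions; the Claim fields, labeling direction, and threshold are invented for illustration, not the paper's actual strategy.

```python
# Hypothetical sampling by "revision distance": claims that changed a lot
# across drafts are treated as having needed revision (improvable), stable
# claims as negatives. All fields and the threshold are assumptions.
import random
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    revision_distance: float  # e.g. edit distance to the claim's final version

def sample_training_pairs(claims: list[Claim], threshold: float = 0.3,
                          seed: int = 0) -> list[tuple[str, int]]:
    """Label heavily revised claims as improvable (1) and stable ones as not (0)."""
    rng = random.Random(seed)
    improvable = [(c.text, 1) for c in claims if c.revision_distance > threshold]
    stable = [(c.text, 0) for c in claims if c.revision_distance <= threshold]
    # Downsample the majority class so the two labels stay balanced.
    if len(stable) > len(improvable):
        stable = rng.sample(stable, len(improvable))
    return improvable + stable
```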
arXiv Detail & Related papers (2023-05-26T10:19:54Z)
- Contextualizing Argument Quality Assessment with Relevant Knowledge
SPARK is a novel method for scoring argument quality based on contextualization via relevant knowledge.
We devise four augmentations that leverage large language models to provide feedback, infer hidden assumptions, supply a similar-quality argument, or give a counter-argument.
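A condensed sketch of how the four augmentations could be collected for one argument before scoring; the prompt templates and the ask_llm helper are hypothetical stand-ins, not SPARK's actual prompts.

```python
# Hypothetical prompts for SPARK-style contextualization. Each augmentation is
# generated by an LLM and later paired with the argument for quality scoring.
# `ask_llm` is a stand-in for any chat-completion call.
from typing import Callable

AUGMENTATION_PROMPTS = {
    "feedback": "Give brief feedback on this argument:\n{arg}",
    "assumptions": "List the hidden assumptions behind this argument:\n{arg}",
    "similar": "Write a different argument of similar quality to:\n{arg}",
    "counter": "Write a counter-argument to:\n{arg}",
}

def contextualize(arg: str, ask_llm: Callable[[str], str]) -> dict[str, str]:
    """Return the four knowledge augmentations for one argument."""
    return {name: ask_llm(template.format(arg=arg))
            for name, template in AUGMENTATION_PROMPTS.items()}
```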
arXiv Detail & Related papers (2023-05-20T21:04:58Z)
- Persua: A Visual Interactive System to Enhance the Persuasiveness of Arguments in Online Discussion
Enhancing people's ability to write persuasive arguments could contribute to the effectiveness and civility in online communication.
We derived four design goals for a tool that helps users improve the persuasiveness of arguments in online discussions.
Persua is an interactive visual system that provides example-based guidance on persuasive strategies to enhance the persuasiveness of arguments.
arXiv Detail & Related papers (2022-04-16T08:07:53Z)
- Annotation and Classification of Evidence and Reasoning Revisions in Argumentative Writing
We introduce an annotation scheme to capture the nature of sentence-level revisions of evidence use and reasoning.
We show that reliable manual annotation can be achieved and that revision annotations correlate with a holistic assessment of essay improvement.
arXiv Detail & Related papers (2021-07-14T20:58:26Z)
- Exploring Discourse Structures for Argument Impact Classification
This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.
We propose DisCOC to inject and fuse the sentence-level structural information with contextualized features derived from large-scale language models.
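A minimal sketch of the kind of fusion DisCOC describes, concatenating a learned discourse-relation embedding with pooled language-model features; the relation inventory size, dimensions, and simple concatenation are assumptions, not the paper's exact architecture.

```python
# Illustrative fusion of discourse-relation structure with contextualized
# sentence features, in the spirit of DisCOC. Relation set, dimensions, and
# concatenation-based fusion are assumptions for this sketch.
import torch
import torch.nn as nn

N_RELATIONS = 17   # e.g. a PDTB-style relation inventory (assumed size)
LM_DIM = 768       # hidden size of the contextual encoder (e.g. BERT-base)
REL_DIM = 64       # assumed size of the relation embedding

class DiscourseFusion(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.rel_emb = nn.Embedding(N_RELATIONS, REL_DIM)
        self.classifier = nn.Linear(LM_DIM + REL_DIM, n_classes)

    def forward(self, sent_feats: torch.Tensor, rel_ids: torch.Tensor):
        # sent_feats: (batch, LM_DIM) pooled LM features for the argument
        # rel_ids:    (batch,) discourse relation linking it to its parent
        fused = torch.cat([sent_feats, self.rel_emb(rel_ids)], dim=-1)
        return self.classifier(fused)
```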
arXiv Detail & Related papers (2021-06-02T06:49:19Z)
- An Exploratory Study of Argumentative Writing by Young Students: A Transformer-based Approach
We present a computational exploration of argument critique writing by young students.
Middle school students were asked to critique an argument presented in the prompt, focusing on identifying and explaining the reasoning flaws.
This task resembles an established college-level argument critique task.
arXiv Detail & Related papers (2020-06-17T13:55:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.