To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support
- URL: http://arxiv.org/abs/2305.16799v1
- Date: Fri, 26 May 2023 10:19:54 GMT
- Title: To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support
- Authors: Gabriella Skitalinskaya and Henning Wachsmuth
- Abstract summary: We explore the main challenges to identifying argumentative claims in need of specific revisions.
We propose a new sampling strategy based on revision distance.
We provide evidence that using contextual information and domain knowledge can further improve prediction results.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimizing the phrasing of argumentative text is crucial in higher education
and professional development. However, assessing whether and how the different
claims in a text should be revised is a hard task, especially for novice
writers. In this work, we explore the main challenges to identifying
argumentative claims in need of specific revisions. By learning from
collaborative editing behaviors in online debates, we seek to capture implicit
revision patterns in order to develop approaches aimed at guiding writers in
how to further improve their arguments. We systematically compare the ability
of common word embedding models to capture differences between versions of the
same text, and we analyze their impact on various types of
writing issues. To deal with the noisy nature of revision-based corpora, we
propose a new sampling strategy based on revision distance. As opposed to
approaches from prior work, such sampling can be done without employing
additional annotations and judgments. Moreover, we provide evidence that using
contextual information and domain knowledge can further improve prediction
results. How useful a certain type of context is, however, depends on the issue
the claim suffers from.
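To make the sampling idea concrete, here is a minimal sketch of what revision-distance-based pair sampling could look like. The data layout (one ordered revision history per claim), the `min_distance` threshold, and the function name are illustrative assumptions for this sketch, not the paper's exact procedure.

```python
import random
from itertools import combinations

def sample_pairs(revision_histories, min_distance=2, seed=42):
    """Sample (earlier, later) claim-version pairs whose revision distance
    (number of edits separating them in the history) is at least
    `min_distance`. The assumption: versions far apart in a history differ
    more reliably in quality than adjacent ones, so distant pairs are less
    noisy training signal.

    revision_histories: list of lists, each an ordered sequence of versions
    of the same claim (index 0 = original, index -1 = latest).
    """
    rng = random.Random(seed)
    pairs = []
    for history in revision_histories:
        # All ordered index pairs that satisfy the revision-distance filter.
        candidates = [
            (history[i], history[j])
            for i, j in combinations(range(len(history)), 2)
            if j - i >= min_distance
        ]
        if candidates:
            pairs.append(rng.choice(candidates))  # one pair per claim
    return pairs

# Toy usage: the earlier version serves as an "improvable" example,
# the later one as its improved counterpart.
histories = [
    ["Guns bad.", "Guns are bad.", "Gun ownership increases homicide rates."],
]
for worse, better in sample_pairs(histories, min_distance=2):
    print("improvable:", worse)
    print("improved:  ", better)
```

The design choice mirrors the abstract's motivation: adjacent versions in a noisy revision-based corpus may differ only trivially, whereas versions separated by several edits are more likely to reflect a genuine quality difference, so filtering by revision distance yields cleaner pairs without any additional annotations or judgments.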
Related papers
- CASIMIR: A Corpus of Scientific Articles enhanced with Multiple Author-Integrated Revisions [7.503795054002406]
We propose an original textual resource on the revision step of the writing process of scientific articles.
This new dataset, called CASIMIR, contains multiple revised versions of 15,646 scientific articles from OpenReview, along with their peer reviews.
arXiv Detail & Related papers (2024-03-01T03:07:32Z)
- How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z)
- SCREWS: A Modular Framework for Reasoning with Revisions [58.698199183147935]
We present SCREWS, a modular framework for reasoning with revisions.
We show that SCREWS unifies several previous approaches under a common framework.
We evaluate our framework with state-of-the-art LLMs on a diverse set of reasoning tasks.
arXiv Detail & Related papers (2023-09-20T15:59:54Z)
- Verifying the Robustness of Automatic Credibility Assessment [79.08422736721764]
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases, insignificant changes in the input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Improving Iterative Text Revision by Learning Where to Edit from Other Revision Tasks [11.495407637511878]
Iterative text revision improves text quality by fixing grammatical errors, rephrasing for better readability or contextual appropriateness, or reorganizing sentence structures throughout a document.
Most recent research has focused on understanding and classifying different types of edits in the iterative revision process from human-written text.
We aim to build an end-to-end text revision system that can iteratively generate helpful edits by explicitly detecting editable spans with their corresponding edit intents.
arXiv Detail & Related papers (2022-12-02T18:10:43Z)
- EditEval: An Instruction-Based Benchmark for Text Improvements [73.5918084416016]
This work presents EditEval: an instruction-based benchmark and evaluation suite for the automatic evaluation of editing capabilities.
We evaluate several pre-trained models, finding that InstructGPT and PEER perform best, though most baselines fall below the supervised SOTA.
Our analysis shows that commonly used metrics for editing tasks do not always correlate well, and that optimization for prompts with the highest performance does not necessarily entail the strongest robustness to different models.
arXiv Detail & Related papers (2022-09-27T12:26:05Z)
- Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review [52.359007622096684]
Peer review is a key component of the publishing process in most fields of science.
Existing NLP studies focus on the analysis of individual texts.
Editorial assistance, however, often requires modeling interactions between pairs of texts.
arXiv Detail & Related papers (2022-04-22T16:39:38Z)
- Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision [11.495407637511878]
We present a human-in-the-loop iterative text revision system, Read, Revise, Repeat (R3).
R3 aims to achieve high-quality text revisions with minimal human effort by reading model-generated revisions and user feedback, revising documents, and repeating human-machine interactions.
arXiv Detail & Related papers (2022-04-07T18:33:10Z)
- Understanding Iterative Revision from Human-Written Text [10.714872525208385]
IteraTeR is the first large-scale, multi-domain, edit-intention annotated corpus of iteratively revised text.
Using IteraTeR, we better understand the text revision process, making vital connections between edit intentions and writing quality.
arXiv Detail & Related papers (2022-03-08T01:47:42Z)
- Comprehensive Studies for Arbitrary-shape Scene Text Detection [78.50639779134944]
We propose a unified framework for bottom-up scene text detection methods.
Under the unified framework, we ensure consistent settings for non-core modules.
Through comprehensive investigations and detailed analyses, the framework reveals the advantages and disadvantages of previous models.
arXiv Detail & Related papers (2021-07-25T13:18:55Z)
- Annotation and Classification of Evidence and Reasoning Revisions in Argumentative Writing [0.9449650062296824]
We introduce an annotation scheme to capture the nature of sentence-level revisions of evidence use and reasoning.
We show that reliable manual annotation can be achieved and that revision annotations correlate with a holistic assessment of essay improvement.
arXiv Detail & Related papers (2021-07-14T20:58:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.