ArgRewrite V.2: an Annotated Argumentative Revisions Corpus
- URL: http://arxiv.org/abs/2206.01677v1
- Date: Fri, 3 Jun 2022 16:40:51 GMT
- Title: ArgRewrite V.2: an Annotated Argumentative Revisions Corpus
- Authors: Omid Kashefi, Tazin Afrin, Meghan Dale, Christopher Olshefski, Amanda
Godley, Diane Litman, Rebecca Hwa
- Abstract summary: ArgRewrite V.2 is a corpus of annotated argumentative revisions collected from two cycles of revisions to argumentative essays about self-driving cars.
The variety of revision unit scope and purpose granularity levels in ArgRewrite, along with the inclusion of new types of meta-data, can make it a useful resource for research and applications that involve revision analysis.
- Score: 10.65107335326471
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Analyzing how humans revise their writing is an interesting research
question, not only from an educational perspective but also in terms of
artificial intelligence. A better understanding of this process could facilitate
many NLP applications, from intelligent tutoring systems to supportive and
collaborative writing environments. Developing these applications, however,
requires revision corpora, which are not widely available. In this work, we
present ArgRewrite V.2, a corpus of annotated argumentative revisions,
collected from two cycles of revisions to argumentative essays about
self-driving cars. Annotations are provided at different levels of purpose
granularity (coarse and fine) and scope (sentential and subsentential). In
addition, the corpus includes, as meta-data, the revision goal given to each
writer, essay scores, annotation verification, and pre- and post-study surveys
collected from participants. The variety of revision unit scope and purpose
granularity levels in ArgRewrite, along with the inclusion of new types of
meta-data, can make it a useful resource for research and applications that
involve revision analysis. We demonstrate some potential applications of
ArgRewrite V.2 in the development of automatic revision purpose predictors, as
a training source and benchmark.
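As a rough illustration of the annotation layers described above, a single revision unit in such a corpus could be represented as the record below. This is a minimal sketch based only on the abstract; the field names and label values are hypothetical, not the corpus's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RevisionAnnotation:
    """One annotated revision unit (hypothetical schema, for illustration only)."""
    essay_id: str
    cycle: int                 # revision cycle: 1 or 2
    scope: str                 # "sentential" or "subsentential"
    coarse_purpose: str        # coarse-grained purpose label
    fine_purpose: str          # fine-grained purpose label
    old_text: Optional[str]    # None if the unit was newly added
    new_text: Optional[str]    # None if the unit was deleted
    revision_goal: str         # goal given to the writer (meta-data)
    essay_score: float         # holistic essay score (meta-data)

record = RevisionAnnotation(
    essay_id="essay_042",
    cycle=1,
    scope="sentential",
    coarse_purpose="content",
    fine_purpose="evidence",
    old_text="Self-driving cars are safe.",
    new_text="Recent studies suggest self-driving cars crash less often than human drivers.",
    revision_goal="strengthen evidence",
    essay_score=3.5,
)
```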
Related papers
- Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation [52.72682366640554]
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else.
It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier by either concealing their writing style, or by imitating the style of another author.
arXiv Detail & Related papers (2024-03-17T16:36:26Z)
- CASIMIR: A Corpus of Scientific Articles enhanced with Multiple Author-Integrated Revisions [7.503795054002406]
We propose an original textual resource on the revision step of the writing process of scientific articles.
This new dataset, called CASIMIR, contains the multiple revised versions of 15,646 scientific articles from OpenReview, along with their peer reviews.
arXiv Detail & Related papers (2024-03-01T03:07:32Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- SCREWS: A Modular Framework for Reasoning with Revisions [58.698199183147935]
We present SCREWS, a modular framework for reasoning with revisions.
We show that SCREWS unifies several previous approaches under a common framework.
We evaluate our framework with state-of-the-art LLMs on a diverse set of reasoning tasks.
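Based on the paper's framing, SCREWS decomposes revision-based reasoning into sampling, conditional resampling, and selection modules. The sketch below shows one plausible way to wire those modules together; the function signatures are assumptions, not the paper's actual interface.

```python
from typing import Callable, List

# Hypothetical module interfaces for a SCREWS-style pipeline.
Sampler = Callable[[str], List[str]]          # propose candidate answers
Resampler = Callable[[str, str], str]         # propose a revision of a candidate
Selector = Callable[[str, List[str]], str]    # pick the final answer

def screws_step(question: str, sample: Sampler,
                resample: Resampler, select: Selector) -> str:
    candidates = sample(question)                             # Sampling
    revisions = [resample(question, c) for c in candidates]   # Conditional Resampling
    # Selection may keep an original candidate if the revision is worse.
    return select(question, candidates + revisions)           # Selection
```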
arXiv Detail & Related papers (2023-09-20T15:59:54Z)
- To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support [20.905660642919052]
We explore the main challenges to identifying argumentative claims in need of specific revisions.
We propose a new sampling strategy based on revision distance.
We provide evidence that using contextual information and domain knowledge can further improve prediction results.
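The summary does not define "revision distance"; one plausible reading is an edit-based distance between a claim and its revised version, used to bias sampling toward heavily revised claims. A minimal sketch under that assumption (all names here are illustrative, not the paper's):

```python
import difflib
import random

def revision_distance(original: str, revised: str) -> float:
    """Token-level dissimilarity in [0, 1]: 0 = unchanged, 1 = fully rewritten."""
    matcher = difflib.SequenceMatcher(a=original.split(), b=revised.split())
    return 1.0 - matcher.ratio()

def sample_by_revision_distance(pairs, k, seed=0):
    # Weight each (original, revised) claim pair by its revision distance so
    # that substantially revised claims are more likely to enter the sample.
    rng = random.Random(seed)
    weights = [revision_distance(o, r) for o, r in pairs]
    return rng.choices(pairs, weights=weights, k=k)
```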
arXiv Detail & Related papers (2023-05-26T10:19:54Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision [11.495407637511878]
We present a human-in-the-loop iterative text revision system, Read, Revise, Repeat (R3).
R3 aims to achieve high-quality text revisions with minimal human effort by reading model-generated revisions and user feedback, revising documents, and repeating human-machine interactions.
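The loop the summary describes can be sketched as follows; the callables and stopping rule are assumptions for illustration, not R3's actual interface.

```python
from typing import Callable

def iterative_revision(
    document: str,
    propose_revision: Callable[[str, str], str],  # model revises, given feedback
    get_user_feedback: Callable[[str], str],      # human reviews the revision
    max_rounds: int = 5,
) -> str:
    feedback = ""
    for _ in range(max_rounds):
        draft = propose_revision(document, feedback)  # model-generated revision
        feedback = get_user_feedback(draft)           # human-in-the-loop step
        if feedback == "accept":                      # human is satisfied: stop
            return draft
        document = draft                              # otherwise, repeat
    return document
```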
arXiv Detail & Related papers (2022-04-07T18:33:10Z)
- Annotation and Classification of Evidence and Reasoning Revisions in Argumentative Writing [0.9449650062296824]
We introduce an annotation scheme to capture the nature of sentence-level revisions of evidence use and reasoning.
We show that reliable manual annotation can be achieved and that revision annotations correlate with a holistic assessment of essay improvement.
arXiv Detail & Related papers (2021-07-14T20:58:26Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: sentence encoder (level one), intra-review encoder (level two), and inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
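A structural sketch of the three-level hierarchy described above is given below; the encoder internals (bi-directional self-attention) are omitted and all signatures are hypothetical.

```python
from typing import Callable, List, Sequence

Vector = List[float]

def habnet_forward(
    reviews: Sequence[Sequence[str]],                    # reviews[i] = sentences of review i
    sentence_enc: Callable[[str], Vector],               # level one: encode each sentence
    intra_review_enc: Callable[[List[Vector]], Vector],  # level two: pool sentences per review
    inter_review_enc: Callable[[List[Vector]], float],   # level three: combine reviews
) -> float:
    review_vecs = [
        intra_review_enc([sentence_enc(s) for s in sentences])
        for sentences in reviews
    ]
    return inter_review_enc(review_vecs)                 # predicted paper rating
```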
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
- Abstractive Summarization of Spoken and Written Instructions with BERT [66.14755043607776]
We present the first application of the BERTSum model to conversational language.
We generate abstractive summaries of narrated instructional videos across a wide variety of topics.
We envision this integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.
arXiv Detail & Related papers (2020-08-21T20:59:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.