Read, Revise, Repeat: A System Demonstration for Human-in-the-loop
Iterative Text Revision
- URL: http://arxiv.org/abs/2204.03685v1
- Date: Thu, 7 Apr 2022 18:33:10 GMT
- Title: Read, Revise, Repeat: A System Demonstration for Human-in-the-loop
Iterative Text Revision
- Authors: Wanyu Du, Zae Myung Kim, Vipul Raheja, Dhruv Kumar, Dongyeop Kang
- Abstract summary: We present Read, Revise, Repeat (R3), a human-in-the-loop iterative text revision system.
R3 aims to achieve high-quality text revisions with minimal human effort by reading model-generated revisions and user feedback, revising documents, and repeating human-machine interactions.
- Score: 11.495407637511878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Revision is an essential part of the human writing process. It tends to be
strategic, adaptive, and, more importantly, iterative in nature. Despite the
success of large language models on text revision tasks, they are limited to
non-iterative, one-shot revisions. Examining and evaluating the capability of
large language models for making continuous revisions and collaborating with
human writers is a critical step towards building effective writing assistants.
In this work, we present a human-in-the-loop iterative text revision system,
Read, Revise, Repeat (R3), which aims to achieve high-quality text revisions
with minimal human effort by reading model-generated revisions and user
feedback, revising documents, and repeating human-machine interactions. In R3,
a text revision model provides text editing suggestions for human writers, who
can accept or reject the suggested edits. The accepted edits are then
incorporated into the model for the next iteration of document revision.
Writers can therefore revise documents iteratively by interacting with the
system and simply accepting/rejecting its suggested edits until the text
revision model stops making further revisions or reaches a predefined maximum
number of revisions. Empirical experiments show that R3 can generate revisions
with an acceptance rate comparable to that of human writers at early revision
depths, and that human-machine interaction yields higher-quality revisions with
fewer iterations and edits. The collected human-model interaction dataset and
system code are available at https://github.com/vipulraheja/IteraTeR. Our
system demonstration is available at https://youtu.be/lK08tIpEoaE.
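The revision loop described in the abstract can be sketched in a few lines of Python. This is a minimal illustration only; the function and callback names (`iterative_revision`, `suggest_edits`, `review`) are hypothetical and do not come from the R3 codebase, which should be consulted at the repository above for the actual implementation.

```python
def iterative_revision(document, suggest_edits, review, max_iterations=3):
    """Revise `document` until the model proposes no further edits or the
    iteration budget is exhausted.

    suggest_edits(document) -> list of (old_span, new_span) proposals
    review(edit) -> True if the human writer accepts the edit
    """
    for _ in range(max_iterations):
        edits = suggest_edits(document)          # model proposes edits
        if not edits:                            # model stops revising
            break
        accepted = [e for e in edits if review(e)]  # human accepts/rejects
        for old_span, new_span in accepted:      # apply accepted edits only
            document = document.replace(old_span, new_span)
    return document
```

In this sketch, rejected edits are simply discarded, and only accepted edits feed into the next iteration, mirroring the interaction pattern the abstract describes.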
Related papers
- Re3: A Holistic Framework and Dataset for Modeling Collaborative Document Revision [62.12545440385489]
We introduce Re3, a framework for joint analysis of collaborative document revision.
We present Re3-Sci, a large corpus of aligned scientific paper revisions manually labeled according to their action and intent.
We use the new data to provide first empirical insights into collaborative document revision in the academic domain.
arXiv Detail & Related papers (2024-05-31T21:19:09Z)
- SCREWS: A Modular Framework for Reasoning with Revisions [58.698199183147935]
We present SCREWS, a modular framework for reasoning with revisions.
We show that SCREWS unifies several previous approaches under a common framework.
We evaluate our framework with state-of-the-art LLMs on a diverse set of reasoning tasks.
arXiv Detail & Related papers (2023-09-20T15:59:54Z)
- To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support [20.905660642919052]
We explore the main challenges to identifying argumentative claims in need of specific revisions.
We propose a new sampling strategy based on revision distance.
We provide evidence that using contextual information and domain knowledge can further improve prediction results.
arXiv Detail & Related papers (2023-05-26T10:19:54Z)
- Pay Attention to Your Tone: Introducing a New Dataset for Polite Language Rewrite [81.83910117028464]
We introduce PoliteRewrite, a dataset for polite language rewriting.
It contains 10K polite sentence rewrites annotated collaboratively by GPT-3.5 and humans.
It also includes 100K high-quality polite sentence rewrites generated by GPT-3.5 without human review.
arXiv Detail & Related papers (2022-12-20T12:02:34Z)
- Improving Iterative Text Revision by Learning Where to Edit from Other Revision Tasks [11.495407637511878]
Iterative text revision improves text quality by fixing grammatical errors, rephrasing for better readability or contextual appropriateness, or reorganizing sentence structures throughout a document.
Most recent research has focused on understanding and classifying different types of edits in the iterative revision process from human-written text.
We aim to build an end-to-end text revision system that can iteratively generate helpful edits by explicitly detecting editable spans with their corresponding edit intents.
arXiv Detail & Related papers (2022-12-02T18:10:43Z)
- EditEval: An Instruction-Based Benchmark for Text Improvements [73.5918084416016]
This work presents EditEval: an instruction-based benchmark and evaluation suite for the automatic evaluation of editing capabilities.
We evaluate several pre-trained models, finding that InstructGPT and PEER perform best, but that most baselines fall below the supervised state of the art.
Our analysis shows that commonly used metrics for editing tasks do not always correlate well, and that optimization for prompts with the highest performance does not necessarily entail the strongest robustness to different models.
arXiv Detail & Related papers (2022-09-27T12:26:05Z)
- PEER: A Collaborative Language Model [70.11876901409906]
We introduce PEER, a collaborative language model that imitates the entire writing process itself.
PEER can write drafts, add suggestions, propose edits and provide explanations for its actions.
We show that PEER achieves strong performance across various domains and editing tasks.
arXiv Detail & Related papers (2022-08-24T16:56:47Z)
- ArgRewrite V.2: an Annotated Argumentative Revisions Corpus [10.65107335326471]
ArgRewrite V.2 is a corpus of annotated argumentative revisions collected from two cycles of revisions to argumentative essays about self-driving cars.
The variety of revision unit scope and purpose granularity levels in ArgRewrite, along with the inclusion of new types of meta-data, can make it a useful resource for research and applications that involve revision analysis.
arXiv Detail & Related papers (2022-06-03T16:40:51Z)
- Towards Automated Document Revision: Grammatical Error Correction, Fluency Edits, and Beyond [46.130399041820716]
We introduce a new document-revision corpus, TETRA, where professional editors revised academic papers sampled from the ACL anthology.
We show the uniqueness of TETRA compared with existing document revision corpora and demonstrate that a fine-tuned pre-trained language model can discriminate the quality of documents after revision even when the difference is subtle.
arXiv Detail & Related papers (2022-05-23T17:37:20Z)
- Understanding Iterative Revision from Human-Written Text [10.714872525208385]
IteraTeR is the first large-scale, multi-domain, edit-intention annotated corpus of iteratively revised text.
Using IteraTeR, we gain a better understanding of the text revision process, making vital connections between edit intentions and writing quality.
arXiv Detail & Related papers (2022-03-08T01:47:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.