ABScribe: Rapid Exploration & Organization of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models
- URL: http://arxiv.org/abs/2310.00117v4
- Date: Wed, 27 Mar 2024 13:38:00 GMT
- Title: ABScribe: Rapid Exploration & Organization of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models
- Authors: Mohi Reza, Nathan Laundry, Ilya Musabirov, Peter Dushniku, Zhi Yuan "Michael" Yu, Kashish Mittal, Tovi Grossman, Michael Liut, Anastasia Kuzminykh, Joseph Jay Williams
- Abstract summary: We present ABScribe, an interface that supports rapid, yet visually structured, exploration and organization of writing variations.
With ABScribe, users can swiftly modify variations using LLM prompts, which are auto-converted into reusable buttons.
Variations are stored adjacently within text fields for rapid in-place comparisons using mouse-over interactions on a popup toolbar.
- Score: 24.825435085579937
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Exploring alternative ideas by rewriting text is integral to the writing process. State-of-the-art Large Language Models (LLMs) can simplify writing variation generation. However, current interfaces pose challenges for simultaneous consideration of multiple variations: creating new variations without overwriting text can be difficult, and pasting them sequentially can clutter documents, increasing workload and disrupting writers' flow. To tackle this, we present ABScribe, an interface that supports rapid, yet visually structured, exploration and organization of writing variations in human-AI co-writing tasks. With ABScribe, users can swiftly modify variations using LLM prompts, which are auto-converted into reusable buttons. Variations are stored adjacently within text fields for rapid in-place comparisons using mouse-over interactions on a popup toolbar. Our user study with 12 writers shows that ABScribe significantly reduces task workload (d = 1.20, p < 0.001), enhances user perceptions of the revision process (d = 2.41, p < 0.001) compared to a popular baseline workflow, and provides insights into how writers explore variations using LLMs.
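As a rough illustration of the interaction model the abstract describes, here is a minimal Python sketch. All names are hypothetical stand-ins, not the authors' implementation: each text field stores its variations adjacently rather than overwriting them, and each LLM prompt becomes a reusable button that appends a new variation.

```python
from dataclasses import dataclass

# Hypothetical sketch of ABScribe's interaction model, not the authors' code:
# a text field keeps every variation side by side instead of overwriting,
# and a saved LLM prompt acts as a reusable button that adds one more.

@dataclass
class VariationField:
    variations: list[str]        # all drafts of this span, stored adjacently
    active: int = 0              # index of the variation currently shown

    def add(self, text: str) -> None:
        self.variations.append(text)
        self.active = len(self.variations) - 1   # show the newest draft

    def select(self, i: int) -> None:            # in-place comparison
        self.active = i

    def shown(self) -> str:
        return self.variations[self.active]

@dataclass
class PromptButton:
    instruction: str             # e.g. "make this more persuasive"

    def apply(self, field: VariationField, llm) -> None:
        # llm is any callable(prompt: str) -> str; the result is stored as
        # a new variation instead of replacing the visible text.
        field.add(llm(f"{self.instruction}\n\n{field.shown()}"))
```

Storing variations inside the field, rather than pasting them sequentially into the document, is what avoids the clutter the abstract mentions.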
Related papers
- A Bounding Box is Worth One Token: Interleaving Layout and Text in a Large Language Model for Document Understanding [30.754200683466788]
We introduce Interleaving Layout and Text in a Large Language Model (LayTextLLM) for document understanding.
LayTextLLM projects each bounding box to a single embedding and interleaves it with text, efficiently avoiding long sequence issues.
It also shows enhanced performance in Key Information Extraction (KIE) and Visual Question Answering (VQA).
arXiv Detail & Related papers (2024-07-02T06:29:05Z)
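The summary's core mechanism, projecting each bounding box to a single embedding and interleaving it with text embeddings, can be sketched roughly as follows. This is a PyTorch illustration under assumed shapes, not the paper's code:

```python
import torch
import torch.nn as nn

# Illustrative sketch of the LayTextLLM idea as summarized above: each
# bounding box (x1, y1, x2, y2) is projected to a single embedding and
# interleaved with the corresponding token embeddings, so layout costs
# one "token" per box instead of a long coordinate sequence.

class BoxProjector(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(4, d_model)   # one box -> one embedding

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        return self.proj(boxes)             # (num_boxes, 4) -> (num_boxes, d_model)

def interleave(box_embs: torch.Tensor, text_embs: list[torch.Tensor]) -> torch.Tensor:
    # Place each box embedding directly before its text span's embeddings.
    parts = []
    for box, span in zip(box_embs, text_embs):
        parts.append(box.unsqueeze(0))      # the single layout token
        parts.append(span)                  # token embeddings of that span
    return torch.cat(parts, dim=0)          # input sequence for the LLM
```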
- Towards Full Authorship with AI: Supporting Revision with AI-Generated Views [3.109675063162349]
Large language models (LLMs) are shaping a new user interface (UI) paradigm in writing tools by enabling users to generate text through prompts.
This paradigm shifts some creative control from the user to the system, thereby diminishing the user's authorship and autonomy in the writing process.
We introduce Textfocals, a prototype designed to investigate a human-centered approach that emphasizes the user's role in writing.
arXiv Detail & Related papers (2024-03-02T01:11:35Z)
- RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting [11.306772273707253]
Large Language Models (LLMs) have demonstrated impressive capabilities in creative tasks such as storytelling and E-mail generation.
We develop new strategies for instruction tuning and reinforcement learning to better align LLMs for cross-sentence rewriting tasks.
OpenRewriteEval, a novel benchmark, covers a wide variety of rewriting types expressed through natural language instructions.
arXiv Detail & Related papers (2023-05-25T03:26:26Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement Learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves an average 5.33x improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
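As a rough sketch of what editing a prompt at test time can look like, here is a greedy hill-climbing stand-in for TEMPERA's learned RL policy; the action names and candidate pools are assumptions for illustration:

```python
import random

# Discrete edit actions over a structured prompt: swap the instruction,
# reorder few-shot exemplars, or swap the verbalizer. A learned policy
# would pick actions per query; here a greedy search stands in for it.

INSTRUCTION_POOL = ["Classify the sentiment.", "Is this review positive or negative?"]
VERBALIZER_POOL = [("positive", "negative"), ("good", "bad")]

def edit_prompt(prompt: dict, action: str) -> dict:
    edited = dict(prompt)
    if action == "swap_instruction":
        edited["instruction"] = random.choice(INSTRUCTION_POOL)
    elif action == "permute_examples":
        edited["examples"] = random.sample(prompt["examples"], len(prompt["examples"]))
    elif action == "swap_verbalizer":
        edited["verbalizer"] = random.choice(VERBALIZER_POOL)
    return edited

def test_time_search(prompt: dict, reward_fn, steps: int = 50) -> dict:
    # Keep an edit whenever it improves the reward (e.g. probe-set accuracy).
    best, best_r = prompt, reward_fn(prompt)
    for _ in range(steps):
        action = random.choice(["swap_instruction", "permute_examples", "swap_verbalizer"])
        cand = edit_prompt(best, action)
        r = reward_fn(cand)
        if r > best_r:
            best, best_r = cand, r
    return best

# prompt = {"instruction": INSTRUCTION_POOL[0],
#           "examples": [("great movie!", "positive"), ("dull plot.", "negative")],
#           "verbalizer": VERBALIZER_POOL[0]}
# best = test_time_search(prompt, reward_fn=my_probe_set_accuracy)
```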
- PEER: A Collaborative Language Model [70.11876901409906]
We introduce PEER, a collaborative language model that imitates the entire writing process itself.
PEER can write drafts, add suggestions, propose edits and provide explanations for its actions.
We show that PEER achieves strong performance across various domains and editing tasks.
arXiv Detail & Related papers (2022-08-24T16:56:47Z)
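The draft-suggest-edit-explain behaviour described above could be driven by a loop along these lines; this is an illustrative sketch with assumed prompt wording, not PEER's actual interface:

```python
# `model` is any callable(prompt: str) -> str.

def peer_step(model, text: str) -> tuple[str, str, str]:
    plan = model(f"Propose the next edit for this text:\n{text}")
    edited = model(f"Apply this plan: {plan}\n\nText:\n{text}")
    explanation = model(f"Explain why the text changed from:\n{text}\nto:\n{edited}")
    return edited, plan, explanation

def peer_write(model, draft: str, rounds: int = 3) -> str:
    # Iterate plan -> edit -> explain until the draft settles.
    for _ in range(rounds):
        draft, _, _ = peer_step(model, draft)
    return draft
```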
- Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models [116.25562358482962]
State-of-the-art neural language models can be used to solve ad-hoc language tasks without the need for supervised training.
We present PromptIDE, a tool that allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts.
arXiv Detail & Related papers (2022-08-16T17:17:53Z)
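The workflow this supports, comparing prompt variants on a small labelled set before committing to one, reduces to something like the following sketch (illustrative only, not PromptIDE's API):

```python
# Score each candidate prompt template by accuracy on a small probe set.
# `llm` is any callable(prompt: str) -> str; templates use "{text}".

def score_prompts(llm, templates: list[str], dataset: list[tuple[str, str]]) -> dict[str, float]:
    results = {}
    for template in templates:
        correct = sum(
            llm(template.format(text=x)).strip() == y
            for x, y in dataset
        )
        results[template] = correct / len(dataset)
    return results

# best = max(score_prompts(llm, templates, probe_set).items(), key=lambda kv: kv[1])
```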
- Composable Text Controls in Latent Space with ODEs [97.12426987887021]
This paper proposes a new, efficient approach for composable text operations in the compact latent space of text.
By connecting pretrained LMs to the latent space through efficient adaptation, we decode the sampled vectors into the desired text sequences.
Experiments show that composing these operations within our approach generates and edits high-quality text.
arXiv Detail & Related papers (2022-08-01T06:51:45Z)
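Abstractly, the composability amounts to function composition in latent space followed by a single decode. A conceptual numpy sketch; every name below is a stand-in, and in the paper the operators are learned, ODE-based samplers with encode/decode coming from a pretrained LM adapted to the latent space:

```python
import numpy as np

def compose(*operators):
    def composed(z: np.ndarray) -> np.ndarray:
        for op in operators:
            z = op(z)              # each operator edits one attribute in latent space
        return z
    return composed

d = 64
more_formal = lambda z: z + 0.5 * np.eye(d)[0]   # placeholder attribute shift
past_tense  = lambda z: z + 0.5 * np.eye(d)[1]   # placeholder attribute shift

z = np.zeros(d)                                  # latent for some encoded sentence
z_edited = compose(more_formal, past_tense)(z)   # all edits happen before decoding
# a single decode(z_edited) would then produce the rewritten sentence
```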
- Letter-level Online Writer Identification [86.13203975836556]
We focus on a novel problem, letter-level online writer identification, which requires only a few trajectories of written letters as identification cues.
A main challenge is that a person often writes a letter in different styles from time to time.
We refer to this problem as the variance of online writing styles (Var-O-Styles).
arXiv Detail & Related papers (2021-12-06T07:21:53Z)
- Enabling Language Models to Fill in the Blanks [81.59381915581892]
We present a simple approach for text infilling, the task of predicting missing spans of text at any position in a document.
We train (or fine-tune) off-the-shelf language models on sequences containing the concatenation of artificially-masked text and the text which was masked.
We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics.
arXiv Detail & Related papers (2020-05-11T18:00:03Z)
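The training-data construction the summary describes is concrete enough to sketch directly. Special-token names follow the paper's infilling-by-language-modeling scheme; the helper itself is an illustrative word-level simplification (the paper also masks longer spans):

```python
import random

# Build one infilling training sequence: the masked text, a separator,
# then the masked-out answers in order, so an off-the-shelf LM can be
# fine-tuned on plain left-to-right sequences.

def make_infilling_example(words: list[str], n_blanks: int = 2) -> str:
    words = list(words)
    idxs = sorted(random.sample(range(len(words)), k=n_blanks))
    answers = [words[i] for i in idxs]        # left-to-right answer order
    for i in idxs:
        words[i] = "[blank]"
    masked = " ".join(words)
    filled = " [answer] ".join(answers) + " [answer]"
    return f"{masked} [sep] {filled}"

# make_infilling_example("She ate leftover pasta for lunch".split())
# -> e.g. "She ate [blank] pasta for [blank] [sep] leftover [answer] lunch [answer]"
```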
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.