IGA : An Intent-Guided Authoring Assistant
- URL: http://arxiv.org/abs/2104.07000v1
- Date: Wed, 14 Apr 2021 17:32:21 GMT
- Title: IGA : An Intent-Guided Authoring Assistant
- Authors: Simeng Sun, Wenlong Zhao, Varun Manjunatha, Rajiv Jain, Vlad Morariu,
Franck Dernoncourt, Balaji Vasan Srinivasan, Mohit Iyyer
- Abstract summary: We leverage advances in language modeling to build an interactive writing assistant that generates and rephrases text according to author specifications.
Users provide input to our Intent-Guided Assistant (IGA) in the form of text interspersed with tags that correspond to specific rhetorical directives.
We fine-tune a language model on a dataset heuristically labeled with author intent, which allows IGA to fill in these tags with generated text that users can subsequently edit to their liking.
- Score: 37.98368621931934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While large-scale pretrained language models have significantly improved
writing assistance functionalities such as autocomplete, more complex and
controllable writing assistants have yet to be explored. We leverage advances
in language modeling to build an interactive writing assistant that generates
and rephrases text according to fine-grained author specifications. Users
provide input to our Intent-Guided Assistant (IGA) in the form of text
interspersed with tags that correspond to specific rhetorical directives (e.g.,
adding description or contrast, or rephrasing a particular sentence). We
fine-tune a language model on a dataset heuristically-labeled with author
intent, which allows IGA to fill in these tags with generated text that users
can subsequently edit to their liking. A series of automatic and crowdsourced
evaluations confirm the quality of IGA's generated outputs, while a small-scale
user study demonstrates author preference for IGA over baseline methods in a
creative writing task. We release our dataset, code, and demo to spur further
research into AI-assisted writing.
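The abstract describes IGA's input as ordinary text interspersed with tags that name rhetorical directives (e.g., adding description or contrast). As a minimal sketch of what assembling such a tagged prompt could look like — the tag names and `<...>` marker format here are illustrative assumptions, not the paper's actual specification:

```python
def build_tagged_prompt(segments):
    """Interleave user-written text with rhetorical-directive tags.

    segments: list of ("text", str) or ("tag", str) pairs.
    Returns one prompt string in which each tag is wrapped in angle
    brackets, marking a span for a fine-tuned LM to fill with
    generated text.
    """
    parts = []
    for kind, value in segments:
        if kind == "text":
            parts.append(value)
        elif kind == "tag":
            parts.append(f"<{value}>")  # e.g. <description>, <contrast>
        else:
            raise ValueError(f"unknown segment kind: {kind}")
    return " ".join(parts)

prompt = build_tagged_prompt([
    ("text", "The storm rolled in over the harbor."),
    ("tag", "description"),
    ("text", "By morning the boats were gone."),
])
print(prompt)
# The storm rolled in over the harbor. <description> By morning the boats were gone.
```

In the workflow the paper outlines, a prompt like this would be passed to the fine-tuned model, which replaces each tag with generated text for the author to edit.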
Related papers
- Capturing Style in Author and Document Representation [4.323709559692927]
We propose a new architecture that learns embeddings for both authors and documents with a stylistic constraint.
We evaluate our method on three datasets: a literary corpus extracted from the Gutenberg Project, the Blog Authorship Corpus, and IMDb62.
arXiv Detail & Related papers (2024-07-18T10:01:09Z)
- Towards Full Authorship with AI: Supporting Revision with AI-Generated Views [3.109675063162349]
Large language models (LLMs) are shaping a new user interface (UI) paradigm in writing tools by enabling users to generate text through prompts.
This paradigm shifts some creative control from the user to the system, thereby diminishing the user's authorship and autonomy in the writing process.
We introduce Textfocals, a prototype designed to investigate a human-centered approach that emphasizes the user's role in writing.
arXiv Detail & Related papers (2024-03-02T01:11:35Z)
- Answer is All You Need: Instruction-following Text Embedding via Answering the Question [41.727700155498546]
This paper offers a new viewpoint, which treats the instruction as a question about the input text and encodes the expected answers to obtain the representation accordingly.
Specifically, we propose InBedder that instantiates this embed-via-answering idea by only fine-tuning language models on abstractive question answering tasks.
arXiv Detail & Related papers (2024-02-15T01:02:41Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
- Measuring Annotator Agreement Generally across Complex Structured, Multi-object, and Free-text Annotation Tasks [79.24863171717972]
Inter-annotator agreement (IAA) is a key metric for quality assurance.
Measures exist for simple categorical and ordinal labeling tasks, but little work has considered more complex labeling tasks.
Krippendorff's alpha, best known for use with simpler labeling tasks, does have a distance-based formulation with broader applicability.
arXiv Detail & Related papers (2022-12-15T20:12:48Z)
- Beyond Text Generation: Supporting Writers with Continuous Automatic Text Summaries [27.853155569154705]
We propose a text editor to help users plan, structure and reflect on their writing process.
It provides continuously updated paragraph-wise summaries as margin annotations, using automatic text summarization.
arXiv Detail & Related papers (2022-08-19T13:09:56Z)
- Effidit: Your AI Writing Assistant [60.588370965898534]
Effidit is a digital writing assistant that uses artificial intelligence (AI) technologies to help users write higher-quality text more efficiently.
In Effidit, we significantly expand the capacities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME).
arXiv Detail & Related papers (2022-08-03T02:24:45Z)
- CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities [92.79451009324268]
We present CoAuthor, a dataset designed for revealing GPT-3's capabilities in assisting creative and argumentative writing.
We demonstrate that CoAuthor can address questions about GPT-3's language, ideation, and collaboration capabilities.
We discuss how this work may facilitate a more principled discussion around LMs' promises and pitfalls in relation to interaction design.
arXiv Detail & Related papers (2022-01-18T07:51:57Z)
- DRAG: Director-Generator Language Modelling Framework for Non-Parallel Author Stylized Rewriting [9.275464023441227]
Author stylized rewriting is the task of rewriting an input text in a particular author's style.
We propose a Director-Generator framework to rewrite content in the target author's style.
arXiv Detail & Related papers (2021-01-28T06:52:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.