Beyond Text Generation: Supporting Writers with Continuous Automatic Text Summaries
- URL: http://arxiv.org/abs/2208.09323v1
- Date: Fri, 19 Aug 2022 13:09:56 GMT
- Title: Beyond Text Generation: Supporting Writers with Continuous Automatic Text Summaries
- Authors: Hai Dang, Karim Benharrak, Florian Lehmann, Daniel Buschek
- Abstract summary: We propose a text editor to help users plan, structure and reflect on their writing process.
It provides continuously updated paragraph-wise summaries as margin annotations, using automatic text summarization.
- Score: 27.853155569154705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a text editor to help users plan, structure and reflect on their
writing process. It provides continuously updated paragraph-wise summaries as
margin annotations, using automatic text summarization. Summary levels range
from full text, to selected (central) sentences, down to a collection of
keywords. To understand how users interact with this system during writing, we
conducted two user studies (N=4 and N=8) in which people wrote analytic essays
about a given topic and article. As a key finding, the summaries gave users an
external perspective on their writing and helped them to revise the content and
scope of their drafted paragraphs. People further used the tool to quickly gain
an overview of the text and developed strategies to integrate insights from the
automated summaries. More broadly, this work explores and highlights the value
of designing AI tools for writers, with Natural Language Processing (NLP)
capabilities that go beyond direct text generation and correction.
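The editor's core idea is a spectrum of summary granularities per paragraph, from central sentences down to keywords. As a minimal illustrative sketch (not the authors' actual system, which uses neural text summarization), a frequency-based extractive approach can produce both levels; the stopword list and scoring heuristic here are assumptions for the example:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "on", "for",
             "is", "are", "that", "this", "with", "as", "it"}

def word_freqs(text):
    """Count content words (stopwords removed) in a text."""
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    return Counter(words)

def summarize_paragraph(paragraph, n_sentences=1, n_keywords=5):
    """Return (central_sentences, keywords) for one paragraph.

    Sentences are scored by the average corpus frequency of their
    content words; keywords are simply the most frequent content words.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]
    freqs = word_freqs(paragraph)

    def score(sent):
        words = [w for w in re.findall(r"[a-z']+", sent.lower()) if w not in STOPWORDS]
        return sum(freqs[w] for w in words) / max(len(words), 1)

    central = sorted(sentences, key=score, reverse=True)[:n_sentences]
    keywords = [w for w, _ in freqs.most_common(n_keywords)]
    return central, keywords
```

A margin annotation could then render either the selected central sentence(s) or the keyword list, updated whenever the paragraph changes.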
Related papers
- Collage is the New Writing: Exploring the Fragmentation of Text and User Interfaces in AI Tools [24.71214613787985]
The essay employs Collage as an analytical lens to analyse the user interface design of recent AI writing tools.
A critical perspective relates the concerns that writers historically expressed through literary collage to AI writing tools.
arXiv Detail & Related papers (2024-05-27T14:35:17Z)
- Navigating the Path of Writing: Outline-guided Text Generation with Large Language Models [8.920436030483872]
We propose Writing Path, a framework that uses explicit outlines to guide Large Language Models (LLMs) in generating user-aligned text.
Our approach draws inspiration from structured writing planning and reasoning paths, focusing on capturing and reflecting user intentions throughout the writing process.
arXiv Detail & Related papers (2024-04-22T06:57:43Z)
- Towards Full Authorship with AI: Supporting Revision with AI-Generated Views [3.109675063162349]
Large language models (LLMs) are shaping a new user interface (UI) paradigm in writing tools by enabling users to generate text through prompts.
This paradigm shifts some creative control from the user to the system, thereby diminishing the user's authorship and autonomy in the writing process.
We introduce Textfocals, a prototype designed to investigate a human-centered approach that emphasizes the user's role in writing.
arXiv Detail & Related papers (2024-03-02T01:11:35Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
- VISAR: A Human-AI Argumentative Writing Assistant with Visual Programming and Rapid Draft Prototyping [13.023911633052482]
VISAR is an AI-enabled writing assistant system designed to help writers brainstorm and revise hierarchical goals within their writing context.
It organizes argument structures through synchronized text editing and visual programming, and enhances persuasiveness with argumentation spark recommendations.
A controlled lab study confirmed the usability and effectiveness of VISAR in facilitating the argumentative writing planning process.
arXiv Detail & Related papers (2023-04-16T15:29:03Z)
- SCROLLS: Standardized CompaRison Over Long Language Sequences [62.574959194373264]
We introduce SCROLLS, a suite of tasks that require reasoning over long texts.
SCROLLS contains summarization, question answering, and natural language inference tasks.
We make all datasets available in a unified text-to-text format and host a live leaderboard to facilitate research on model architecture and pretraining methods.
arXiv Detail & Related papers (2022-01-10T18:47:15Z)
- Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and Context-Aware Auto-Encoders [59.038157066874255]
We propose a novel framework called RankAE to perform chat summarization without employing manually labeled data.
RankAE consists of a topic-oriented ranking strategy that selects topic utterances according to centrality and diversity simultaneously.
A denoising auto-encoder is designed to generate succinct but context-informative summaries based on the selected utterances.
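Selecting utterances for both centrality and diversity at once is commonly done with a greedy maximal-marginal-relevance (MMR) style loop. The sketch below is a generic illustration of that idea, not RankAE's actual ranking strategy; the Jaccard similarity and the `lam` trade-off weight are assumptions for the example:

```python
import re

def tokens(text):
    """Lowercased word set for overlap-based similarity."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def select_utterances(utterances, k=2, lam=0.5):
    """Greedy MMR-style selection: reward centrality, penalize redundancy."""
    tok = [tokens(u) for u in utterances]
    n = len(tok)
    # Centrality: average similarity to all other utterances.
    centrality = [
        sum(jaccard(tok[i], tok[j]) for j in range(n) if j != i) / max(n - 1, 1)
        for i in range(n)
    ]
    selected = []
    while len(selected) < min(k, n):
        best, best_score = None, float("-inf")
        for i in range(n):
            if i in selected:
                continue
            redundancy = max((jaccard(tok[i], tok[j]) for j in selected), default=0.0)
            score = lam * centrality[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return [utterances[i] for i in selected]
```

With `lam=0.5`, a near-duplicate of an already selected utterance scores poorly even if it is central, so the second pick tends to cover a different topic.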
arXiv Detail & Related papers (2020-12-14T07:31:17Z)
- Abstractive Summarization of Spoken and Written Instructions with BERT [66.14755043607776]
We present the first application of the BERTSum model to conversational language.
We generate abstractive summaries of narrated instructional videos across a wide variety of topics.
We envision this integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.
arXiv Detail & Related papers (2020-08-21T20:59:34Z)
- TRIE: End-to-End Text Reading and Information Extraction for Document Understanding [56.1416883796342]
We propose a unified end-to-end text reading and information extraction network.
Multimodal visual and textual features from text reading are fused for information extraction.
Our proposed method significantly outperforms the state-of-the-art methods in both efficiency and accuracy.
arXiv Detail & Related papers (2020-05-27T01:47:26Z)
- Enabling Language Models to Fill in the Blanks [81.59381915581892]
We present a simple approach for text infilling, the task of predicting missing spans of text at any position in a document.
We train (or fine-tune) off-the-shelf language models on sequences containing the concatenation of artificially-masked text and the text which was masked.
We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics.
arXiv Detail & Related papers (2020-05-11T18:00:03Z)
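The infilling-by-language-modeling approach trains on strings that concatenate the masked text with its masked-out spans. A simplified sketch of building one such training string follows; it masks single words rather than arbitrary spans, and the `[blank]`/`[sep]`/`[answer]` token names are illustrative of the format rather than exact:

```python
def make_infilling_example(words, blank_positions):
    """Build one training string: masked text, a separator, then the
    masked-out words, each terminated by an [answer] token."""
    blanks = set(blank_positions)
    masked, answers = [], []
    for i, w in enumerate(words):
        if i in blanks:
            masked.append("[blank]")
            answers.append(w + " [answer]")
        else:
            masked.append(w)
    return " ".join(masked) + " [sep] " + " ".join(answers)
```

A standard language model fine-tuned on such strings learns, at inference time, to continue a masked document past `[sep]` with the missing spans in order.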
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.