Navigating the Path of Writing: Outline-guided Text Generation with Large Language Models
- URL: http://arxiv.org/abs/2404.13919v1
- Date: Mon, 22 Apr 2024 06:57:43 GMT
- Title: Navigating the Path of Writing: Outline-guided Text Generation with Large Language Models
- Authors: Yukyung Lee, Soonwon Ka, Bokyung Son, Pilsung Kang, Jaewook Kang
- Abstract summary: We propose Writing Path, a framework that uses explicit outlines to guide Large Language Models (LLMs) in generating user-aligned text.
Our approach draws inspiration from structured writing planning and reasoning paths, focusing on capturing and reflecting user intentions throughout the writing process.
- Score: 8.920436030483872
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large Language Models (LLMs) have significantly impacted the writing process, enabling collaborative content creation and enhancing productivity. However, generating high-quality, user-aligned text remains challenging. In this paper, we propose Writing Path, a framework that uses explicit outlines to guide LLMs in generating goal-oriented, high-quality pieces of writing. Our approach draws inspiration from structured writing planning and reasoning paths, focusing on capturing and reflecting user intentions throughout the writing process. We construct a diverse dataset from unstructured blog posts to benchmark writing performance and introduce a comprehensive evaluation framework assessing the quality of outlines and generated texts. Our evaluations with GPT-3.5-turbo, GPT-4, and HyperCLOVA X demonstrate that the Writing Path approach significantly enhances text quality according to both LLMs and human evaluations. This study highlights the potential of integrating writing-specific techniques into LLMs to enhance their ability to meet the diverse writing needs of users.
Related papers
- Exploring Precision and Recall to assess the quality and diversity of LLMs [82.21278402856079]
We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral.
This approach allows for a nuanced assessment of the quality and diversity of generated text without the need for aligned corpora.
arXiv Detail & Related papers (2024-02-16T13:53:26Z) - Evaluating Large Language Model Creativity from a Literary Perspective [13.672268920902187]
This paper assesses the potential for large language models to serve as assistive tools in the creative writing process.
We develop interactive and multi-voice prompting strategies that interleave background descriptions, instructions that guide composition, samples of text in the target style, and critical discussion of the given samples.
arXiv Detail & Related papers (2023-11-30T16:46:25Z) - Towards Improving Document Understanding: An Exploration on
Text-Grounding via MLLMs [96.54224331778195]
We present a text-grounding document understanding model, termed TGDoc, which enhances MLLMs with the ability to discern the spatial positioning of text within images.
We formulate instruction tuning tasks including text detection, recognition, and spotting to facilitate the cohesive alignment between the visual encoder and large language model.
Our method achieves state-of-the-art performance across multiple text-rich benchmarks, validating the effectiveness of our method.
arXiv Detail & Related papers (2023-11-22T06:46:37Z) - InternLM-XComposer: A Vision-Language Large Model for Advanced
Text-image Comprehension and Composition [111.65584066987036]
InternLM-XComposer is a vision-language large model that enables advanced image-text comprehension and composition.
It can effortlessly generate coherent and contextual articles that seamlessly integrate images.
It can intelligently identify the areas in the text where images would enhance the content and automatically insert the most appropriate visual candidates.
arXiv Detail & Related papers (2023-09-26T17:58:20Z) - Teach LLMs to Personalize -- An Approach inspired by Writing Education [37.198598706659524]
We propose a general approach for personalized text generation using large language models (LLMs).
Inspired by the practice of writing education, we develop a multistage and multitask framework to teach LLMs for personalized generation.
arXiv Detail & Related papers (2023-08-15T18:06:23Z) - Exploring the Use of Large Language Models for Reference-Free Text Quality Evaluation: An Empirical Study [63.27346930921658]
ChatGPT is capable of evaluating text quality effectively from various perspectives without reference.
The Explicit Score, which utilizes ChatGPT to generate a numeric score measuring text quality, is the most effective and reliable method among the three exploited approaches.
arXiv Detail & Related papers (2023-04-03T05:29:58Z) - Decoding the End-to-end Writing Trajectory in Scholarly Manuscripts [7.294418916091011]
We introduce a novel taxonomy that categorizes scholarly writing behaviors according to intention, writer actions, and the information types of the written data.
Motivated by cognitive writing theory, our taxonomy for scientific papers includes three levels of categorization in order to trace the general writing flow.
ManuScript intends to provide a complete picture of the scholarly writing process by capturing the linearity and non-linearity of writing trajectory.
arXiv Detail & Related papers (2023-03-31T20:33:03Z) - Large Language Models are Diverse Role-Players for Summarization Evaluation [82.31575622685902]
A document summary's quality can be assessed by human annotators on various criteria, both objective ones like grammar and correctness, and subjective ones like informativeness, succinctness, and appeal.
Most automatic evaluation methods, such as BLEU/ROUGE, may not adequately capture these dimensions.
We propose a new LLM-based evaluation framework that comprehensively compares generated text against reference text from both objective and subjective aspects.
arXiv Detail & Related papers (2023-03-27T10:40:59Z) - Beyond Text Generation: Supporting Writers with Continuous Automatic Text Summaries [27.853155569154705]
We propose a text editor to help users plan, structure and reflect on their writing process.
It provides continuously updated paragraph-wise summaries as margin annotations, using automatic text summarization.
arXiv Detail & Related papers (2022-08-19T13:09:56Z) - CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities [92.79451009324268]
We present CoAuthor, a dataset designed for revealing GPT-3's capabilities in assisting creative and argumentative writing.
We demonstrate that CoAuthor can address questions about GPT-3's language, ideation, and collaboration capabilities.
We discuss how this work may facilitate a more principled discussion around LMs' promises and pitfalls in relation to interaction design.
arXiv Detail & Related papers (2022-01-18T07:51:57Z) - DRAG: Director-Generator Language Modelling Framework for Non-Parallel Author Stylized Rewriting [9.275464023441227]
Author stylized rewriting is the task of rewriting an input text in a particular author's style.
We propose a Director-Generator framework to rewrite content in the target author's style.
arXiv Detail & Related papers (2021-01-28T06:52:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.