EIPE-text: Evaluation-Guided Iterative Plan Extraction for Long-Form Narrative Text Generation
- URL: http://arxiv.org/abs/2310.08185v1
- Date: Thu, 12 Oct 2023 10:21:37 GMT
- Title: EIPE-text: Evaluation-Guided Iterative Plan Extraction for Long-Form Narrative Text Generation
- Authors: Wang You, Wenshan Wu, Yaobo Liang, Shaoguang Mao, Chenfei Wu, Maosong Cao, Yuzhe Cai, Yiduo Guo, Yan Xia, Furu Wei, Nan Duan
- Abstract summary: We propose a new framework called Evaluation-guided Iterative Plan Extraction for long-form narrative text generation (EIPE-text).
EIPE-text has three stages: plan extraction, learning, and inference.
We evaluate the effectiveness of EIPE-text in the domains of novels and storytelling.
- Score: 114.50719922069261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plan-and-Write is a common hierarchical approach in long-form narrative text
generation, which first creates a plan to guide the narrative writing.
Following this approach, several studies rely on simply prompting large
language models for planning, which often yields suboptimal results. In this
paper, we propose a new framework called Evaluation-guided Iterative Plan
Extraction for long-form narrative text generation (EIPE-text), which extracts
plans from the corpus of narratives and utilizes the extracted plans to
construct a better planner. EIPE-text has three stages: plan extraction,
learning, and inference. In the plan extraction stage, it iteratively extracts
and improves plans from the narrative corpus and constructs a plan corpus. We
propose a question-answering (QA) based evaluation mechanism to automatically
evaluate the plans and generate detailed plan refinement instructions to guide
the iterative improvement. In the learning stage, we build a better planner by
fine-tuning with the plan corpus or in-context learning with examples in the
plan corpus. Finally, we leverage a hierarchical approach to generate long-form
narratives. We evaluate the effectiveness of EIPE-text in the domains of novels
and storytelling. Both GPT-4-based evaluations and human evaluations
demonstrate that our method can generate more coherent and relevant long-form
narratives. Our code will be released in the future.
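Since the code has not yet been released, here is a minimal Python sketch of the plan-extraction stage as the abstract describes it. The llm helper, the prompt wording, and the stopping threshold are illustrative assumptions, not the authors' implementation.

def llm(prompt: str) -> str:
    # Placeholder: wire in any chat/completion backend here.
    raise NotImplementedError

def extract_plan(narrative: str) -> str:
    return llm("Extract a hierarchical plan (chapters, scenes, key events) "
               "for this narrative:\n" + narrative)

def qa_evaluate(narrative: str, plan: str) -> tuple[float, str]:
    # QA-based evaluation: derive questions from the narrative, count how
    # many the plan answers, and collect refinement instructions for the rest.
    questions = llm("Write one comprehension question per key plot point in:\n"
                    + narrative).splitlines()
    verdicts = [llm("Does this plan answer the question? Reply 'yes' or give "
                    f"a refinement instruction.\nQuestion: {q}\nPlan:\n{plan}")
                for q in questions if q.strip()]
    misses = [v for v in verdicts if not v.strip().lower().startswith("yes")]
    score = 1.0 - len(misses) / max(1, len(verdicts))
    return score, "\n".join(misses)

def iterative_plan_extraction(narrative: str, rounds: int = 5) -> str:
    plan = extract_plan(narrative)
    for _ in range(rounds):
        score, instructions = qa_evaluate(narrative, plan)
        if score >= 0.9 or not instructions:  # assumed stopping criterion
            break
        plan = llm("Revise the plan following these instructions:\n"
                   + instructions + "\nCurrent plan:\n" + plan)
    return plan

Running this loop over every narrative in a corpus yields the plan corpus for the learning stage, where the planner is either fine-tuned on the plans or conditioned on them as in-context examples; inference then generates the narrative hierarchically from a fresh plan.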
Related papers
- Analysis of Plan-based Retrieval for Grounded Text Generation [78.89478272104739]
Hallucinations occur when a language model is given a generation task outside its parametric knowledge.
A common strategy to address this limitation is to infuse the language models with retrieval mechanisms.
We analyze how planning can be used to guide retrieval to further reduce the frequency of hallucinations.
arXiv Detail & Related papers (2024-08-20T02:19:35Z)
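To make the entry above concrete, here is a rough Python sketch in which a drafted plan drives retrieval step by step before writing; the llm and search helpers are hypothetical placeholders, not the paper's system.

def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder completion backend

def search(query: str, k: int = 3) -> list[str]:
    raise NotImplementedError  # placeholder retriever (BM25, dense, etc.)

def plan_guided_generation(task: str) -> str:
    # Plan first, then retrieve evidence per step, so the final text is
    # grounded in retrieved passages rather than parametric memory alone.
    steps = [s for s in llm("List the steps of a plan for: " + task).splitlines()
             if s.strip()]
    evidence = []
    for step in steps:
        evidence.extend(search(step))
    return llm("Write the text for the task using only this evidence:\n"
               + "\n".join(evidence) + "\nTask: " + task)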
- Exploring and Benchmarking the Planning Capabilities of Large Language Models [57.23454975238014]
This work lays the foundations for improving planning capabilities of large language models (LLMs).
We construct a comprehensive benchmark suite encompassing both classical planning benchmarks and natural language scenarios.
We investigate the use of many-shot in-context learning to enhance LLM planning, exploring the relationship between increased context length and improved planning performance.
arXiv Detail & Related papers (2024-06-18T22:57:06Z)
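A toy sketch of the many-shot setup studied above: pack as many solved planning examples into the prompt as a context budget allows, then append the new problem. The (problem, plan) record format and the character budget are assumptions for illustration.

def build_many_shot_prompt(examples: list[tuple[str, str]], task: str,
                           budget_chars: int = 200_000) -> str:
    # Concatenate solved (problem, plan) pairs until the budget is spent;
    # a longer context admits more shots, the variable under study.
    shots, used = [], 0
    for problem, plan in examples:
        shot = f"Problem:\n{problem}\nPlan:\n{plan}\n\n"
        if used + len(shot) > budget_chars:
            break
        shots.append(shot)
        used += len(shot)
    return "".join(shots) + f"Problem:\n{task}\nPlan:\n"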
- Socratic Planner: Inquiry-Based Zero-Shot Planning for Embodied Instruction Following [17.608330952846075]
Embodied Instruction Following (EIF) is the task of executing natural language instructions by navigating and interacting with objects in 3D environments.
One of the primary challenges in EIF is compositional task planning, which is often addressed with supervised or in-context learning with labeled data.
We introduce the Socratic Planner, the first zero-shot planning method that infers without the need for any training data.
arXiv Detail & Related papers (2024-04-21T08:10:20Z)
- PARADISE: Evaluating Implicit Planning Skills of Language Models with Procedural Warnings and Tips Dataset [0.0]
We present PARADISE, an abductive reasoning task using a Q&A format on practical procedural text sourced from wikiHow.
It involves warning and tip inference tasks directly associated with goals, excluding intermediary steps, with the aim of testing the ability of the models to infer implicit knowledge of the plan solely from the given goal.
Our experiments, utilizing fine-tuned language models and zero-shot prompting, reveal the effectiveness of task-specific small models over large language models in most scenarios.
arXiv Detail & Related papers (2024-03-05T18:01:59Z)
- Little Red Riding Hood Goes Around the Globe: Crosslingual Story Planning and Generation with Large Language Models [69.60579227637399]
Previous work has demonstrated the effectiveness of planning for story generation exclusively in a monolingual setting focusing primarily on English.
We propose a new task of cross-lingual story generation with planning and present a new dataset for this task.
arXiv Detail & Related papers (2022-12-20T17:42:16Z)
- Data-to-text Generation with Variational Sequential Planning [74.3955521225497]
We consider the task of data-to-text generation, which aims to create textual output from non-linguistic input.
We propose a neural model enhanced with a planning component responsible for organizing high-level information in a coherent and meaningful way.
We infer latent plans sequentially with a structured variational model, while interleaving the steps of planning and generation.
arXiv Detail & Related papers (2022-02-28T13:17:59Z)
- Summarize, Outline, and Elaborate: Long-Text Generation via Hierarchical Supervision from Extractive Summaries [46.183289748907804]
We propose SOE, a pipelined system that summarizes, outlines, and elaborates for long-text generation.
SOE produces long texts with significantly better quality, along with faster convergence speed.
arXiv Detail & Related papers (2020-10-14T13:22:20Z)
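At inference time the pipeline above reduces to three staged calls, sketched below with a generic llm placeholder; this omits the hierarchical supervision from extractive summaries that the paper uses to train the stages.

def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for the trained stage models

def soe_generate(context: str) -> str:
    # Summarize -> outline -> elaborate, each stage conditioning on the last.
    summary = llm("Summarize the story so far:\n" + context)
    outline = llm("Outline the upcoming sections given this summary:\n" + summary)
    return llm("Elaborate this outline into full paragraphs:\n" + outline)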
- Plan ahead: Self-Supervised Text Planning for Paragraph Completion Task [14.483791451578007]
We propose a self-supervised text planner SSPlanner that predicts what to say first.
It then guides the pretrained language model (surface realization) using the predicted content.
We also find that a combination of noun and verb keywords is the most effective for content selection.
arXiv Detail & Related papers (2020-10-11T02:38:21Z)
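To make the keyword finding concrete, the small sketch below selects noun and verb keywords with NLTK's part-of-speech tagger as a stand-in for SSPlanner's learned content predictor; the guided-generation step itself is not shown.

import nltk

def noun_verb_keywords(text: str) -> list[str]:
    # Resource names differ across NLTK versions, so try both spellings.
    for pkg in ("punkt", "punkt_tab",
                "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
        try:
            nltk.download(pkg, quiet=True)
        except Exception:
            pass
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    # Keep nouns (NN*) and verbs (VB*), the keyword types found most
    # effective for content selection.
    return [word for word, tag in tagged if tag.startswith(("NN", "VB"))]

The selected keywords would then be prepended to the generation context so the pretrained language model realizes them in the completed paragraph.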