GROVE: A Retrieval-augmented Complex Story Generation Framework with A
Forest of Evidence
- URL: http://arxiv.org/abs/2310.05388v2
- Date: Tue, 24 Oct 2023 01:37:46 GMT
- Title: GROVE: A Retrieval-augmented Complex Story Generation Framework with A
Forest of Evidence
- Authors: Zhihua Wen, Zhiliang Tian, Wei Wu, Yuxin Yang, Yanqi Shi, Zhen Huang,
Dongsheng Li
- Abstract summary: We propose a retrieval-auGmented stoRy generation framework with a fOrest of eVidEnce (GROVE) to enhance stories' complexity.
We design an "asking-why" prompting scheme that extracts a forest of evidence, providing compensation for the ambiguities that may occur in the generated story.
- Score: 26.90143556633735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conditional story generation is significant in human-machine interaction,
particularly in producing stories with complex plots. While large language
models (LLMs) perform well on multiple NLP tasks, including story generation,
it is challenging to generate stories with both complex and creative plots.
Existing methods often rely on detailed prompts to guide LLMs to meet target
conditions, which inadvertently restrict the creative potential of the
generated stories. We argue that leveraging information from exemplary
human-written stories facilitates generating more diverse plotlines. Delving
deeper into story details helps build complex and credible plots. In this
paper, we propose a retrieval-auGmented stoRy generation
framework with a fOrest of eVidEnce (GROVE) to
enhance stories' complexity. We build a retrieval repository for target
conditions to produce few-shot examples to prompt LLMs. Additionally, we design
an "asking-why" prompting scheme that extracts a forest of evidence,
providing compensation for the ambiguities that may occur in the generated
story. This iterative process uncovers underlying story backgrounds. Finally,
we select the most fitting chains of evidence from the evidence forest and
integrate them into the generated story, thereby enhancing the narrative's
complexity and credibility. Experimental results and numerous examples verify
the effectiveness of our method.
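The pipeline the abstract describes (generate a draft from retrieved exemplars, iteratively "ask why" about ambiguities to grow an evidence forest, then select the best-fitting chain) can be sketched roughly as follows. This is a minimal illustration, not the authors' published code: the function names are hypothetical, and the LLM call is stubbed out so the control flow runs standalone.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; a deployment would query an
    actual model here."""
    return f"evidence for: {prompt[:30]}"

def build_evidence_forest(story: str, ambiguities: list[str],
                          depth: int = 2) -> dict:
    """For each ambiguity in the draft story, repeatedly ask 'why' and
    treat each answer as a new node, yielding one evidence chain per
    ambiguity (together, a forest)."""
    forest = {}
    for amb in ambiguities:
        node, chain = amb, []
        for _ in range(depth):
            answer = llm(f"In the story: {story}\nWhy is it that: {node}?")
            chain.append(answer)
            node = answer  # recurse: ask why about the new evidence
        forest[amb] = chain
    return forest

def select_chain(forest: dict) -> list[str]:
    """Pick the most fitting chain; trivially the longest one here,
    whereas the paper has the LLM judge fitness."""
    return max(forest.values(), key=len)
```

A selected chain would then be handed back to the model with a rewrite prompt that weaves the evidence into the story.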
Related papers
- Agents' Room: Narrative Generation through Multi-step Collaboration [54.98886593802834]
We propose a generation framework inspired by narrative theory that decomposes narrative writing into subtasks tackled by specialized agents.
We show that Agents' Room generates stories preferred by expert evaluators over those produced by baseline systems.
arXiv Detail & Related papers (2024-10-03T15:44:42Z)
- Generating Visual Stories with Grounded and Coreferent Characters [63.07511918366848]
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z)
- DataNarrative: Automated Data-Driven Storytelling with Visualizations and Texts [27.218934418961197]
We introduce a novel task for data story generation and a benchmark containing 1,449 stories from diverse sources.
To address the challenges of crafting coherent data stories, we propose a multiagent framework employing two LLM agents.
While our agentic framework generally outperforms non-agentic counterparts in both model-based and human evaluations, the results also reveal unique challenges in data story generation.
arXiv Detail & Related papers (2024-08-09T21:31:33Z)
- MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation [50.01780173691132]
We introduce Modular Story Premise Synthesis (MoPS).
MoPS breaks down story premises into modules like background and persona for automated design and generation.
Thorough evaluations demonstrate that our synthesized premises excel in diversity, fascination, completeness, and originality.
arXiv Detail & Related papers (2024-06-09T08:31:14Z)
- Guiding and Diversifying LLM-Based Story Generation via Answer Set Programming [1.7889842797216124]
Large language models (LLMs) are capable of generating stories in response to open-ended user requests.
We propose using a higher-level and more abstract symbolic specification of high-level story structure to guide and diversify story generation.
arXiv Detail & Related papers (2024-06-01T21:14:25Z)
- GENEVA: GENErating and Visualizing branching narratives using LLMs [15.43734266732214]
GENEVA, a prototype tool, generates a rich narrative graph with branching and reconverging storylines.
GENEVA has the potential to assist in game development, simulations, and other applications with game-like properties.
arXiv Detail & Related papers (2023-11-15T18:55:45Z)
- StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation [76.44802273236081]
We develop a model StoryDALL-E for story continuation, where the generated visual story is conditioned on a source image.
We show that our retro-fitting approach outperforms GAN-based models for story continuation and facilitates copying of visual elements from the source image.
Overall, our work demonstrates that pretrained text-to-image synthesis models can be adapted for complex and low-resource tasks like story continuation.
arXiv Detail & Related papers (2022-09-13T17:47:39Z)
- Event Transition Planning for Open-ended Text Generation [55.729259805477376]
Open-ended text generation tasks require models to generate a coherent continuation given limited preceding context.
We propose a novel two-stage method which explicitly arranges the ensuing events in open-ended text generation.
Our approach can be understood as a specially-trained coarse-to-fine algorithm.
arXiv Detail & Related papers (2022-04-20T13:37:51Z)
- Incorporating Commonsense Knowledge into Story Ending Generation via Heterogeneous Graph Networks [16.360265861788253]
We propose a Story Heterogeneous Graph Network (SHGN) to explicitly model both the information of story context at different levels and the multi-grained interactive relations among them.
In detail, we consider commonsense knowledge, words and sentences as three types of nodes.
We design two auxiliary tasks to implicitly capture the sentiment trend and the key events that lie in the context.
arXiv Detail & Related papers (2022-01-29T09:33:11Z)
- A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation [98.25464306634758]
We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories.
We employ multi-task learning that combines generation with a discriminative objective to distinguish true from fake stories.
Our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
arXiv Detail & Related papers (2020-01-15T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.