Crafting Narrative Closures: Zero-Shot Learning with SSM Mamba for Short Story Ending Generation
- URL: http://arxiv.org/abs/2410.10848v1
- Date: Fri, 04 Oct 2024 18:56:32 GMT
- Title: Crafting Narrative Closures: Zero-Shot Learning with SSM Mamba for Short Story Ending Generation
- Authors: Divyam Sharma, Divya Santhanam
- Abstract summary: Authors encounter moments of creative block, where the path forward in their narrative becomes obscured.
This paper is designed to address such moments by providing an innovative solution: a tool that completes stories based on given prompts.
By inputting a short story prompt, users can receive a conclusion to their story, articulated in one sentence or more, thereby enhancing the storytelling process with AI-driven creativity.
- Abstract: Writing stories is an engaging yet challenging endeavor. Often, authors encounter moments of creative block, where the path forward in their narrative becomes obscured. This paper is designed to address such moments by providing an innovative solution: a tool that completes stories based on given prompts. By inputting a short story prompt, users can receive a conclusion to their story, articulated in one sentence or more, thereby enhancing the storytelling process with AI-driven creativity. This tool aims not only to assist authors in navigating writer's block but also to offer a fun and interactive way for anyone to expand on story ideas spontaneously. Through this paper, we explore the intersection of artificial intelligence and creative writing, pushing the boundaries of how stories can be crafted and concluded. To create our final text-generation models, we used a pre-trained GPT-3.5 model and a newly created fine-tuned SSM-Mamba model, both of which perform well on a comprehensive list of metrics including BERT score, METEOR, BLEU, ROUGE, and perplexity. The SSM model has also been made public for the NLP community on HuggingFace as an open-source contribution; at the time of writing, it is the first state-space model for the story-generation task on HuggingFace.
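The abstract evaluates generated endings with overlap metrics such as BLEU and ROUGE. As a minimal illustration of how one such metric works (a sketch, not the authors' evaluation code), a unigram ROUGE-1 F1 score between a generated ending and a reference ending can be computed as the harmonic mean of token-overlap precision and recall:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram ROUGE-1 F1: clipped token overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # matches, clipped per token type
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical generated ending vs. a reference ending
generated = "the hero finally returned home"
reference = "the hero returned home at last"
score = rouge1_f1(generated, reference)  # 4 shared unigrams -> 8/11 ~ 0.727
```

Production evaluations would use established packages (and the n-gram, longest-common-subsequence, and model-based variants the paper lists), but the overlap-and-harmonic-mean structure is the same.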
Related papers
- Agents' Room: Narrative Generation through Multi-step Collaboration [54.98886593802834]
We propose a generation framework inspired by narrative theory that decomposes narrative writing into subtasks tackled by specialized agents.
We show that Agents' Room generates stories preferred by expert evaluators over those produced by baseline systems.
arXiv Detail & Related papers (2024-10-03T15:44:42Z)
- A Character-Centric Creative Story Generation via Imagination
We introduce a novel story generation framework called CCI (Character-centric Creative story generation via Imagination)
CCI features two modules for creative story generation: IG (Image-Guided Imagination) and MW (Multi-Writer model)
In the IG module, we utilize a text-to-image model to create visual representations of key story elements, such as characters, backgrounds, and main plots.
The MW module uses these story elements to generate multiple persona-description candidates and selects the best one to insert into the story, thereby enhancing the richness and depth of the narrative.
arXiv Detail & Related papers (2024-09-25T06:54:29Z)
- Generating Visual Stories with Grounded and Coreferent Characters [63.07511918366848]
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z)
- MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation [50.01780173691132]
We introduce Modular Story Premise Synthesis (MoPS)
MoPS breaks down story premises into modules like background and persona for automated design and generation.
Thorough evaluations demonstrate that our synthesized premises excel in diversity, fascination, completeness, and originality.
arXiv Detail & Related papers (2024-06-09T08:31:14Z)
- SARD: A Human-AI Collaborative Story Generation [0.0]
We propose SARD, a drag-and-drop visual interface for generating a multi-chapter story using large language models.
Our evaluation of the usability of SARD and its creativity support shows that while node-based visualization of the narrative may help writers build a mental model, it imposes unnecessary mental overhead on the writer.
We also found that AI generates stories that are less lexically diverse, irrespective of the complexity of the story.
arXiv Detail & Related papers (2024-03-03T17:48:42Z)
- Intelligent Grimm -- Open-ended Visual Storytelling via Latent Diffusion Models [70.86603627188519]
We focus on a novel, yet challenging task of generating a coherent image sequence based on a given storyline, denoted as open-ended visual storytelling.
We propose a learning-based auto-regressive image generation model, termed as StoryGen, with a novel vision-language context module.
We show StoryGen can generalize to unseen characters without any optimization, and generate image sequences with coherent content and consistent characters.
arXiv Detail & Related papers (2023-06-01T17:58:50Z)
- Album Storytelling with Iterative Story-aware Captioning and Large Language Models [86.6548090965982]
We study how to transform an album into vivid and coherent stories, a task we refer to as "album storytelling".
With recent advances in Large Language Models (LLMs), it is now possible to generate lengthy, coherent text.
Our method effectively generates more accurate and engaging stories for albums, with enhanced coherence and vividness.
arXiv Detail & Related papers (2023-05-22T11:45:10Z)
- Conveying the Predicted Future to Users: A Case Study of Story Plot Prediction [14.036772394560238]
We create a system that produces a short description that narrates a predicted plot.
Our goal is to assist writers in crafting a consistent and compelling story arc.
arXiv Detail & Related papers (2023-02-17T20:10:55Z)
- Inferring the Reader: Guiding Automated Story Generation with Commonsense Reasoning [12.264880519328353]
We introduce Commonsense-inference Augmented neural StoryTelling (CAST), a framework for introducing commonsense reasoning into the generation process.
We find that our CAST method produces significantly more coherent, on-topic, enjoyable and fluent stories than existing models in both the single-character and two-character settings.
arXiv Detail & Related papers (2021-05-04T06:40:33Z)
- Collaborative Storytelling with Large-scale Neural Language Models [6.0794985566317425]
We introduce the task of collaborative storytelling, where an artificial intelligence agent and a person collaborate to create a unique story by taking turns adding to it.
We present a collaborative storytelling system which works with a human storyteller to create a story by generating new utterances based on the story so far.
arXiv Detail & Related papers (2020-11-20T04:36:54Z)
- Cue Me In: Content-Inducing Approaches to Interactive Story Generation [74.09575609958743]
We focus on the task of interactive story generation, where the user provides the model mid-level sentence abstractions.
We present two content-inducing approaches to effectively incorporate this additional information.
Experimental results from both automatic and human evaluations show that these methods produce more topically coherent and personalized stories.
arXiv Detail & Related papers (2020-10-20T00:36:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.