Affective and Dynamic Beam Search for Story Generation
- URL: http://arxiv.org/abs/2310.15079v1
- Date: Mon, 23 Oct 2023 16:37:14 GMT
- Title: Affective and Dynamic Beam Search for Story Generation
- Authors: Tenghao Huang, Ehsan Qasemi, Bangzheng Li, He Wang, Faeze Brahman,
Muhao Chen, Snigdha Chaturvedi
- Abstract summary: We propose Affective Story Generator (AffGen) for generating interesting narratives.
AffGen employs two novel techniques: Dynamic Beam Sizing and Affective Reranking.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Storytelling's captivating potential makes it a fascinating research area,
with implications for entertainment, education, therapy, and cognitive studies.
In this paper, we propose Affective Story Generator (AffGen) for generating
interesting narratives. AffGen introduces "intriguing twists" in narratives by
employing two novel techniques: Dynamic Beam Sizing and Affective Reranking.
Dynamic Beam Sizing encourages less predictable, more captivating word choices
using a contextual multi-armed bandit model. Affective Reranking prioritizes
sentence candidates based on affect intensity. Our empirical evaluations, both
automatic and human, demonstrate AffGen's superior performance over existing
baselines in generating affectively charged and interesting narratives. Our
ablation study and analysis provide insights into the strengths and weaknesses
of AffGen.
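A minimal sketch of the Affective Reranking idea: score each beam-search sentence candidate by affect intensity and prefer the most affectively charged one. The lexicon, its values, and the mean-based scoring below are illustrative assumptions, not the paper's actual resources or method.

```python
# Toy affect-intensity lexicon (word -> intensity in [0, 1]); a real system
# would use a full affect lexicon. Values here are assumed for illustration.
AFFECT_LEXICON = {
    "terrified": 0.9, "thrilled": 0.85, "betrayed": 0.8,
    "walked": 0.1, "said": 0.05, "looked": 0.1,
}

def affect_intensity(sentence: str) -> float:
    """Mean affect intensity over words found in the lexicon (0 if none)."""
    scores = [AFFECT_LEXICON[w] for w in sentence.lower().split()
              if w in AFFECT_LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def affective_rerank(candidates: list[str]) -> list[str]:
    """Order sentence candidates from most to least affectively intense."""
    return sorted(candidates, key=affect_intensity, reverse=True)

candidates = [
    "She walked to the door and looked outside.",
    "She was terrified when the door creaked open.",
]
print(affective_rerank(candidates)[0])  # the more charged candidate ranks first
```

In the paper's setting this reranking would sit on top of beam search, selecting among complete sentence candidates rather than individual tokens.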
Related papers
- Generating Visual Stories with Grounded and Coreferent Characters [63.07511918366848]
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z)
- Predicting Affective States from Screen Text Sentiment [11.375704805270171]
The potential of analysing the textual content viewed on smartphones to predict affective states remains underexplored.
We employed linear regression, zero-shot, and multi-shot prompting to analyse relationships between screen text and affective states.
Our findings indicate that multi-shot prompting substantially outperforms both linear regression and zero-shot prompting.
arXiv Detail & Related papers (2024-08-23T05:25:11Z)
- Are Large Language Models Capable of Generating Human-Level Narratives? [114.34140090869175]
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of discourse features can enhance storytelling, as demonstrated by an over-40% improvement in neural storytelling.
arXiv Detail & Related papers (2024-07-18T08:02:49Z)
- Modeling Emotional Trajectories in Written Stories Utilizing Transformers and Weakly-Supervised Learning [47.02027575768659]
We introduce continuous valence and arousal labels for an existing dataset of children's stories originally annotated with discrete emotion categories.
For predicting the thus obtained emotionality signals, we fine-tune a DeBERTa model and improve upon this baseline via a weakly supervised learning approach.
A detailed analysis shows the extent to which the results vary depending on factors such as the author, the individual story, or the section within the story.
arXiv Detail & Related papers (2024-06-04T12:17:16Z)
- SciMON: Scientific Inspiration Machines Optimized for Novelty [68.46036589035539]
We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature.
We take a dramatic departure with a novel setting in which models use background contexts as input.
We present SciMON, a modeling framework that uses retrieval of "inspirations" from past scientific papers.
arXiv Detail & Related papers (2023-05-23T17:12:08Z)
- DeltaScore: Fine-Grained Story Evaluation with Perturbations [69.33536214124878]
We introduce DELTASCORE, a novel methodology that employs perturbation techniques for the evaluation of nuanced story aspects.
Our central proposition is that the extent to which a story excels in a specific aspect (e.g., fluency) correlates with the magnitude of its susceptibility to particular perturbations.
We measure the quality of an aspect by calculating the likelihood difference between pre- and post-perturbation states using pre-trained language models.
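The likelihood-difference computation can be sketched as follows. The `log_likelihood` function here is a crude stand-in for a pre-trained language model; a real implementation would sum token log-probabilities from such a model. The scorer and example perturbation are illustrative assumptions.

```python
def log_likelihood(text: str) -> float:
    # Placeholder scorer: penalizes length and repeated words as a crude
    # fluency proxy; a real system would query a pre-trained LM instead.
    words = text.lower().split()
    repeats = len(words) - len(set(words))
    return -0.5 * len(words) - 2.0 * repeats

def delta_score(story: str, perturbed: str) -> float:
    """Likelihood difference between pre- and post-perturbation states;
    a larger value means the perturbation hurt the text more."""
    return log_likelihood(story) - log_likelihood(perturbed)

story = "The knight rode into the quiet village."
# A fluency-targeted perturbation: duplicate a word.
perturbed = "The knight rode rode into the quiet village."
print(delta_score(story, perturbed) > 0)  # True: the perturbation lowered likelihood
```

Different aspects (fluency, coherence, etc.) would each get their own perturbation family, with the likelihood drop quantifying quality along that aspect.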
arXiv Detail & Related papers (2023-03-15T23:45:54Z)
- Stylized Story Generation with Style-Guided Planning [38.791298336259146]
We propose a new task, stylized story generation, namely generating stories with a specified style given a leading context.
Our model can controllably generate emotion-driven or event-driven stories based on the ROCStories dataset.
arXiv Detail & Related papers (2021-05-18T15:55:38Z)
- Adapting a Language Model for Controlled Affective Text Generation [2.9267797650223653]
We adapt the state-of-the-art language generation models to generate affective (emotional) text.
We propose to incorporate emotion as a prior for state-of-the-art probabilistic text generation models such as GPT-2.
The model gives a user the flexibility to control the category and intensity of emotion as well as the topic of the generated text.
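One way to read "emotion as a prior" is as a reweighting of next-token probabilities toward emotionally aligned words, with a user-controlled intensity. The toy vocabulary, the per-token emotion scores, and the additive log-prior mixing below are illustrative assumptions, not the paper's actual formulation.

```python
import math

VOCAB = ["smiled", "wept", "said", "screamed"]
# Assumed per-token affinity with the target emotion "joy".
JOY_PRIOR = {"smiled": 0.9, "wept": 0.05, "said": 0.3, "screamed": 0.1}

def apply_emotion_prior(logits, emotion_prior, intensity):
    """Shift token logits toward emotionally aligned words; `intensity`
    controls how strongly the prior reweights generation."""
    return [logit + intensity * math.log(emotion_prior[tok] + 1e-9)
            for tok, logit in zip(VOCAB, logits)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

base_logits = [1.0, 1.0, 1.2, 1.0]  # base LM slightly prefers "said"
biased = apply_emotion_prior(base_logits, JOY_PRIOR, intensity=2.0)
probs = softmax(biased)
print(VOCAB[probs.index(max(probs))])  # "smiled" dominates under a joy prior
```

Setting `intensity=0` recovers the base model's preferences, mirroring the user-controlled emotion intensity described in the abstract.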
arXiv Detail & Related papers (2020-11-08T15:24:39Z)
- Noisy Agents: Self-supervised Exploration by Predicting Auditory Events [127.82594819117753]
We propose a novel type of intrinsic motivation for Reinforcement Learning (RL) that encourages the agent to understand the causal effect of its actions.
We train a neural network to predict the auditory events and use the prediction errors as intrinsic rewards to guide RL exploration.
Experimental results on Atari games show that our new intrinsic motivation significantly outperforms several state-of-the-art baselines.
arXiv Detail & Related papers (2020-07-27T17:59:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.