Great Expectations: Unsupervised Inference of Suspense, Surprise and
Salience in Storytelling
- URL: http://arxiv.org/abs/2206.09708v1
- Date: Mon, 20 Jun 2022 11:00:23 GMT
- Title: Great Expectations: Unsupervised Inference of Suspense, Surprise and
Salience in Storytelling
- Authors: David Wilmot
- Abstract summary: The thesis trains a series of deep learning models by reading stories alone, a self-supervised (or unsupervised) approach.
Narrative theory methods are applied to the knowledge built into the deep learning models to directly infer suspense, surprise, and salience in stories.
- Score: 3.42658286826597
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Stories interest us not because they are a sequence of mundane and
predictable events but because they have drama and tension. Crucial to creating
dramatic and exciting stories are surprise and suspense. The thesis trains a
series of deep learning models by reading stories alone, a self-supervised (or
unsupervised) approach. Narrative theory methods (rules and procedures) are
applied to the knowledge built into the deep learning models to directly infer
suspense, surprise, and salience in stories. Extensions add memory and external
knowledge from story plots and from Wikipedia to infer salience on novels such
as Great Expectations and plays such as Macbeth. Other work adapts the models
as a planning system for generating original stories.
The thesis finds that applying narrative theory to deep learning models yields
inferences that align with those of the typical reader. In follow-up work, the insights could help
improve computer models for tasks such as automatic story writing and
assistance for writing, summarising or editing stories. Moreover, the approach
of applying narrative theory to the qualities inherent in a system that
teaches itself (self-supervised) by reading books, watching videos, and
listening to audio is much cheaper and more adaptable to other domains and
tasks. Progress in improving self-supervised systems is swift. As such, the
thesis's broader relevance is that combining domain expertise with these
systems may be a more productive way to apply machine learning in many areas
of interest.
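To make the inference concrete: the narrative-theoretic notions the thesis operationalises (in the spirit of Ely, Frankel and Kamenica's definitions) treat surprise as how far the story state has just moved and suspense as how far the state is expected to move next. Below is a minimal sketch of those two measures, assuming an off-the-shelf sentence encoder as a stand-in for the thesis's story-state models and hand-written continuations in place of its generation machinery; it is an illustration, not the thesis's implementation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in encoder

def surprise_per_step(sentences):
    """Surprise at step t ~ how far the story state just jumped."""
    states = np.asarray(encoder.encode(sentences))        # (T, d) story states
    return np.linalg.norm(np.diff(states, axis=0), axis=1)

def suspense_now(sentences, sampled_continuations):
    """Suspense ~ expected jump of the next state, estimated over a
    handful of plausible continuations (hand-written here)."""
    current = np.asarray(encoder.encode([sentences[-1]]))[0]
    futures = np.asarray(encoder.encode(sampled_continuations))
    return float(np.linalg.norm(futures - current, axis=1).mean())

story = ["Pip visits the graveyard at dusk.",
         "A convict seizes him and demands food.",
         "Pip steals a pie and a file from home."]
print(surprise_per_step(story))
print(suspense_now(story, ["Soldiers arrive hunting the convict.",
                           "The convict quietly disappears into the marshes."]))
```

Euclidean distance between embeddings stands in here for the divergence between story states; the encoder name and the continuations are illustrative only.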
Related papers
- Agents' Room: Narrative Generation through Multi-step Collaboration [54.98886593802834]
We propose a generation framework inspired by narrative theory that decomposes narrative writing into subtasks tackled by specialized agents (a toy pipeline in this spirit is sketched after this list).
We show that Agents' Room generates stories preferred by expert evaluators over those produced by baseline systems.
arXiv Detail & Related papers (2024-10-03T15:44:42Z)
- Generating Visual Stories with Grounded and Coreferent Characters [63.07511918366848]
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z)
- Are Large Language Models Capable of Generating Human-Level Narratives? [114.34140090869175]
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of discourse features can enhance storytelling, as demonstrated by an over 40% improvement in neural storytelling.
arXiv Detail & Related papers (2024-07-18T08:02:49Z)
- Creating Suspenseful Stories: Iterative Planning with Large Language Models [2.6923151107804055]
We propose a novel iterative-prompting-based planning method that is grounded in two theoretical foundations of story suspense.
To the best of our knowledge, this paper is the first attempt at suspenseful story generation with large language models.
arXiv Detail & Related papers (2024-02-27T01:25:52Z)
- Acting as Inverse Inverse Planning [19.267798639508946]
We offer a novel computational framework for storytelling tools.
To simulate the audience, we borrow an established principle from cognitive science.
We treat storytelling as "*inverse* inverse planning," the task of choosing actions to manipulate an inverse planner's inferences (a toy illustration appears after this list).
arXiv Detail & Related papers (2023-05-26T13:26:36Z)
- Neural Story Planning [8.600049807193413]
We present an approach to story plot generation that unifies causal planning with neural language models.
Our system infers the preconditions for events in the story and then events that will cause those conditions to become true (this backward-chaining loop is sketched after this list).
Results indicate that our proposed method produces more coherent plotlines than several strong baselines.
arXiv Detail & Related papers (2022-12-16T21:29:41Z)
- A Corpus for Understanding and Generating Moral Stories [84.62366141696901]
We present STORAL, a new dataset of Chinese and English human-written moral stories.
We propose two understanding tasks and two generation tasks to assess machines' abilities to understand and generate moral stories.
arXiv Detail & Related papers (2022-04-20T13:12:36Z)
- Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models [95.88620740809004]
We study differences in the narrative flow of events in autobiographical versus imagined stories using GPT-3 (the underlying sequentiality measure is sketched after this list).
We found that imagined stories have higher sequentiality than autobiographical stories.
In comparison to imagined stories, autobiographical stories contain more concrete words and words related to the first person.
arXiv Detail & Related papers (2022-01-07T20:10:47Z)
- Guiding Neural Story Generation with Reader Models [5.935317028008691]
We introduce Story generation with Reader Models (StoRM), a framework in which a reader model is used to reason about how the story should progress.
Experiments show that our model produces significantly more coherent and on-topic stories, outperforming baselines in dimensions including plot plausibility and staying on topic.
arXiv Detail & Related papers (2021-12-16T03:44:01Z)
- A guided journey through non-interactive automatic story generation [0.0]
The article presents requirements for creative systems, three types of models of creativity (computational, socio-cultural, and individual), and models of human creative writing.
The article concludes that the autonomous generation and adoption of the main idea to be conveyed, and the autonomous design of criteria that ensure creativity, are possibly two of the most important topics for future research.
arXiv Detail & Related papers (2021-10-08T10:01:36Z)
- A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation [98.25464306634758]
We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories.
We employ multi-task learning, which combines the generation objective with a discriminative objective for distinguishing true from fake stories (a minimal version of such a combined loss is sketched after this list).
Our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
arXiv Detail & Related papers (2020-01-15T05:42:27Z)
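To illustrate the decomposition idea in Agents' Room above, here is a toy pipeline in which specialized agents each tackle one subtask of narrative writing. The subtask names, the orchestration order, and the stubbed call_llm helper are all assumptions for illustration, not the paper's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class StoryDraft:
    plan: str = ""
    characters: str = ""
    scenes: list = field(default_factory=list)

def call_llm(role, prompt):
    """Stand-in for a real language-model call; each specialized agent
    would supply its own role-specific instructions."""
    return f"[{role} output for: {prompt[:40]}...]"

def planner(draft, premise):
    draft.plan = call_llm("planner", f"Outline a plot for: {premise}")

def character_agent(draft):
    draft.characters = call_llm("character-designer", draft.plan)

def scene_writer(draft):
    for beat in ("setup", "confrontation", "resolution"):
        draft.scenes.append(call_llm("writer", f"{draft.plan} / {beat}"))

def agents_room(premise):
    draft = StoryDraft()
    planner(draft, premise)      # 1. high-level plot plan
    character_agent(draft)       # 2. character sheets from the plan
    scene_writer(draft)          # 3. scene-by-scene drafting
    return "\n\n".join(draft.scenes)

print(agents_room("An orphan receives a mysterious fortune."))
```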
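The "inverse inverse planning" idea above can be made concrete with a toy example (my construction, not the paper's model): a soft-rational agent walks toward one of two goals, the audience infers the goal from the agent's steps by Bayesian inverse planning, and the storyteller searches for the action sequence that keeps the audience maximally uncertain while still approaching the true goal.

```python
import itertools
import numpy as np

GOALS = [0, 4]  # two candidate destinations on a line of states 0..4

def action_probs(state, goal, beta=3.0):
    """Boltzmann-rational agent: P(step -1 or +1 | state, goal)."""
    utils = np.array([-abs((state - 1) - goal), -abs((state + 1) - goal)])
    p = np.exp(beta * utils)
    return p / p.sum()

def audience_posterior(trajectory):
    """Inverse planning: the audience's P(goal | observed steps)."""
    logp = np.zeros(len(GOALS))
    for state, action in trajectory:
        for i, goal in enumerate(GOALS):
            logp[i] += np.log(action_probs(state, goal)[0 if action == -1 else 1])
    p = np.exp(logp - logp.max())
    return p / p.sum()

def most_suspenseful_plan(start, true_goal, horizon=3):
    """Inverse inverse planning: choose steps that keep the audience's
    posterior over goals maximally uncertain while nearing the true goal."""
    best, best_entropy = None, -1.0
    for actions in itertools.product((-1, +1), repeat=horizon):
        state, traj = start, []
        for a in actions:
            traj.append((state, a))
            state = min(4, max(0, state + a))
        if abs(state - true_goal) >= abs(start - true_goal):
            continue  # must still make net progress toward the true goal
        post = audience_posterior(traj)
        entropy = float(-(post * np.log(post + 1e-12)).sum())
        if entropy > best_entropy:
            best, best_entropy = actions, entropy
    return best, best_entropy

print(most_suspenseful_plan(start=2, true_goal=4))  # feints before committing
```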
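Neural Story Planning, above, alternates between inferring the preconditions of an event and finding an event that makes each unmet precondition true. A schematic of that backward-chaining loop, with hard-coded lookup tables standing in for the paper's language-model queries (the table contents and names are hypothetical):

```python
# Hypothetical stand-ins for the paper's language-model inferences.
PRECONDITIONS = {
    "Pip becomes a gentleman": ["Pip has money", "Pip moves to London"],
    "Magwitch secretly pays Pip's way": [],
}
ENABLING_EVENT = {
    "Pip has money": "Magwitch secretly pays Pip's way",
}

def backward_chain(goal_event, initial_conditions):
    """Work backwards from the goal: for each unsatisfied precondition,
    find an event that makes it true, then plan for that event too."""
    plan, frontier = [goal_event], list(PRECONDITIONS.get(goal_event, []))
    while frontier:
        cond = frontier.pop()
        if cond in initial_conditions:
            continue                       # already true at story start
        event = ENABLING_EVENT.get(cond)
        if event is None:
            continue                       # the real system re-queries the LM
        plan.insert(0, event)
        frontier.extend(PRECONDITIONS.get(event, []))
    return plan

print(backward_chain("Pip becomes a gentleman", {"Pip moves to London"}))
```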
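The GPT-3 study above rests on a "sequentiality" measure: how much more predictable a sentence becomes when the preceding story, rather than the topic alone, is in context. A minimal approximation of that measure, assuming GPT-2 in place of GPT-3 and mean per-token negative log-likelihood as the score:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def nll(context, sentence):
    """Mean negative log-likelihood of `sentence` given `context`."""
    ctx = tok(context, return_tensors="pt").input_ids
    tgt = tok(" " + sentence, return_tensors="pt").input_ids
    ids = torch.cat([ctx, tgt], dim=1)
    logprobs = torch.log_softmax(model(ids).logits[:, :-1], dim=-1)
    # positions ctx_len-1 onward predict exactly the target's tokens
    scores = logprobs[0, ctx.size(1) - 1:].gather(1, tgt[0].unsqueeze(1))
    return -scores.mean().item()

def sequentiality(topic, sentences):
    """Average, per sentence, of (topic-only NLL) - (topic+history NLL);
    higher means the story flows more 'sequentially'."""
    gaps = []
    for i, s in enumerate(sentences):
        history = (topic + " " + " ".join(sentences[:i])).strip()
        gaps.append(nll(topic, s) - nll(history, s))
    return sum(gaps) / len(gaps)

print(sequentiality("a trip to the dentist",
                    ["I booked the appointment.", "The drill was loud."]))
```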
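For the knowledge-enhanced pretraining model above, the multi-task setup pairs the usual generation loss with a discriminative loss for telling true stories apart from corrupted ones. A minimal sketch of such a combined objective (the weighting term alpha, the tensor shapes, and how "fake" stories are constructed are all assumptions):

```python
import torch
import torch.nn.functional as F

def multitask_loss(lm_logits, lm_targets, cls_logits, cls_targets, alpha=1.0):
    """Generation loss plus an auxiliary true-vs-fake classification loss.

    lm_logits:   (batch, seq, vocab) next-token predictions
    lm_targets:  (batch, seq) gold token ids
    cls_logits:  (batch, 2) true-vs-fake story scores
    cls_targets: (batch,) labels, 1 = true story, 0 = corrupted one
    """
    gen = F.cross_entropy(lm_logits.reshape(-1, lm_logits.size(-1)),
                          lm_targets.reshape(-1))
    disc = F.cross_entropy(cls_logits, cls_targets)
    return gen + alpha * disc
```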