Are Large Language Models Capable of Generating Human-Level Narratives?
- URL: http://arxiv.org/abs/2407.13248v2
- Date: Fri, 4 Oct 2024 18:31:58 GMT
- Title: Are Large Language Models Capable of Generating Human-Level Narratives?
- Authors: Yufei Tian, Tenghao Huang, Miri Liu, Derek Jiang, Alexander Spangher, Muhao Chen, Jonathan May, Nanyun Peng
- Abstract summary: This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of these discourse features can enhance storytelling, as demonstrated by an over 40% improvement in neural storytelling in terms of diversity, suspense, and arousal.
- Score: 114.34140090869175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression. We introduce a novel computational framework to analyze narratives through three discourse-level aspects: i) story arcs, ii) turning points, and iii) affective dimensions, including arousal and valence. By leveraging expert and automatic annotations, we uncover significant discrepancies between LLM- and human-written stories. While human-written stories are suspenseful, arousing, and diverse in narrative structures, LLM stories are homogeneously positive and lack tension. Next, we measure narrative reasoning skills as a precursor to generative capacities, concluding that most LLMs fall short of human abilities in discourse understanding. Finally, we show that explicit integration of the aforementioned discourse features can enhance storytelling, as demonstrated by an over 40% improvement in neural storytelling in terms of diversity, suspense, and arousal.
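As a concrete illustration of the affective-dimension analysis, here is a minimal sketch of tracing a story's per-sentence valence/arousal trajectory. The toy lexicon and scoring scheme are illustrative stand-ins; the paper itself relies on expert and automatic annotations rather than this scheme.

```python
# Illustrative sketch only: trace a story's valence/arousal arc with a toy
# word-level lexicon. The lexicon values are invented for demonstration.
from statistics import mean

# Hypothetical (valence, arousal) entries: valence in -1..1, arousal in 0..1.
TOY_LEXICON = {
    "joy": (0.9, 0.7), "calm": (0.6, 0.1), "fear": (-0.8, 0.9),
    "loss": (-0.7, 0.4), "hope": (0.7, 0.5), "danger": (-0.6, 0.8),
}

def affect_trajectory(sentences):
    """Return one (valence, arousal) pair per sentence, averaged over lexicon hits."""
    trajectory = []
    for sent in sentences:
        words = [w.strip(".,!?") for w in sent.lower().split()]
        hits = [TOY_LEXICON[w] for w in words if w in TOY_LEXICON]
        if hits:
            trajectory.append((round(mean(v for v, _ in hits), 2),
                               round(mean(a for _, a in hits), 2)))
        else:
            trajectory.append((0.0, 0.0))  # neutral when no lexicon hit
    return trajectory

story = ["A sense of danger filled the night.", "By dawn, hope and calm returned."]
print(affect_trajectory(story))  # [(-0.6, 0.8), (0.65, 0.3)]
```

In terms of the paper's findings, LLM stories would tend to show a flat, uniformly positive trajectory here, while human stories swing through higher-arousal, lower-valence regions.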
Related papers
- Explingo: Explaining AI Predictions using Large Language Models [47.21393184176602]
Large Language Models (LLMs) can transform explanations into human-readable, narrative formats that align with natural communication.
The Narrator takes in ML explanations and transforms them into natural-language descriptions.
The Grader scores these narratives on a set of metrics including accuracy, completeness, fluency, and conciseness.
The findings from this work have been integrated into an open-source tool that makes narrative explanations available for further applications.
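As a rough sketch of the two-stage Narrator/Grader pattern described above (the `call_llm` helper and the prompt wordings are hypothetical placeholders, not Explingo's actual interface):

```python
# Hypothetical sketch of a Narrator -> Grader pipeline; `call_llm` stands in
# for whatever chat-completion client you use. This is not Explingo's API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def narrate(ml_explanation: str) -> str:
    # Narrator: turn a raw ML explanation (e.g., feature attributions) into prose.
    return call_llm(
        "Rewrite this model explanation as a short, plain-language narrative:\n"
        + ml_explanation
    )

METRICS = ("accuracy", "completeness", "fluency", "conciseness")

def grade(narrative: str, ml_explanation: str) -> dict[str, int]:
    # Grader: score the narrative on each metric from 1 (poor) to 5 (excellent).
    return {
        m: int(call_llm(
            f"Rate the narrative's {m} from 1 to 5 given the source explanation.\n"
            f"Source: {ml_explanation}\nNarrative: {narrative}\nReply with one digit."
        ))
        for m in METRICS
    }
```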
arXiv Detail & Related papers (2024-12-06T16:01:30Z)
- Mapping News Narratives Using LLMs and Narrative-Structured Text Embeddings [0.0]
We introduce a numerical narrative representation grounded in structuralist linguistic theory.
We extract the actants using an open-source LLM and integrate them into a Narrative-Structured Text Embedding.
We demonstrate the analytical insights of the method on the example of 5000 full-text news articles from Al Jazeera and The Washington Post on the Israel-Palestine conflict.
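A minimal sketch of the narrative-structured embedding idea, assuming a Greimas-style actant inventory and an off-the-shelf sentence encoder; the exact role set and pooling used in the paper may differ.

```python
# Sketch: embed each actant slot separately and concatenate into one
# narrative-structured vector. Role set and encoder are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
ROLES = ["subject", "object", "sender", "receiver", "helper", "opponent"]

def narrative_embedding(actants: dict[str, str]) -> np.ndarray:
    # Fixed role order keeps dimensions comparable across articles;
    # empty slots simply embed the empty string.
    parts = [encoder.encode(actants.get(role, "")) for role in ROLES]
    return np.concatenate(parts)

# The actant dict itself would come from an LLM extraction step (not shown).
vec = narrative_embedding({"subject": "the negotiators", "opponent": "hardliners"})
```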
arXiv Detail & Related papers (2024-09-10T14:15:30Z)
- Measuring Psychological Depth in Language Models [50.48914935872879]
We introduce the Psychological Depth Scale (PDS), a novel framework rooted in literary theory that measures an LLM's ability to produce authentic and narratively complex stories.
We empirically validate our framework by showing that humans can consistently evaluate stories based on the PDS (Krippendorff's alpha = 0.72).
Surprisingly, GPT-4 stories either surpassed or were statistically indistinguishable from highly-rated human-written stories sourced from Reddit.
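For reference, the agreement statistic cited above can be computed with the `krippendorff` package; the ratings below are made up for illustration.

```python
# Sketch: inter-rater consistency via Krippendorff's alpha (pip install krippendorff).
import numpy as np
import krippendorff

# Rows = annotators, columns = stories; np.nan marks a missing rating.
ratings = np.array([
    [4, 3, 5, 2, np.nan],
    [4, 3, 4, 2, 3],
    [5, 3, 4, 1, 3],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="interval")
print(f"Krippendorff's alpha = {alpha:.2f}")  # the paper reports 0.72 on PDS
```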
arXiv Detail & Related papers (2024-06-18T14:51:54Z)
- HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs [30.636456219922906]
Empathy is a cornerstone of prosocial behavior and can be evoked by sharing personal experiences in stories.
While empathy is influenced by narrative content, people intuitively respond to the way a story is told as well, i.e., its narrative style.
We empirically examine and quantify this relationship between style and empathy using LLMs and large-scale crowdsourcing studies.
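A toy sketch of one way such a quantification could look: correlating a single style feature with crowdsourced empathy ratings. The feature and all numbers are invented; the paper's analysis is far richer than this.

```python
# Sketch: style-empathy relationship as a simple Pearson correlation.
from scipy.stats import pearsonr

vividness_scores = [0.2, 0.5, 0.7, 0.3, 0.9, 0.6]  # hypothetical per-story style feature
empathy_ratings = [2.1, 3.0, 3.8, 2.4, 4.5, 3.2]   # mean crowd rating per story

r, p = pearsonr(vividness_scores, empathy_ratings)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```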
arXiv Detail & Related papers (2024-05-27T20:00:38Z)
- Creating Suspenseful Stories: Iterative Planning with Large Language Models [2.6923151107804055]
We propose a novel iterative-prompting-based planning method that is grounded in two theoretical foundations of story suspense.
To the best of our knowledge, this paper is the first attempt at suspenseful story generation with large language models.
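A hypothetical sketch of the iterative-prompting loop, with `call_llm` as a placeholder client and prompts that only paraphrase a common suspense heuristic rather than the paper's actual templates:

```python
# Hypothetical sketch of iterative planning for suspense: each step replans
# with the grown outline and tries to shrink the protagonist's options.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def plan_suspenseful_story(premise: str, steps: int = 5) -> list[str]:
    outline: list[str] = []
    for _ in range(steps):
        event = call_llm(
            f"Premise: {premise}\n"
            f"Outline so far: {' '.join(outline)}\n"
            "Propose the next plot event. It should remove one of the "
            "protagonist's remaining ways out, raising suspense."
        )
        outline.append(event)  # the next iteration conditions on this event
    return outline
```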
arXiv Detail & Related papers (2024-02-27T01:25:52Z)
- Are NLP Models Good at Tracing Thoughts: An Overview of Narrative Understanding [21.900015612952146]
Narrative understanding involves capturing the author's cognitive processes, providing insights into their knowledge, intentions, beliefs, and desires.
Although large language models (LLMs) excel in generating grammatically coherent text, their ability to comprehend the author's thoughts remains uncertain.
This hinders the practical applications of narrative understanding.
arXiv Detail & Related papers (2023-10-28T18:47:57Z)
- The Next Chapter: A Study of Large Language Models in Storytelling [51.338324023617034]
The application of prompt-based learning with large language models (LLMs) has exhibited remarkable performance in diverse natural language processing (NLP) tasks.
This paper conducts a comprehensive investigation, utilizing both automatic and human evaluation, to compare the story generation capacity of LLMs with that of recent story generation models.
The results demonstrate that LLMs generate stories of significantly higher quality compared to other story generation models.
arXiv Detail & Related papers (2023-01-24T02:44:02Z)
- Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models [95.88620740809004]
We study differences in the narrative flow of events in autobiographical versus imagined stories using GPT-3.
We found that imagined stories have higher sequentiality than autobiographical stories.
In comparison to imagined stories, autobiographical stories contain more concrete words and words related to the first person.
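Sequentiality, roughly, measures how much conditioning on the preceding sentences (versus the topic alone) improves a language model's prediction of each sentence. A hedged sketch with GPT-2 follows; the underlying study's exact formulation may differ.

```python
# Hedged sketch of a sequentiality-style score: mean NLL of each sentence
# given the topic alone, minus its NLL given the topic plus prior sentences.
# Higher values mean the preceding context helps prediction more.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def nll(prefix: str, target: str) -> float:
    """Mean negative log-likelihood of `target` tokens given a non-empty `prefix`."""
    prefix_ids = tok(prefix, return_tensors="pt").input_ids
    ids = torch.cat([prefix_ids, tok(target, return_tensors="pt").input_ids], dim=1)
    logp = torch.log_softmax(model(ids).logits[0, :-1], dim=-1)  # row i predicts ids[i+1]
    start = prefix_ids.shape[1] - 1                              # first target position
    tgt = ids[0, prefix_ids.shape[1]:]
    return -logp[start:start + tgt.shape[0]].gather(1, tgt.unsqueeze(1)).mean().item()

def sequentiality(topic: str, sentences: list[str]) -> float:
    gains = []
    for i, sent in enumerate(sentences):
        topic_only = nll(topic, " " + sent)
        contextual = nll(topic + " " + " ".join(sentences[:i]), " " + sent)
        gains.append(topic_only - contextual)
    return sum(gains) / len(gains)
```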
arXiv Detail & Related papers (2022-01-07T20:10:47Z)
- Paragraph-level Commonsense Transformers with Recurrent Memory [77.4133779538797]
We train a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives.
Our results show that PARA-COMET outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.
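A hypothetical sketch of the recurrent-memory idea, with `infer` as a placeholder for a COMET-style model rather than PARA-COMET's real interface:

```python
# Hypothetical sketch: inferences produced for earlier sentences are carried
# forward so later inferences stay consistent with the whole paragraph.
def infer(sentence: str, memory: list[str]) -> str:
    raise NotImplementedError("plug in a COMET-style commonsense model here")

def paragraph_inferences(sentences: list[str]) -> list[str]:
    memory: list[str] = []  # running paragraph-level memory of inferences
    results: list[str] = []
    for sent in sentences:
        inference = infer(sent, memory)  # condition on all prior inferences
        memory.append(inference)         # recurrence: feed it to later steps
        results.append(inference)
    return results
```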
arXiv Detail & Related papers (2020-10-04T05:24:12Z)