GPT-4 Generated Narratives of Life Events using a Structured Narrative Prompt: A Validation Study
- URL: http://arxiv.org/abs/2402.05435v2
- Date: Fri, 12 Jul 2024 13:46:47 GMT
- Title: GPT-4 Generated Narratives of Life Events using a Structured Narrative Prompt: A Validation Study
- Authors: Christopher J. Lynch, Erik Jensen, Madison H. Munro, Virginia Zamponi, Joseph Martinez, Kevin O'Brien, Brandon Feldhaus, Katherine Smith, Ann Marie Reinhold, Ross Gore
- Abstract summary: We employ a zero-shot structured narrative prompt to generate 24,000 narratives using OpenAI's GPT-4.
From this dataset, we manually classify 2,880 narratives and evaluate their validity in conveying birth, death, hiring, and firing events.
We extend our analysis to predict the classifications of the remaining 21,120 narratives.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) play a pivotal role in generating vast arrays of narratives, facilitating a systematic exploration of their effectiveness for communicating life events in narrative form. In this study, we employ a zero-shot structured narrative prompt to generate 24,000 narratives using OpenAI's GPT-4. From this dataset, we manually classify 2,880 narratives and evaluate their validity in conveying birth, death, hiring, and firing events. Remarkably, 87.43% of the narratives sufficiently convey the intention of the structured prompt. To automate the identification of valid and invalid narratives, we train and validate nine Machine Learning models on the classified datasets. Leveraging these models, we extend our analysis to predict the classifications of the remaining 21,120 narratives. All the ML models excelled at classifying valid narratives as valid, but struggled to simultaneously classify invalid narratives as invalid. Our findings not only advance the study of LLM capabilities, limitations, and validity but also offer practical insights for narrative generation and natural language processing applications.
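The abstract's pipeline (manually label a subset of generated narratives, train a classifier, then predict labels for the unlabeled remainder) can be illustrated with a minimal sketch. This is not the authors' code: the texts and labels below are toy placeholders, and a single TF-IDF + logistic-regression pipeline stands in for the nine ML models the paper compares.

```python
# Minimal sketch of the label-then-predict workflow described in the
# abstract. All narratives and labels here are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A small manually classified subset (stand-in for the 2,880 labeled narratives).
labeled = [
    ("The family welcomed a newborn daughter into the world.", "valid"),
    ("After years of service, he was let go from his position.", "valid"),
    ("The report listed quarterly revenue figures only.", "invalid"),
    ("Colorless green ideas slept furiously all afternoon.", "invalid"),
]
texts, labels = zip(*labeled)

# One example classifier; the paper trains and validates nine.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Predict validity for the remaining unlabeled narratives
# (stand-in for the 21,120 unclassified ones).
unlabeled = ["She was hired as the new head of engineering."]
predicted = clf.predict(unlabeled)
print(predicted[0])
```

The same fit/predict pattern applies regardless of which of the nine models is used; only the estimator inside the pipeline changes.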
Related papers
- Identifying economic narratives in large text corpora -- An integrated approach using Large Language Models [0.4649452333875421]
We evaluate the benefits of Large Language Models (LLMs) for extracting economic narratives from texts. We apply a rigorous narrative definition and compare GPT-4o outputs to gold-standard narratives produced by expert annotators. Our results suggest GPT-4o is capable of extracting valid economic narratives in a structured format, but still falls short of expert-level performance when handling complex documents and narratives.
arXiv Detail & Related papers (2025-06-18T01:00:59Z) - Explingo: Explaining AI Predictions using Large Language Models [47.21393184176602]
Large Language Models (LLMs) can transform explanations into human-readable, narrative formats that align with natural communication.
The Narrator takes in ML explanations and transforms them into natural-language descriptions.
The Grader scores these narratives on a set of metrics including accuracy, completeness, fluency, and conciseness.
The findings from this work have been integrated into an open-source tool that makes narrative explanations available for further applications.
arXiv Detail & Related papers (2024-12-06T16:01:30Z) - MLD-EA: Check and Complete Narrative Coherence by Introducing Emotions and Actions [8.06073345741722]
We introduce the Missing Logic Detector by Emotion and Action (MLD-EA) model.
It identifies narrative gaps and generates coherent sentences that integrate seamlessly with the story's emotional and logical flow.
This work fills a gap in NLP research and advances broader goals of creating more sophisticated and reliable story-generation systems.
arXiv Detail & Related papers (2024-12-03T23:01:21Z) - Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the causal reasoning abilities of large language models (LLMs) through the representative problem of inferring causal relationships from narratives.
We find that even state-of-the-art language models rely on unreliable shortcuts, both in terms of the narrative presentation and their parametric knowledge.
arXiv Detail & Related papers (2024-10-31T12:48:58Z) - Causal Micro-Narratives [62.47217054314046]
We present a novel approach to classify causal micro-narratives from text.
These narratives are sentence-level explanations of the cause(s) and/or effect(s) of a target subject.
arXiv Detail & Related papers (2024-10-07T17:55:10Z) - Mapping News Narratives Using LLMs and Narrative-Structured Text Embeddings [0.0]
We introduce a numerical narrative representation grounded in structuralist linguistic theory.
We extract the actants using an open-source LLM and integrate them into a Narrative-Structured Text Embedding.
We demonstrate the analytical insights of the method on the example of 5000 full-text news articles from Al Jazeera and The Washington Post on the Israel-Palestine conflict.
arXiv Detail & Related papers (2024-09-10T14:15:30Z) - Are Large Language Models Capable of Generating Human-Level Narratives? [114.34140090869175]
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of discourse features can enhance storytelling, as demonstrated by an over 40% improvement in neural storytelling.
arXiv Detail & Related papers (2024-07-18T08:02:49Z) - Large-scale study of human memory for meaningful narratives [0.0]
We develop a pipeline that uses large language models (LLMs) to design naturalistic narrative stimuli for large-scale recall and recognition memory experiments.
We performed online memory experiments with a large number of participants and collected recognition and recall data for narratives of different sizes.
arXiv Detail & Related papers (2023-11-08T15:11:57Z) - Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning [5.893124686141782]
We focus on fine-grained debate topics and formulate a new task of distilling a countable set of narratives.
We present a crowdsourced dataset of 12 controversial topics, comprising more than 120k arguments, claims, and comments from heterogeneous sources, each annotated with a narrative label.
We find that generated claims with supported evidence can be used to improve the performance of narrative classification models.
arXiv Detail & Related papers (2023-09-19T06:42:37Z) - Text Classification via Large Language Models [63.1874290788797]
We introduce Clue And Reasoning Prompting (CARP) to address complex linguistic phenomena involved in text classification.
Remarkably, CARP yields new SOTA performance on 4 out of 5 widely-used text-classification benchmarks.
More importantly, we find that CARP delivers impressive abilities on low-resource and domain-adaptation setups.
arXiv Detail & Related papers (2023-05-15T06:24:45Z) - Paragraph-level Commonsense Transformers with Recurrent Memory [77.4133779538797]
We train a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives.
Our results show that PARA-COMET outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.
arXiv Detail & Related papers (2020-10-04T05:24:12Z) - Exploring aspects of similarity between spoken personal narratives by disentangling them into narrative clause types [13.350982138577038]
We introduce a corpus of real-world spoken personal narratives comprising 10,296 narrative clauses from 594 video transcripts.
We then ask non-narrative experts to annotate those clauses under Labov's sociolinguistic model of personal narratives.
Third, we train a classifier that reaches 84.7% F-score for the highest-agreed clauses.
Our approach is intended to help inform machine learning methods aimed at studying or representing personal narratives.
arXiv Detail & Related papers (2020-05-26T14:34:07Z) - Temporal Embeddings and Transformer Models for Narrative Text Understanding [72.88083067388155]
We present two approaches to narrative text understanding for character relationship modelling.
The temporal evolution of these relations is described by dynamic word embeddings, that are designed to learn semantic changes over time.
A supervised learning approach based on the state-of-the-art transformer model BERT is used instead to detect static relations between characters.
arXiv Detail & Related papers (2020-03-19T14:23:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.