Do readers prefer AI-generated Italian short stories?
- URL: http://arxiv.org/abs/2601.17363v1
- Date: Sat, 24 Jan 2026 08:15:13 GMT
- Title: Do readers prefer AI-generated Italian short stories?
- Authors: Michael Farrell
- Abstract summary: This study investigates whether readers prefer AI-generated short stories in Italian over one written by a renowned Italian author. In a blind setup, 20 participants read and evaluated three stories, two created with ChatGPT-4o and one by Alberto Moravia. The results showed that the AI-written texts received slightly higher average ratings and were more frequently preferred, although differences were modest.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study investigates whether readers prefer AI-generated short stories in Italian over one written by a renowned Italian author. In a blind setup, 20 participants read and evaluated three stories, two created with ChatGPT-4o and one by Alberto Moravia, without being informed of their origin. To explore potential influencing factors, reading habits and demographic data, comprising age, gender, education and first language, were also collected. The results showed that the AI-written texts received slightly higher average ratings and were more frequently preferred, although differences were modest. No statistically significant associations were found between text preference and demographic or reading-habit variables. These findings challenge assumptions about reader preference for human-authored fiction and raise questions about the necessity of synthetic-text editing in literary contexts.
Related papers
- Can professional translators identify machine-generated text? [0.0]
This study investigates whether professional translators can reliably identify short stories generated in Italian by artificial intelligence (AI) without prior specialized training. Sixty-nine translators took part in an in-person experiment, where they assessed three anonymized short stories. Low burstiness and narrative contradiction emerged as the most reliable indicators of synthetic authorship.
arXiv Detail & Related papers (2026-01-22T10:25:52Z) - The Reader is the Metric: How Textual Features and Reader Profiles Explain Conflicting Evaluations of AI Creative Writing [1.3654846342364306]
We use five public datasets (1,471 stories, 101 annotators including critics, students, and lay readers) to extract 17 reference-less textual features. We model individual reader preferences, deriving feature importance vectors that reflect their textual priorities. Our results quantitatively explain how measurements of literary quality are a function of how text features align with each reader's preferences.
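The idea of a per-reader feature-importance vector can be illustrated with a minimal sketch. This is not the paper's method; it simply assumes that correlating each textual feature with one reader's story ratings yields a rough "importance" score per feature. All function names here are hypothetical.

```python
# Hedged sketch: derive a per-reader feature-importance vector by
# correlating each textual feature with that reader's story ratings.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 for zero-variance inputs."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def importance_vector(feature_matrix, ratings):
    """feature_matrix: one row per story, one column per textual feature.
    ratings: one reader's score for each story.
    Returns one correlation per feature column."""
    n_features = len(feature_matrix[0])
    return [pearson([row[j] for row in feature_matrix], ratings)
            for j in range(n_features)]
```

Comparing such vectors across critics, students, and lay readers would show which textual priorities diverge between reader groups.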
arXiv Detail & Related papers (2025-06-03T18:50:22Z) - Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing [55.2480439325792]
This study systematically evaluates twelve state-of-the-art AI-text detectors using our AI-Polished-Text Evaluation dataset. Our findings reveal that detectors frequently flag even minimally polished text as AI-generated, struggle to differentiate between degrees of AI involvement, and exhibit biases against older and smaller models.
arXiv Detail & Related papers (2025-02-21T18:45:37Z) - Is Human-Like Text Liked by Humans? Multilingual Human Detection and Preference Against AI [95.81924314159943]
We find that major gaps between human and machine text lie in concreteness, cultural nuances, and diversity. We also find that humans do not always prefer human-written text, particularly when they cannot clearly identify its source.
arXiv Detail & Related papers (2025-02-17T09:56:46Z) - Group-Adaptive Threshold Optimization for Robust AI-Generated Text Detection [58.419940585826744]
We introduce FairOPT, an algorithm for group-specific threshold optimization for probabilistic AI-text detectors. We partitioned data into subgroups based on attributes (e.g., text length and writing style) and implemented FairOPT to learn decision thresholds for each group to reduce discrepancy. Our framework paves the way for more robust classification in AI-generated content detection via post-processing.
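The core idea of group-specific thresholds can be sketched in a few lines. This is not the FairOPT algorithm itself, just a minimal illustration under the assumption that we want each subgroup's false-positive rate on human-written texts to sit near a common target; the function name and grid search are illustrative choices.

```python
# Hedged sketch: one decision threshold per subgroup for a
# probabilistic AI-text detector, chosen so each group's
# false-positive rate on human texts (label 0) is near a target.
def per_group_thresholds(scores, labels, groups, target_fpr=0.05):
    """scores: detector probabilities in [0, 1]; labels: 0 = human,
    1 = AI; groups: subgroup key per example (e.g. 'short', 'long')."""
    thresholds = {}
    for g in set(groups):
        # Human-written scores in this subgroup only.
        human = [s for s, y, gg in zip(scores, labels, groups)
                 if gg == g and y == 0]
        best_t, best_gap = 0.5, float("inf")
        for t in (i / 100 for i in range(1, 100)):
            fpr = sum(s >= t for s in human) / max(len(human), 1)
            gap = abs(fpr - target_fpr)
            if gap < best_gap:
                best_t, best_gap = t, gap
        thresholds[g] = best_t
    return thresholds
```

A single global threshold would over-flag whichever group the detector scores highest on average; per-group thresholds absorb that shift as a post-processing step.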
arXiv Detail & Related papers (2025-02-06T21:58:48Z) - "It was 80% me, 20% AI": Seeking Authenticity in Co-Writing with Large Language Models [97.22914355737676]
We examine whether and how writers want to preserve their authentic voice when co-writing with AI tools.
Our findings illuminate conceptions of authenticity in human-AI co-creation.
Readers' responses showed less concern about human-AI co-writing.
arXiv Detail & Related papers (2024-11-20T04:42:32Z) - Are Large Language Models Capable of Generating Human-Level Narratives? [114.34140090869175]
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of discourse features can enhance storytelling, as demonstrated by an over 40% improvement in neural storytelling.
arXiv Detail & Related papers (2024-07-18T08:02:49Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [51.26815896167173]
We present a comprehensive tertiary analysis of PAMI reviews along three complementary dimensions. Our analyses reveal distinctive organizational patterns as well as persistent gaps in current review practices. Finally, our evaluation of state-of-the-art AI-generated reviews indicates encouraging advances in coherence and organization.
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - Experimental Narratives: A Comparison of Human Crowdsourced Storytelling and AI Storytelling [0.0]
The study analyzes 250 stories authored by crowdworkers in June 2019 and 80 stories generated by GPT-3.5 and GPT-4.
Both crowdworkers and large language models responded to identical prompts about creating and falling in love with an artificial human.
The analysis reveals that narratives from GPT-3.5 and particularly GPT-4 are more progressive in terms of gender roles and sexuality than those written by humans.
arXiv Detail & Related papers (2023-10-19T16:54:38Z) - An Analysis of Reader Engagement in Literary Fiction through Eye Tracking and Linguistic Features [11.805980147608178]
We analyzed the significance of various qualities of the text in predicting how engaging a reader is likely to find it.
Furthering our understanding of what captivates readers in fiction will help better inform models used in creative narrative generation.
arXiv Detail & Related papers (2023-06-06T22:14:59Z) - Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models [95.88620740809004]
We study differences in the narrative flow of events in autobiographical versus imagined stories using GPT-3.
We found that imagined stories have higher sequentiality than autobiographical stories.
In comparison to imagined stories, autobiographical stories contain more concrete words and words related to the first person.
arXiv Detail & Related papers (2022-01-07T20:10:47Z) - Results of a Single Blind Literary Taste Test with Short Anonymized Novel Fragments [4.695687634290403]
It is an open question to what extent perceptions of literary quality are derived from text-intrinsic versus social factors.
We report the results of a pilot study to gauge the effect of textual features on literary ratings of Dutch-language novels.
We find moderate to strong correlations between the questionnaire ratings and the survey ratings, although the predictions are closer to the survey ratings.
arXiv Detail & Related papers (2020-11-03T11:10:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.