Structuralist Approach to AI Literary Criticism: Leveraging Greimas Semiotic Square for Large Language Models
- URL: http://arxiv.org/abs/2506.21360v1
- Date: Thu, 26 Jun 2025 15:10:24 GMT
- Title: Structuralist Approach to AI Literary Criticism: Leveraging Greimas Semiotic Square for Large Language Models
- Authors: Fangzhou Dong, Yifan Zeng, Yingpeng Sang, Hong Shen
- Abstract summary: GLASS (Greimas Literary Analysis via Semiotic Square) is a structured analytical framework based on Greimas Semiotic Square (GSS). GLASS facilitates the rapid dissection of narrative structures and deep meanings in narrative works. This research provides an AI-based tool for literary research and education, offering insights into the cognitive mechanisms underlying literary engagement.
- Score: 2.7323591332394166
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large Language Models (LLMs) excel in understanding and generating text but struggle to provide professional literary criticism for works with profound thoughts and complex narratives. This paper proposes GLASS (Greimas Literary Analysis via Semiotic Square), a structured analytical framework based on Greimas Semiotic Square (GSS), to enhance LLMs' ability to conduct in-depth literary analysis. GLASS facilitates the rapid dissection of narrative structures and deep meanings in narrative works. We propose the first dataset for GSS-based literary criticism, featuring detailed analyses of 48 works. We then introduce quantitative metrics for GSS-based literary criticism using the LLM-as-a-judge paradigm. Compared against expert criticism across multiple works and LLMs, our framework achieves high performance. Finally, we apply GLASS to 39 classic works, producing original and high-quality analyses that address existing research gaps. This research provides an AI-based tool for literary research and education, offering insights into the cognitive mechanisms underlying literary engagement.
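The abstract names two technical ingredients: structuring the analysis around the four positions of the Greimas Semiotic Square, and scoring the resulting criticism with an LLM-as-a-judge metric. The listing includes no code, so the sketch below is only an illustration of how such a pipeline might be wired up under those assumptions; the `SemioticSquare` dataclass and the `build_glass_prompt` and `judge_analysis` helpers are hypothetical names, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SemioticSquare:
    """Four positions of a Greimas Semiotic Square for one narrative work."""
    s1: str       # primary seme, e.g. "freedom"
    s2: str       # its contrary, e.g. "constraint"
    not_s1: str   # contradiction of s1, e.g. "non-freedom"
    not_s2: str   # contradiction of s2, e.g. "non-constraint"

def build_glass_prompt(work_title: str, square: SemioticSquare) -> str:
    """Turn the square's oppositions into a structured analysis prompt."""
    return (
        f"Analyze '{work_title}' with the Greimas Semiotic Square.\n"
        f"Contrariety: {square.s1} vs. {square.s2}\n"
        f"Contradiction: {square.s1} vs. {square.not_s1}; {square.s2} vs. {square.not_s2}\n"
        f"Implication: {square.not_s2} -> {square.s1}; {square.not_s1} -> {square.s2}\n"
        "Explain how the narrative moves between these positions and what "
        "deeper meanings the oppositions reveal."
    )

def judge_analysis(judge_llm: Callable[[str], str], analysis: str, rubric: str) -> float:
    """LLM-as-a-judge stub: ask a judge model to rate an analysis on a 1-10 scale."""
    reply = judge_llm(
        f"Rubric:\n{rubric}\n\nCandidate analysis:\n{analysis}\n\n"
        "Return only a numeric score from 1 to 10:"
    )
    try:
        return float(reply.strip().split()[0])
    except (ValueError, IndexError):
        return float("nan")  # judge returned something unparseable

# Example wiring with a stand-in judge (replace the lambda with a real LLM call):
square = SemioticSquare("freedom", "constraint", "non-freedom", "non-constraint")
prompt = build_glass_prompt("Moby-Dick", square)
score = judge_analysis(lambda p: "8", analysis="...", rubric="depth, coherence, fidelity")
```

In the paper's setting the judge presumably compares model output against the 48 expert-annotated analyses; here the rubric and the judge model are left as parameters so the stub stays provider-agnostic.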
Related papers
- Literary Evidence Retrieval via Long-Context Language Models [39.174955595897366]
How well do modern long-context language models understand literary fiction?
We build a benchmark where the entire text of a primary source is provided to an LLM alongside literary criticism with a missing quotation from that work.
This setting mirrors the human process of literary analysis by requiring models to perform both global narrative reasoning and close textual examination.
arXiv Detail & Related papers (2025-06-03T17:19:45Z)
- Large Language Models for Automated Literature Review: An Evaluation of Reference Generation, Abstract Writing, and Review Composition [2.048226951354646]
Large language models (LLMs) have emerged as a potential solution to automate the complex processes involved in writing literature reviews.
This study introduces a framework to automatically evaluate the performance of LLMs in three key tasks of literature writing.
arXiv Detail & Related papers (2024-12-18T08:42:25Z)
- Hierarchical Narrative Analysis: Unraveling Perceptions of Generative AI [1.1874952582465599]
We propose a method that leverages large language models (LLMs) to extract and organize these structures into a hierarchical framework.
We validate this approach by analyzing public opinions on generative AI collected by Japan's Agency for Cultural Affairs.
Our analysis provides clearer visualization of the factors influencing divergent opinions on generative AI, offering deeper insights into the structures of agreement and disagreement.
arXiv Detail & Related papers (2024-09-17T09:56:12Z)
- Inclusivity in Large Language Models: Personality Traits and Gender Bias in Scientific Abstracts [49.97673761305336]
We evaluate three large language models (LLMs) for their alignment with human narrative styles and potential gender biases.
Our findings indicate that, while these models generally produce text closely resembling human-authored content, variations in stylistic features suggest significant gender biases.
arXiv Detail & Related papers (2024-06-27T19:26:11Z)
- Categorical Syllogisms Revisited: A Review of the Logical Reasoning Abilities of LLMs for Analyzing Categorical Syllogism [62.571419297164645]
This paper provides a systematic overview of prior works on the logical reasoning ability of large language models for analyzing categorical syllogisms.
We first investigate all the possible variations for the categorical syllogisms from a purely logical perspective.
We then examine the underlying configurations (i.e., mood and figure) tested by the existing datasets.
arXiv Detail & Related papers (2024-06-26T21:17:20Z)
- LFED: A Literary Fiction Evaluation Dataset for Large Language Models [58.85989777743013]
We collect 95 literary fictions that are either originally written in Chinese or translated into Chinese, covering a wide range of topics across several centuries.
We define a question taxonomy with 8 question categories to guide the creation of 1,304 questions.
We conduct an in-depth analysis to ascertain how specific attributes of literary fictions (e.g., novel types, character numbers, the year of publication) impact LLM performance in evaluations.
arXiv Detail & Related papers (2024-05-16T15:02:24Z)
- ChatCite: LLM Agent with Human Workflow Guidance for Comparative Literature Summary [30.409552944905915]
ChatCite is an LLM agent with human workflow guidance for comparative literature summary.
The ChatCite agent outperformed other models in various dimensions in the experiments.
The literature summaries generated by ChatCite can also be directly used for drafting literature reviews.
arXiv Detail & Related papers (2024-03-05T01:13:56Z)
- Tapping the Potential of Large Language Models as Recommender Systems: A Comprehensive Framework and Empirical Analysis [91.5632751731927]
Large Language Models such as ChatGPT have showcased remarkable abilities in solving general tasks.
We propose a general framework for utilizing LLMs in recommendation tasks, focusing on the capabilities of LLMs as recommenders.
We analyze the impact of public availability, tuning strategies, model architecture, parameter scale, and context length on recommendation results.
arXiv Detail & Related papers (2024-01-10T08:28:56Z)
- Sentiment Analysis through LLM Negotiations [58.67939611291001]
A standard paradigm for sentiment analysis is to rely on a single LLM and make the decision in a single round.
This paper introduces a multi-LLM negotiation framework for sentiment analysis.
arXiv Detail & Related papers (2023-11-03T12:35:29Z)
- Sentiment Analysis in the Era of Large Language Models: A Reality Check [69.97942065617664]
This paper investigates the capabilities of large language models (LLMs) in performing various sentiment analysis tasks.
We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets.
arXiv Detail & Related papers (2023-05-24T10:45:25Z)
- RELIC: Retrieving Evidence for Literary Claims [29.762552250403544]
We use a large-scale dataset of 78K literary quotations to formulate the novel task of literary evidence retrieval.
We implement a RoBERTa-based dense passage retriever for this task that outperforms existing pretrained information retrieval baselines.
arXiv Detail & Related papers (2022-03-18T16:56:08Z)