Hierarchical Narrative Analysis: Unraveling Perceptions of Generative AI
- URL: http://arxiv.org/abs/2409.11032v3
- Date: Mon, 11 Nov 2024 12:50:44 GMT
- Title: Hierarchical Narrative Analysis: Unraveling Perceptions of Generative AI
- Authors: Riona Matsuoka, Hiroki Matsumoto, Takahiro Yoshida, Tomohiro Watanabe, Ryoma Kondo, Ryohei Hisano
- Abstract summary: We propose a method that leverages large language models (LLMs) to extract and organize these structures into a hierarchical framework.
We validate this approach by analyzing public opinions on generative AI collected by Japan's Agency for Cultural Affairs.
Our analysis provides clearer visualization of the factors influencing divergent opinions on generative AI, offering deeper insights into the structures of agreement and disagreement.
- Score: 1.1874952582465599
- License:
- Abstract: Written texts reflect an author's perspective, making the thorough analysis of literature a key research method in fields such as the humanities and social sciences. However, conventional text mining techniques like sentiment analysis and topic modeling are limited in their ability to capture the hierarchical narrative structures that reveal deeper argumentative patterns. To address this gap, we propose a method that leverages large language models (LLMs) to extract and organize these structures into a hierarchical framework. We validate this approach by analyzing public opinions on generative AI collected by Japan's Agency for Cultural Affairs, comparing the narratives of supporters and critics. Our analysis provides clearer visualization of the factors influencing divergent opinions on generative AI, offering deeper insights into the structures of agreement and disagreement.
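The abstract describes the approach only at a high level. As an illustration of what "extracting and organizing narrative structures into a hierarchical framework" with an LLM could look like in practice, the following is a minimal sketch, not the authors' implementation: it prompts a chat model to decompose each public comment into a stance, a main claim, and supporting reasons, then groups the resulting claim trees by stance. The model name, prompt wording, and JSON schema are illustrative assumptions.

```python
# Minimal sketch: LLM-assisted extraction of a hierarchical claim/reason
# structure from opinion texts, grouped by stance (support vs. oppose).
# NOT the authors' implementation; model, prompt, and schema are assumed.
import json
from collections import defaultdict

from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Read the following public comment about generative AI. "
    "Return JSON with keys: 'stance' ('support' or 'oppose'), "
    "'main_claim' (one sentence), and 'reasons' (a list of short strings "
    "supporting the main claim).\n\nComment:\n{text}"
)

def extract_structure(text: str, model: str = "gpt-4o-mini") -> dict:
    """Ask the LLM to decompose one comment into a small claim tree."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        response_format={"type": "json_object"},  # request strict JSON output
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

def build_hierarchy(comments: list[str]) -> dict:
    """Group the extracted claim trees under their stance label."""
    hierarchy: dict[str, list[dict]] = defaultdict(list)
    for text in comments:
        node = extract_structure(text)
        hierarchy[node.get("stance", "unknown")].append(
            {"claim": node.get("main_claim", ""), "reasons": node.get("reasons", [])}
        )
    return dict(hierarchy)

if __name__ == "__main__":
    sample = [
        "Generative AI lowers the barrier to creative work, so it should be embraced.",
        "Generative AI threatens artists' livelihoods and is trained on data without consent.",
    ]
    print(json.dumps(build_hierarchy(sample), ensure_ascii=False, indent=2))
```

Comparing which reasons recur under each stance in the resulting trees is the kind of agreement/disagreement structure the abstract refers to.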
Related papers
- Comprehensive Study on Sentiment Analysis: From Rule-based to modern LLM based system [0.0]
This study examines the historical development of sentiment analysis, highlighting the transition from lexicon-based and pattern-based approaches to more sophisticated machine learning and deep learning models.
The paper reviews state-of-the-art approaches, identifies emerging trends, and outlines future research directions to advance the field.
arXiv Detail & Related papers (2024-09-16T04:44:52Z) - QuaLLM: An LLM-based Framework to Extract Quantitative Insights from Online Forums [10.684484559041284]
This study introduces QuaLLM, a novel framework to analyze and extract quantitative insights from text data on online forums.
We applied this framework to analyze over one million comments from two of Reddit's rideshare worker communities.
arXiv Detail & Related papers (2024-05-08T18:20:03Z) - Concept Induction: Analyzing Unstructured Text with High-Level Concepts Using LLooM [16.488296856867937]
We introduce concept induction, a computational process that produces high-level concepts from unstructured text.
We present LLooM, a concept induction algorithm that leverages large language models to iteratively synthesize sampled text.
We find that LLooM's concepts improve upon the prior art of topic models in terms of quality and data coverage.
arXiv Detail & Related papers (2024-04-18T15:26:02Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z) - Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that can see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
The models learned to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating the robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z) - Structured Like a Language Model: Analysing AI as an Automated Subject [0.0]
We argue that the intentional fictional projection of subjectivity onto large language models can yield an alternate frame through which AI behaviour can be analysed.
We trace a brief history of language models, culminating in the releases of systems that realise state-of-the-art natural language processing performance.
We conclude that critical media methods and psychoanalytic theory together offer a productive frame for grasping the powerful new capacities of AI-driven language systems.
arXiv Detail & Related papers (2022-12-08T21:58:43Z) - Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review [52.359007622096684]
Peer review is a key component of the publishing process in most fields of science.
Existing NLP studies focus on the analysis of individual texts, whereas editorial assistance often requires modeling interactions between pairs of texts.
arXiv Detail & Related papers (2022-04-22T16:39:38Z) - MAIR: Framework for mining relationships between research articles, strategies, and regulations in the field of explainable artificial intelligence [2.280298858971133]
It is essential to understand how regulation impacts research papers and AI-related policies.
This paper introduces a novel framework for joint analysis of AI-related policy documents and XAI research papers.
arXiv Detail & Related papers (2021-07-29T20:41:17Z) - Survey on Visual Sentiment Analysis [87.20223213370004]
This paper reviews pertinent publications and tries to present an exhaustive overview of the field of Visual Sentiment Analysis.
The paper also describes design principles for general Visual Sentiment Analysis systems from three main points of view.
A formalization of the problem is discussed, considering different levels of granularity, as well as the components that can affect the sentiment toward an image in different ways.
arXiv Detail & Related papers (2020-04-24T10:15:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.