SCALE: Towards Collaborative Content Analysis in Social Science with Large Language Model Agents and Human Intervention
- URL: http://arxiv.org/abs/2502.10937v1
- Date: Sun, 16 Feb 2025 00:19:07 GMT
- Title: SCALE: Towards Collaborative Content Analysis in Social Science with Large Language Model Agents and Human Intervention
- Authors: Chengshuai Zhao, Zhen Tan, Chau-Wai Wong, Xinyan Zhao, Tianlong Chen, Huan Liu
- Abstract summary: We introduce a novel multi-agent framework that effectively $\underline{\textbf{S}}$imulates $\underline{\textbf{C}}$ontent $\underline{\textbf{A}}$nalysis via $\underline{\textbf{L}}$arge language model (LLM) ag$\underline{\textbf{E}}$nts. It imitates key phases of content analysis, including text coding, collaborative discussion, and dynamic codebook evolution.
- Score: 50.07342730395946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Content analysis breaks down complex and unstructured texts into theory-informed numerical categories. Particularly, in social science, this process usually relies on multiple rounds of manual annotation, domain expert discussion, and rule-based refinement. In this paper, we introduce SCALE, a novel multi-agent framework that effectively $\underline{\textbf{S}}$imulates $\underline{\textbf{C}}$ontent $\underline{\textbf{A}}$nalysis via $\underline{\textbf{L}}$arge language model (LLM) ag$\underline{\textbf{E}}$nts. SCALE imitates key phases of content analysis, including text coding, collaborative discussion, and dynamic codebook evolution, capturing the reflective depth and adaptive discussions of human researchers. Furthermore, by integrating diverse modes of human intervention, SCALE is augmented with expert input to further enhance its performance. Extensive evaluations on real-world datasets demonstrate that SCALE achieves human-approximated performance across various complex content analysis tasks, offering an innovative potential for future social science research.
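To make the coding-discussion-intervention pipeline concrete, here is a minimal sketch of a SCALE-style loop. The `llm` callable, all prompts, and the three-coder majority rule are illustrative assumptions rather than the authors' implementation, and the dynamic codebook evolution phase is omitted for brevity.

```python
# Sketch of a SCALE-style content analysis loop: several LLM "coder" agents
# label a text independently, disagreement triggers a discussion round, and
# persistent disagreement is escalated to a human expert. Placeholder only.
from collections import Counter

def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call; plug in a real client here."""
    raise NotImplementedError

def code_text(text: str, codebook: dict[str, str], n_coders: int = 3) -> str:
    # Phase 1: independent text coding by several LLM agents
    votes = [
        llm(f"Codebook: {codebook}\nAssign exactly one code to:\n{text}").strip()
        for _ in range(n_coders)
    ]
    top, count = Counter(votes).most_common(1)[0]
    if count == n_coders:              # unanimous agreement: accept the code
        return top
    # Phase 2: collaborative discussion -- agents see each other's votes and re-code
    revised = [
        llm(f"Codebook: {codebook}\nOther agents voted {votes} on:\n{text}\n"
            "Reconsider and return one final code.").strip()
        for _ in range(n_coders)
    ]
    top, count = Counter(revised).most_common(1)[0]
    if count > n_coders // 2:          # majority reached after discussion
        return top
    # Phase 3: human intervention on persistent disagreement
    return input(f"Expert, please code {text!r} (votes: {revised}): ")
```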
Related papers
- Evaluating LLM-based Agents for Multi-Turn Conversations: A Survey [64.08485471150486]
This survey examines evaluation methods for large language model (LLM)-based agents in multi-turn conversational settings.
We systematically reviewed nearly 250 scholarly sources, capturing the state of the art from various venues of publication.
arXiv Detail & Related papers (2025-03-28T14:08:40Z) - Advancements in Natural Language Processing for Automatic Text Summarization [0.0]
The authors explore existing hybrid techniques that employ both extractive and abstractive methodologies.
The process of summarizing textual information remains significantly constrained by the intricate writing styles of diverse texts.
arXiv Detail & Related papers (2025-02-27T05:17:36Z) - BEYONDWORDS is All You Need: Agentic Generative AI based Social Media Themes Extractor [2.699900017799093]
Thematic analysis of social media posts provides important insight into public discourse.
Traditional methods often struggle to capture the complexity and nuance of unstructured, large-scale text data.
This study introduces a novel methodology for thematic analysis that integrates tweet embeddings from pre-trained language models.
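The general recipe behind such embedding-based pipelines can be sketched in a few lines; the encoder name, the example posts, and the cluster count below are arbitrary choices, not the study's actual components.

```python
# Sketch of embedding-based thematic analysis: embed posts with a pre-trained
# sentence encoder, cluster the vectors, and read each cluster as a theme.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

posts = [
    "fares dropped again this week",
    "got deactivated with no explanation",
    "surge pricing never kicks in anymore",
    "support chat is just a bot now",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any pre-trained encoder works
emb = encoder.encode(posts, normalize_embeddings=True)
labels = KMeans(n_clusters=2, n_init="auto", random_state=0).fit_predict(emb)
for k in set(labels):
    print(f"theme {k}:", [p for p, lab in zip(posts, labels) if lab == k])
```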
arXiv Detail & Related papers (2025-02-26T18:18:37Z) - GeAR: Generation Augmented Retrieval [82.20696567697016]
Document retrieval techniques form the foundation for the development of large-scale information systems. The prevailing methodology is to construct a bi-encoder and compute the semantic similarity. We propose a new method, $\textbf{Ge}$neration $\textbf{A}$ugmented $\textbf{R}$etrieval, that incorporates well-designed fusion and decoding modules.
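For reference, the bi-encoder baseline the snippet contrasts against looks roughly like the sketch below; the model choice and example texts are assumptions.

```python
# Sketch of a bi-encoder retrieval baseline: encode query and documents
# separately, then rank documents by cosine similarity to the query.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["LLM agents can simulate content analysis.",
        "Retrieval augments generation with documents."]
q = model.encode("retrieval with generation", convert_to_tensor=True)
d = model.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(q, d)[0]      # similarity of the query vs. each document
print(docs[int(scores.argmax())])   # best-matching document
```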
arXiv Detail & Related papers (2025-01-06T05:29:00Z) - DEMO: Reframing Dialogue Interaction with Fine-grained Element Modeling [73.08187964426823]
Large language models (LLMs) have made dialogue one of the central modes of human-machine interaction. Despite the large volume of dialogue-related studies, there is a lack of a benchmark that encompasses comprehensive dialogue elements. We introduce a new research task, $\textbf{D}$ialogue $\textbf{E}$lement $\textbf{MO}$deling, including $\textit{Element Awareness}$ and $\textit{Dialogue Agent Interaction}$.
arXiv Detail & Related papers (2024-12-06T10:01:38Z) - Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions [62.0123588983514]
Large Language Models (LLMs) have demonstrated wide-ranging applications across various fields.
We reformulate the peer-review process as a multi-turn, long-context dialogue, incorporating distinct roles for authors, reviewers, and decision makers.
We construct a comprehensive dataset containing 26,841 papers with 92,017 reviews collected from multiple sources.
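A minimal sketch of the role-based dialogue reformulation, with illustrative field names rather than the dataset's actual schema:

```python
# Sketch: a peer-review thread becomes an ordered list of (role, text) turns,
# so it can be treated as one long-context, multi-turn dialogue.
from dataclasses import dataclass

@dataclass
class Turn:
    role: str   # "author", "reviewer", or "decision_maker"
    text: str

thread = [
    Turn("author", "<paper abstract and full text>"),
    Turn("reviewer", "The method is novel but the evaluation is thin."),
    Turn("author", "We added two baselines in Section 5."),
    Turn("decision_maker", "Accept with minor revisions."),
]
```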
arXiv Detail & Related papers (2024-06-09T08:24:17Z) - QuaLLM: An LLM-based Framework to Extract Quantitative Insights from Online Forums [10.684484559041284]
This study introduces QuaLLM, a novel framework to analyze and extract quantitative insights from text data on online forums. We applied this framework to analyze over one million comments from two of Reddit's rideshare worker communities. We uncover significant worker concerns regarding AI and algorithmic platform decisions, responding to regulatory calls about worker insights.
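One plausible shape for such a pipeline is sketched below, with a placeholder `llm` client and an assumed category list; the framework's actual prompts and taxonomy are not given here.

```python
# Sketch of a QuaLLM-style pass: an LLM tags each comment with one concern
# category, and the tags are aggregated into counts for quantitative analysis.
from collections import Counter

CATEGORIES = ["AI/algorithmic decisions", "pay", "safety", "other"]

def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion client

def tag(comment: str) -> str:
    reply = llm(f"Pick exactly one category from {CATEGORIES} for:\n{comment}")
    return reply.strip()

def quantify(comments: list[str]) -> Counter:
    # e.g. Counter({'pay': 412, 'AI/algorithmic decisions': 305, ...})
    return Counter(tag(c) for c in comments)
```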
arXiv Detail & Related papers (2024-05-08T18:20:03Z) - Not Enough Labeled Data? Just Add Semantics: A Data-Efficient Method for Inferring Online Health Texts [0.0]
We employ Abstract Meaning Representation (AMR) graphs as a means to model low-resource health NLP tasks.
AMRs are well suited to model online health texts as they represent multi-sentence inputs, abstract away from complex terminology, and model long-distance relationships.
Our experiments show that we can improve performance on 6 low-resource health NLP tasks by augmenting text embeddings with semantic graph embeddings.
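The augmentation step amounts to concatenating two vectors per sentence; the parser and graph encoder below are placeholders, since the paper's exact models are not given here.

```python
# Sketch of the augmentation idea: join a text embedding with an embedding of
# the sentence's AMR graph, and feed the combined vector to a classifier.
import numpy as np

def text_embed(sentence: str) -> np.ndarray:
    raise NotImplementedError   # e.g., any pre-trained sentence encoder

def amr_graph_embed(sentence: str) -> np.ndarray:
    raise NotImplementedError   # parse to an AMR graph, then encode the graph

def features(sentence: str) -> np.ndarray:
    # semantic structure rides along with the text, which is the
    # data-efficiency trick the abstract describes
    return np.concatenate([text_embed(sentence), amr_graph_embed(sentence)])
```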
arXiv Detail & Related papers (2023-09-18T15:37:30Z) - Sequential annotations for naturally-occurring HRI: first insights [0.0]
We explain the methodology we developed for improving the interactions carried out by an embedded conversational agent.
We are creating a corpus of naturally-occurring interactions that will be made available to the community.
arXiv Detail & Related papers (2023-08-29T08:07:26Z) - Multi-Dimensional Evaluation of Text Summarization with In-Context Learning [79.02280189976562]
In this paper, we study the efficacy of large language models as multi-dimensional evaluators using in-context learning.
Our experiments show that in-context learning-based evaluators are competitive with learned evaluation frameworks for the task of text summarization.
We then analyze the effects of factors such as the selection and number of in-context examples on performance.
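A bare-bones version of an in-context-learning evaluator follows; the prompt wording, the bracketed placeholders, and the 1-5 scale are assumptions, not the paper's setup.

```python
# Sketch of an in-context evaluator: a few scored examples sit in the prompt,
# then the model rates a new summary on one quality dimension.
def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion client

FEW_SHOT = (
    "Rate summary coherence from 1 (worst) to 5 (best).\n"
    "Source: <doc A> Summary: <summary A> Score: 2\n"
    "Source: <doc B> Summary: <summary B> Score: 5\n"
)

def rate_coherence(source: str, summary: str) -> int:
    reply = llm(FEW_SHOT + f"Source: {source} Summary: {summary} Score:")
    return int(reply.strip()[0])   # parse the leading digit as the score
```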
arXiv Detail & Related papers (2023-06-01T23:27:49Z) - Large Language Models are Diverse Role-Players for Summarization Evaluation [82.31575622685902]
A document summary's quality can be assessed by human annotators on various criteria, both objective ones like grammar and correctness, and subjective ones like informativeness, succinctness, and appeal.
Most automatic evaluation methods, such as BLEU/ROUGE, may not be able to adequately capture these dimensions.
We propose a new LLM-based framework that comprehensively evaluates generated text against reference text from both objective and subjective aspects.
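The role-playing idea can be sketched as prompting the same model under different personas and averaging their scores; the personas, prompt wording, and 1-5 scale below are illustrative assumptions.

```python
# Sketch of role-based LLM evaluation: one summary is scored under several
# personas covering objective and subjective criteria, then scores are averaged.
def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion client

PERSONAS = [
    "a strict grammarian checking correctness",
    "a busy reader judging informativeness and succinctness",
    "a casual reader judging appeal",
]

def role_play_score(reference: str, generated: str) -> float:
    scores = []
    for persona in PERSONAS:
        reply = llm(f"You are {persona}. Compare the generated summary to the "
                    f"reference and give a 1-5 score.\nReference: {reference}\n"
                    f"Generated: {generated}\nScore:")
        scores.append(int(reply.strip()[0]))
    return sum(scores) / len(scores)
```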
arXiv Detail & Related papers (2023-03-27T10:40:59Z) - ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering [70.6359636116848]
We propose a new large-scale dataset, ConvFinQA, to study the chain of numerical reasoning in conversational question answering.
Our dataset poses a great challenge for modeling long-range, complex numerical reasoning paths in real-world conversations.
arXiv Detail & Related papers (2022-10-07T23:48:50Z)