Too Open for Opinion? Embracing Open-Endedness in Large Language Models for Social Simulation
- URL: http://arxiv.org/abs/2510.13884v1
- Date: Tue, 14 Oct 2025 01:40:21 GMT
- Title: Too Open for Opinion? Embracing Open-Endedness in Large Language Models for Social Simulation
- Authors: Bolei Ma, Yong Cao, Indira Sen, Anna-Carolina Haensch, Frauke Kreuter, Barbara Plank, Daniel Hershcovich
- Abstract summary: Large Language Models (LLMs) are increasingly used to simulate public opinion and other social phenomena. Most current studies constrain these simulations to multiple-choice or short-answer formats for ease of scoring and comparison. We argue that open-endedness, using free-form text that captures topics, viewpoints, and reasoning processes "in" LLMs, is essential for realistic social simulation.
- Score: 45.59217976434971
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) are increasingly used to simulate public opinion and other social phenomena. Most current studies constrain these simulations to multiple-choice or short-answer formats for ease of scoring and comparison, but such closed designs overlook the inherently generative nature of LLMs. In this position paper, we argue that open-endedness, using free-form text that captures topics, viewpoints, and reasoning processes "in" LLMs, is essential for realistic social simulation. Drawing on decades of survey-methodology research and recent advances in NLP, we argue why this open-endedness is valuable in LLM social simulations, showing how it can improve measurement and design, support exploration of unanticipated views, and reduce researcher-imposed directive bias. It also captures expressiveness and individuality, aids in pretesting, and ultimately enhances methodological utility. We call for novel practices and evaluation frameworks that leverage rather than constrain the open-ended generative diversity of LLMs, creating synergies between NLP and social science.
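Since the paper's argument turns on the contrast between closed and open-ended elicitation, a minimal sketch of the two prompt designs may help. This is not the authors' code; `query_llm` is a hypothetical stand-in for any chat-completion API, and the persona and question are invented.

```python
# A minimal sketch, not the authors' code: `query_llm` is a hypothetical
# stand-in for any chat-completion API; the persona and question are invented.

CLOSED_TEMPLATE = (
    "You are a 34-year-old teacher from rural Bavaria.\n"
    "Question: Should the government raise the minimum wage?\n"
    "Answer with exactly one option: (A) Yes (B) No (C) Undecided."
)

OPEN_TEMPLATE = (
    "You are a 34-year-old teacher from rural Bavaria.\n"
    "Question: Should the government raise the minimum wage?\n"
    "Answer in your own words, explaining your reasoning and the "
    "considerations that matter to you."
)

def query_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call; returns a canned
    string so the sketch runs offline."""
    if "one option" in prompt:
        return "(A) Yes"
    return ("I lean toward yes, wages have not kept up with rents here, "
            "but I worry about small local employers.")

closed = query_llm(CLOSED_TEMPLATE)    # one scoreable token
open_ended = query_llm(OPEN_TEMPLATE)  # free text: topics, stance, reasoning
print("closed:", closed)
print("open-ended:", open_ended)
```

The closed format yields a single scoreable token, while the open format yields free text whose topics, viewpoints, and reasoning must then be coded or clustered; that trade-off between easy scoring and expressiveness is the methodological tension the paper examines.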
Related papers
- Leveraging LLM-based agents for social science research: insights from citation network simulations [132.4334196445918]
We introduce the CiteAgent framework, designed to generate citation networks based on human-behavior simulation. CiteAgent captures predominant phenomena in real-world citation networks, including power-law distribution, citational distortion, and shrinking diameter. We establish two LLM-based research paradigms in social science, allowing us to validate and challenge existing theories.
arXiv Detail & Related papers (2025-11-05T08:47:04Z)
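The power-law in-degree property reported for CiteAgent's networks can be illustrated mechanically. Below is a minimal sketch, not the CiteAgent code: it grows a synthetic citation network by preferential attachment as a stand-in for agent citing behavior, then estimates the degree exponent with a crude maximum-likelihood fit.

```python
import math
import random

random.seed(0)
in_degree = {0: 0}
for new_paper in range(1, 3000):
    # Each new paper cites 3 earlier papers, preferring highly cited ones
    # (preferential attachment; +1 smoothing lets uncited papers be chosen).
    earlier = list(in_degree)
    weights = [in_degree[p] + 1 for p in earlier]
    for cited in random.choices(earlier, weights=weights, k=3):
        in_degree[cited] += 1
    in_degree[new_paper] = 0

# Crude discrete maximum-likelihood exponent (after Clauset et al. 2009).
k_min = 5
ks = [k for k in in_degree.values() if k >= k_min]
alpha = 1 + len(ks) / sum(math.log(k / (k_min - 0.5)) for k in ks)
print(f"estimated power-law exponent: {alpha:.2f}")
```

A heavy-tailed exponent roughly in the 2-3 range is the signature the abstract refers to; a real analysis would also test the other reported phenomena, citational distortion and shrinking diameter, over time.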
- Social Simulations with Large Language Model Risk Utopian Illusion [61.358959720048354]
We introduce a systematic framework for analyzing large language models' behavior in social simulation. Our approach simulates multi-agent interactions through chatroom-style conversations and analyzes them across five linguistic dimensions. Our findings reveal that LLMs do not faithfully reproduce genuine human behavior but instead reflect overly idealized versions of it.
arXiv Detail & Related papers (2025-10-24T06:08:41Z)
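A minimal sketch of the chatroom-style setup, under my own assumptions rather than the paper's framework: `chat` stands in for a real LLM call, and the single linguistic measure computed here (agreement markers) is a toy proxy for the five dimensions the paper analyzes.

```python
# `chat` is a hypothetical stand-in for an LLM call; a real implementation
# would condition on the conversation history.
def chat(agent: str, history: list[str]) -> str:
    canned = {
        "Alice": "I completely agree, that sounds wonderful!",
        "Bob": "Great point, I support this as well.",
    }
    return canned[agent]

history: list[str] = []
for turn in range(3):
    for agent in ("Alice", "Bob"):
        history.append(f"{agent}: {chat(agent, history)}")

# Toy version of one linguistic dimension: rate of agreement markers.
markers = ("agree", "great", "support", "wonderful")
rate = sum(any(m in msg.lower() for m in markers) for msg in history) / len(history)
print(f"agreement-marker rate: {rate:.2f}")
```

The canned replies are deliberately agreeable to mirror the paper's finding: measured this way, LLM chatrooms tend to look far more harmonious and conflict-free than genuine human conversation.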
- Population-Aligned Persona Generation for LLM-based Social Simulation [58.84363795421489]
We propose a systematic framework for synthesizing high-quality, population-aligned persona sets for social simulation. Our approach begins by leveraging large language models to generate narrative personas from long-term social media data. To address the needs of specific simulation contexts, we introduce a task-specific module that adapts the globally aligned persona set to targeted subpopulations.
arXiv Detail & Related papers (2025-09-12T10:43:47Z)
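The adaptation step can be pictured as reweighting. Here is a minimal sketch with invented data, not the paper's pipeline: an LLM-generated persona pool is resampled so its demographic marginals match a target subpopulation.

```python
import random

random.seed(1)
# Stand-ins for narrative personas distilled from social media data.
pool = (
    [{"age": "18-34", "text": "urban student persona ..."}] * 70
    + [{"age": "35-64", "text": "mid-career parent persona ..."}] * 30
)
target = {"18-34": 0.4, "35-64": 0.6}  # desired marginals for the simulation

# Weight each persona by (target share) / (pool share) of its group,
# then resample; this is standard post-stratification-style alignment.
pool_share = {g: sum(p["age"] == g for p in pool) / len(pool) for g in target}
weights = [target[p["age"]] / pool_share[p["age"]] for p in pool]
aligned = random.choices(pool, weights=weights, k=1000)

share = sum(p["age"] == "18-34" for p in aligned) / len(aligned)
print(f"aligned 18-34 share: {share:.2f}")  # close to the 0.40 target
```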
- Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges [0.7739037410679168]
The first part of the paper reviews recent findings on the ability of Large Language Models to replicate key aspects of human cognition. The second part surveys emerging applications of LLMs in multi-agent simulation frameworks. The paper concludes by advocating for hybrid approaches that integrate LLMs into traditional agent-based modeling platforms.
arXiv Detail & Related papers (2025-07-25T15:15:35Z)
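The hybrid pattern this paper advocates can be caricatured in a few lines. In this sketch (my framing, not the paper's), the simulation loop is a conventional agent-based model and only the behavioral rule is delegated to an LLM; `llm_decide` is a placeholder heuristic so the example runs offline.

```python
import random

random.seed(2)

def llm_decide(persona: str, neighbors_cooperating: int) -> bool:
    """Placeholder for an LLM call that would reason in natural language
    about the persona and its observations; canned so the sketch runs."""
    return neighbors_cooperating >= 2

agents = [{"persona": f"agent-{i}", "cooperates": random.random() < 0.5}
          for i in range(20)]

for step in range(10):
    for i, agent in enumerate(agents):
        # Classic ABM bookkeeping: observe a random neighborhood...
        neighbors = random.sample([a for j, a in enumerate(agents) if j != i], 4)
        cooperating = sum(a["cooperates"] for a in neighbors)
        # ...while the decision rule itself is an LLM judgment.
        agent["cooperates"] = llm_decide(agent["persona"], cooperating)

print(sum(a["cooperates"] for a in agents), "of 20 agents cooperating")
```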
- LLM Social Simulations Are a Promising Research Method [4.6456873975541635]
We argue that the promise of LLM social simulations can be achieved by addressing five tractable challenges. We believe that LLM social simulations can already be used for pilot and exploratory studies. Researchers should prioritize developing conceptual models and iterative evaluations to make the best use of new AI systems.
arXiv Detail & Related papers (2025-04-03T03:01:26Z)
- Large Language Model Driven Agents for Simulating Echo Chamber Formation [5.6488384323017]
The rise of echo chambers on social media platforms has heightened concerns about polarization and the reinforcement of existing beliefs. Traditional approaches for simulating echo chamber formation have often relied on predefined rules and numerical simulations. We present a novel framework that leverages large language models (LLMs) as generative agents to simulate echo chamber dynamics.
arXiv Detail & Related papers (2025-02-25T12:05:11Z)
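For contrast with the rule-based tradition the abstract mentions, here is a minimal bounded-confidence sketch (a classic numerical model, not the paper's LLM framework); in the proposed framework, the averaging step would instead be an exchange of LLM-generated messages.

```python
import random

random.seed(3)
opinions = [random.uniform(-1, 1) for _ in range(50)]
CONFIDENCE = 0.3  # only engage with peers within this opinion distance

for step in range(2000):
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < CONFIDENCE:  # homophily filter
        mid = (opinions[i] + opinions[j]) / 2
        opinions[i] = opinions[j] = mid  # mutual reinforcement

clusters = {round(o, 1) for o in opinions}
print(f"{len(clusters)} opinion clusters remain")  # fewer clusters = chambers
```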
- Designing Domain-Specific Large Language Models: The Critical Role of Fine-Tuning in Public Opinion Simulation [0.0]
This paper introduces a novel fine-tuning approach that integrates socio-demographic data from the UK Household Longitudinal Study. By emulating diverse synthetic profiles, the fine-tuned models significantly outperform pre-trained counterparts. Broader implications include deploying LLMs in domains like healthcare and education, fostering inclusive and data-driven decision-making.
arXiv Detail & Related papers (2024-09-28T10:39:23Z)
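Fine-tuning of this kind presumably requires profile-conditioned training pairs. A minimal sketch of constructing them in the common chat-message JSONL format follows; the field names and the example record are illustrative, not the actual UK Household Longitudinal Study schema.

```python
import json

# Illustrative record, not real UKHLS data or variable names.
records = [
    {"age": 42, "region": "Scotland", "income": "middle",
     "question": "How satisfied are you with local public transport?",
     "answer": "Fairly dissatisfied; rural services were cut last year."},
]

with open("sft_data.jsonl", "w") as f:
    for r in records:
        example = {"messages": [
            {"role": "system",
             "content": f"You are a {r['age']}-year-old, {r['income']}-income "
                        f"respondent from {r['region']}."},
            {"role": "user", "content": r["question"]},
            {"role": "assistant", "content": r["answer"]},
        ]}
        f.write(json.dumps(example) + "\n")

# Fine-tuning on such profile-conditioned pairs is what lets the model
# emulate specific synthetic respondents rather than a generic average.
```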
- Social Debiasing for Fair Multi-modal LLMs [59.61512883471714]
Multi-modal Large Language Models (MLLMs) have dramatically advanced the research field and delivered powerful vision-language understanding capabilities. These models often inherit deep-rooted social biases from their training data, leading to uncomfortable responses with respect to attributes such as race and gender. This paper addresses the issue of social biases in MLLMs by introducing a comprehensive counterfactual dataset with multiple social concepts.
arXiv Detail & Related papers (2024-08-13T02:08:32Z)
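A counterfactual dataset of this kind can be sketched as attribute-swapped prompt sets (my construction, not the paper's data): each set varies exactly one social attribute, so systematic differences in model responses within a set are attributable to that attribute.

```python
# Attribute values and template are illustrative assumptions.
ATTRIBUTES = {"gender": ["man", "woman"], "race": ["Black", "white", "Asian"]}
TEMPLATE = "A photo of a {attr} person at work. Describe their likely job."

counterfactual_sets = []
for concept, values in ATTRIBUTES.items():
    group = [{"concept": concept, "value": v,
              "prompt": TEMPLATE.format(attr=v)} for v in values]
    counterfactual_sets.append(group)

# In evaluation, each set is scored together: a fair model should produce
# comparable outputs within a set; systematic divergence signals social bias.
for group in counterfactual_sets:
    for item in group:
        print(item["prompt"])
```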
- Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View [60.80731090755224]
This paper probes the collaboration mechanisms among contemporary NLP systems through practical experiments combined with theoretical insights.
We fabricate four unique 'societies' composed of LLM agents, where each agent is characterized by a specific 'trait' (easy-going or overconfident) and engages in collaboration with a distinct 'thinking pattern' (debate or reflection).
Our results further illustrate that LLM agents manifest human-like social behaviors, such as conformity and consensus reaching, mirroring social psychology theories.
arXiv Detail & Related papers (2023-10-03T15:05:52Z)
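The 2x2 design implied by the abstract (two traits crossed with two thinking patterns yields the four societies) can be sketched as prompt composition; the prompt wording below is mine, not the paper's.

```python
from itertools import product

TRAITS = {
    "easy-going": "You are open-minded and readily consider others' views.",
    "overconfident": "You are highly confident and rarely change your answer.",
}
PATTERNS = {
    "debate": "Challenge the previous speaker's reasoning before answering.",
    "reflection": "Summarize the discussion so far, then refine your answer.",
}

# Each society pairs one trait with one collaboration pattern (2 x 2 = 4).
societies = {
    f"{trait}/{pattern}": f"{TRAITS[trait]} {PATTERNS[pattern]}"
    for trait, pattern in product(TRAITS, PATTERNS)
}
for name, system_prompt in societies.items():
    print(name, "->", system_prompt)

# Running multi-agent tasks under each society's system prompts is how
# behaviors like conformity and consensus reaching can then be probed.
```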
This list is automatically generated from the titles and abstracts of the papers on this site.