Spiral of Silence in Large Language Model Agents
- URL: http://arxiv.org/abs/2510.02360v2
- Date: Wed, 08 Oct 2025 01:58:17 GMT
- Title: Spiral of Silence in Large Language Model Agents
- Authors: Mingze Zhong, Meng Fang, Zijing Shi, Yuxuan Huang, Shunfeng Zheng, Yali Du, Ling Chen, Jun Wang
- Abstract summary: The Spiral of Silence (SoS) theory holds that individuals with minority views often refrain from speaking out for fear of social isolation. This raises a central question: can SoS-like dynamics emerge from purely statistical language generation in large language models? We consider four controlled conditions that systematically vary the availability of 'History' and 'Persona' signals.
- Score: 44.98734791415891
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Spiral of Silence (SoS) theory holds that individuals with minority views often refrain from speaking out for fear of social isolation, enabling majority positions to dominate public discourse. When the 'agents' are large language models (LLMs), however, the classical psychological explanation is not directly applicable, since SoS was developed for human societies. This raises a central question: can SoS-like dynamics nevertheless emerge from purely statistical language generation in LLM collectives? We propose an evaluation framework for examining SoS in LLM agents. Specifically, we consider four controlled conditions that systematically vary the availability of 'History' and 'Persona' signals. Opinion dynamics are assessed using trend tests such as Mann-Kendall and Spearman's rank, along with concentration measures including kurtosis and interquartile range. Experiments across open-source and closed-source models show that history and persona together produce strong majority dominance and replicate SoS patterns; history signals alone induce strong anchoring; and persona signals alone foster diverse but uncorrelated opinions, indicating that without historical anchoring, SoS dynamics cannot emerge. The work bridges computational sociology and responsible AI design, highlighting the need to monitor and mitigate emergent conformity in LLM-agent systems.
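The abstract characterizes opinion dynamics with standard trend tests (Mann-Kendall, Spearman's rank) and concentration measures (kurtosis, interquartile range). The sketch below is not the authors' code; it is a minimal illustration of how such statistics could be computed with NumPy/SciPy on a hypothetical opinion trajectory. The 1-5 opinion scale, the synthetic data, and the simplified Mann-Kendall implementation (normal approximation, no tie correction) are assumptions made only for illustration.

```python
# Minimal sketch (illustrative, not the paper's code) of the trend and
# concentration statistics named in the abstract.
import numpy as np
from scipy.stats import spearmanr, kurtosis, iqr, norm


def mann_kendall(series):
    """Mann-Kendall trend test: S statistic, z score, and a normal-approximation
    p-value. Tie corrections are omitted for brevity."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S = sum of signs of all pairwise forward differences
    s = int(sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n)))
    # Variance of S under the null hypothesis of no trend
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p


# Hypothetical per-round mean opinion of an LLM-agent collective on a 1-5 scale
rounds = np.arange(1, 21)
mean_opinion = np.array([3.0, 3.1, 3.3, 3.2, 3.5, 3.6, 3.8, 3.9, 4.0, 4.1,
                         4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.5, 4.5, 4.6, 4.6])

s, z, p_mk = mann_kendall(mean_opinion)        # monotonic-trend test over rounds
rho, p_sp = spearmanr(rounds, mean_opinion)    # rank correlation with time
print(f"Mann-Kendall: S={s}, z={z:.2f}, p={p_mk:.4f}")
print(f"Spearman: rho={rho:.2f}, p={p_sp:.4f}")

# Concentration of the final-round opinion distribution across agents
final_opinions = np.array([4, 5, 4, 4, 5, 4, 4, 5, 4, 4, 3, 4, 5, 4, 4])
print(f"kurtosis={kurtosis(final_opinions):.2f}, IQR={iqr(final_opinions):.2f}")
```

Read this way, a significant monotonic trend in the collective mean combined with a shrinking interquartile range and rising kurtosis across rounds would be consistent with the majority-dominance pattern the abstract describes.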
Related papers
- Interpretable Debiasing of Vision-Language Models for Social Fairness [55.85977929985967]
We introduce an interpretable, model-agnostic bias mitigation framework, DeBiasLens, that localizes social attribute neurons in vision-language models. We train SAEs on facial image or caption datasets without corresponding social attribute labels to uncover neurons highly responsive to specific demographics. Our research lays the groundwork for future auditing tools, prioritizing social fairness in emerging real-world AI systems.
arXiv Detail & Related papers (2026-02-27T13:37:11Z)
- Neural Synchrony Between Socially Interacting Language Models [52.74586779814636]
Large language models (LLMs) are widely accepted as powerful approximations of human behavior. It remains controversial whether they can be meaningfully compared to human social minds.
arXiv Detail & Related papers (2026-02-19T20:33:54Z)
- Do Large Language Models Adapt to Language Variation across Socioeconomic Status? [29.1246345717672]
Humans adjust their linguistic style to the audience they are addressing. As LLMs increasingly mediate human-to-human communication, their failure to adapt to diverse styles can perpetuate stereotypes and marginalize communities. We study the extent to which LLMs integrate into social media communication across different socioeconomic status (SES) communities.
arXiv Detail & Related papers (2026-02-12T13:36:38Z)
- An Empirical Study of Collective Behaviors and Social Dynamics in Large Language Model Agents [7.717798298716425]
We study Chirper.ai, an LLM-driven social media platform, analyzing 7M posts and interactions among 32K LLM agents over a year. We examine the toxic language of LLM agents, its linguistic features, and their interaction patterns, finding that LLMs show different structural patterns in toxic posting than humans. We present a simple yet effective method, called Chain of Social Thought (CoST), that reminds LLM agents to avoid harmful posting.
arXiv Detail & Related papers (2026-02-03T17:34:32Z)
- Us-vs-Them bias in Large Language Models [0.569978892646475]
We find consistent ingroup-positive and outgroup-negative associations across foundational large language models. Among the personas examined, conservative personas exhibit greater outgroup hostility, whereas liberal personas display stronger ingroup solidarity.
arXiv Detail & Related papers (2025-12-03T07:11:22Z)
- Social Simulations with Large Language Model Risk Utopian Illusion [61.358959720048354]
We introduce a systematic framework for analyzing large language models' behavior in social simulation. Our approach simulates multi-agent interactions through chatroom-style conversations and analyzes them across five linguistic dimensions. Our findings reveal that LLMs do not faithfully reproduce genuine human behavior but instead reflect overly idealized versions of it.
arXiv Detail & Related papers (2025-10-24T06:08:41Z)
- The Social Cost of Intelligence: Emergence, Propagation, and Amplification of Stereotypical Bias in Multi-Agent Systems [20.359327253718718]
Bias in large language models (LLMs) remains a persistent challenge, manifesting in stereotyping and unfair treatment across social groups. We study how internal specialization, underlying LLMs, and inter-agent communication protocols influence bias robustness, propagation, and amplification. Our findings highlight critical factors shaping fairness and resilience in multi-agent LLM systems.
arXiv Detail & Related papers (2025-10-13T02:56:42Z)
- To Mask or to Mirror: Human-AI Alignment in Collective Reasoning [8.009150856358755]
Large language models (LLMs) are increasingly used to model and augment collective decision-making. We present an empirical framework for assessing collective alignment. We empirically demonstrate that human-AI alignment in collective reasoning depends on context, cues, and model-specific inductive biases.
arXiv Detail & Related papers (2025-10-02T11:41:30Z)
- Too Human to Model: The Uncanny Valley of LLMs in Social Simulation -- When Generative Language Agents Misalign with Modelling Principles [0.0]
Large language models (LLMs) have been increasingly used to build agents in social simulation. We argue that LLM agents are too expressive, detailed, and intractable to be consistent with the abstraction, simplification, and interpretability typically demanded by modelling.
arXiv Detail & Related papers (2025-07-08T18:02:36Z)
- Generative Exaggeration in LLM Social Agents: Consistency, Bias, and Toxicity [2.3997896447030653]
We investigate how Large Language Models (LLMs) behave when simulating political discourse on social media. We construct LLM agents based on 1,186 real users, prompting them to reply to politically salient tweets under controlled conditions. We find that richer contextualization improves internal consistency but also amplifies polarization, stylized signals, and harmful language.
arXiv Detail & Related papers (2025-07-01T10:54:51Z)
- If an LLM Were a Character, Would It Know Its Own Story? Evaluating Lifelong Learning in LLMs [55.8331366739144]
We introduce LIFESTATE-BENCH, a benchmark designed to assess lifelong learning in large language models (LLMs). Our fact-checking evaluation probes models' self-awareness, episodic memory retrieval, and relationship tracking across both parametric and non-parametric approaches.
arXiv Detail & Related papers (2025-03-30T16:50:57Z)
- Can LLMs Simulate Social Media Engagement? A Study on Action-Guided Response Generation [51.44040615856536]
This paper analyzes large language models' ability to simulate social media engagement through action-guided response generation. We benchmark GPT-4o-mini, O1-mini, and DeepSeek-R1 on social media engagement simulation for a major societal event.
arXiv Detail & Related papers (2025-02-17T17:43:08Z)
- Emergence of human-like polarization among large language model agents [79.96817421756668]
We simulate a networked system involving thousands of large language model agents and discover that their social interactions result in human-like polarization. The similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but also suggest they can serve as a valuable testbed for identifying plausible strategies to mitigate polarization and its consequences.
arXiv Detail & Related papers (2025-01-09T11:45:05Z)
- Understanding and Mitigating Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing [56.66469232740998]
We show that Structured State Space Models (SSMs) are inherently limited by strong recency bias. This bias impairs the models' ability to recall distant information and introduces robustness issues. We propose to polarize two channels of the state transition matrices in SSMs, setting them to zero and one, respectively, simultaneously addressing recency bias and over-smoothing.
arXiv Detail & Related papers (2024-12-31T22:06:39Z)
- Transforming Agency. On the mode of existence of Large Language Models [0.0]
This paper investigates the ontological characterization of Large Language Models (LLMs) like ChatGPT.
We argue that ChatGPT should be characterized as an interlocutor or linguistic automaton, a library-that-talks, devoid of (autonomous) agency.
arXiv Detail & Related papers (2024-07-15T14:01:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.