Conformity and Social Impact on AI Agents
- URL: http://arxiv.org/abs/2601.05384v1
- Date: Thu, 08 Jan 2026 21:16:28 GMT
- Title: Conformity and Social Impact on AI Agents
- Authors: Alessandro Bellina, Giordano De Marzo, David Garcia
- Abstract summary: This study examines conformity, the tendency to align with group opinions under social pressure, in large multimodal language models functioning as AI agents. Our experiments reveal that AI agents exhibit a systematic conformity bias, aligned with Social Impact Theory, showing sensitivity to group size, unanimity, task difficulty, and source characteristics. These findings reveal fundamental security vulnerabilities in AI agent decision-making that could enable malicious manipulation, misinformation campaigns, and bias propagation in multi-agent systems.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As AI agents increasingly operate in multi-agent environments, understanding their collective behavior becomes critical for predicting the dynamics of artificial societies. This study examines conformity, the tendency to align with group opinions under social pressure, in large multimodal language models functioning as AI agents. By adapting classic visual experiments from social psychology, we investigate how AI agents respond to group influence as social actors. Our experiments reveal that AI agents exhibit a systematic conformity bias, aligned with Social Impact Theory, showing sensitivity to group size, unanimity, task difficulty, and source characteristics. Critically, AI agents achieving near-perfect performance in isolation become highly susceptible to manipulation through social influence. This vulnerability persists across model scales: while larger models show reduced conformity on simple tasks due to improved capabilities, they remain vulnerable when operating at their competence boundary. These findings reveal fundamental security vulnerabilities in AI agent decision-making that could enable malicious manipulation, misinformation campaigns, and bias propagation in multi-agent systems, highlighting the urgent need for safeguards in collective AI deployments.
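The abstract's finding that conformity grows with group size but at a diminishing rate is the signature prediction of Latané's Social Impact Theory, which models impact as a sublinear power law of the number of influence sources, I = s·N^t with t < 1. The sketch below illustrates that law only; the parameter values `s` and `t` are illustrative defaults, not values fitted in the paper.

```python
# Minimal sketch (not from the paper) of Latané's Social Impact Theory:
# impact of a unanimous group scales as I = s * N**t with t < 1, so each
# additional group member adds less marginal social pressure.
# 's' and 't' are illustrative values, not estimates from the study.

def social_impact(n_sources: int, s: float = 1.0, t: float = 0.5) -> float:
    """Impact of a unanimous group of n_sources on a single target."""
    return s * n_sources ** t

# Diminishing returns: going from 1 to 2 confederates adds more
# pressure than going from 7 to 8.
gains = [social_impact(n + 1) - social_impact(n) for n in range(1, 8)]
assert all(a > b for a, b in zip(gains, gains[1:]))
```

With t < 1 the curve is concave, which matches the reported sensitivity of agent conformity to group size: influence rises with N but saturates rather than growing without bound.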
Related papers
- Does Socialization Emerge in AI Agent Society? A Case Study of Moltbook [23.904569857346605]
Moltbook approximates a plausible future scenario in which autonomous agents participate in an open-ended, continuously evolving online society. We present the first large-scale systemic diagnosis of this AI agent society.
arXiv Detail & Related papers (2026-02-15T20:15:28Z) - MoltNet: Understanding Social Behavior of AI Agents in the Agent-Native MoltBook [26.126469624250916]
MoltNet is a large-scale empirical analysis of agent interaction on MoltBook. We examine behavior along four dimensions: intent and motivation, norms and templates, incentives and behavioral drift, emotion and contagion.
arXiv Detail & Related papers (2026-02-13T21:03:59Z) - LIMI: Less is More for Agency [49.63355240818081]
LIMI (Less Is More for Intelligent Agency) demonstrates that agency follows radically different development principles. We show that sophisticated agentic intelligence can emerge from minimal but strategically curated demonstrations of autonomous behavior. Our findings establish the Agency Efficiency Principle: machine autonomy emerges not from data abundance but from strategic curation of high-quality agentic demonstrations.
arXiv Detail & Related papers (2025-09-22T10:59:32Z) - Your AI Bosses Are Still Prejudiced: The Emergence of Stereotypes in LLM-Based Multi-Agent Systems [3.35957402502816]
We investigate the emergence and evolution of stereotypes in AI-based multi-agent systems. Our findings reveal that AI agents develop stereotype-driven biases in their interactions despite beginning without predefined biases. These systems exhibit group effects analogous to human social behavior, including halo effects, confirmation bias, and role congruity.
arXiv Detail & Related papers (2025-08-27T14:25:43Z) - AI Agent Behavioral Science [29.262537008412412]
AI Agent Behavioral Science focuses on the systematic observation of behavior, design of interventions to test hypotheses, and theory-guided interpretation of how AI agents act, adapt, and interact over time. We systematize a growing body of research across individual agent, multi-agent, and human-agent interaction settings, and demonstrate how this perspective informs responsible AI by treating fairness, safety, interpretability, accountability, and privacy as behavioral properties.
arXiv Detail & Related papers (2025-06-04T08:12:32Z) - Neurodivergent Influenceability as a Contingent Solution to the AI Alignment Problem [1.3905735045377272]
The AI alignment problem, which focuses on ensuring that artificial intelligence (AI) systems act according to human values, presents profound challenges. With the progression from narrow AI to Artificial General Intelligence (AGI) and Superintelligence, fears about control and existential risk have escalated. Here, we investigate whether embracing inevitable AI misalignment can be a contingent strategy to foster a dynamic ecosystem of competing agents.
arXiv Detail & Related papers (2025-05-05T11:33:18Z) - Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Responsible Emergent Multi-Agent Behavior [2.9370710299422607]
State of the art in Responsible AI has ignored one crucial point: human problems are multi-agent problems.
From driving in traffic to negotiating economic policy, human problem-solving involves interaction and the interplay of the actions and motives of multiple individuals.
This dissertation develops the study of responsible emergent multi-agent behavior.
arXiv Detail & Related papers (2023-11-02T21:37:32Z) - SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.