Group size effects and collective misalignment in LLM multi-agent systems
- URL: http://arxiv.org/abs/2510.22422v1
- Date: Sat, 25 Oct 2025 19:45:45 GMT
- Title: Group size effects and collective misalignment in LLM multi-agent systems
- Authors: Ariel Flint, Luca Maria Aiello, Romualdo Pastor-Satorras, Andrea Baronchelli
- Abstract summary: We show that collective bias is a deeper phenomenon than previously assessed. We demonstrate that group size affects the dynamics in a non-linear way. These findings establish group size as a key driver of multi-agent dynamics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent systems of large language models (LLMs) are rapidly expanding across domains, introducing dynamics not captured by single-agent evaluations. Yet, existing work has mostly contrasted the behavior of a single agent with that of a collective of fixed size, leaving open a central question: how does group size shape dynamics? Here, we move beyond this dichotomy and systematically explore outcomes across the full range of group sizes. We focus on multi-agent misalignment, building on recent evidence that interacting LLMs playing a simple coordination game can generate collective biases absent in individual models. First, we show that collective bias is a deeper phenomenon than previously assessed: interaction can amplify individual biases, introduce new ones, or override model-level preferences. Second, we demonstrate that group size affects the dynamics in a non-linear way, revealing model-dependent dynamical regimes. Finally, we develop a mean-field analytical approach and show that, above a critical population size, simulations converge to deterministic predictions that expose the basins of attraction of competing equilibria. These findings establish group size as a key driver of multi-agent dynamics and highlight the need to consider population-level effects when deploying LLM-based systems at scale.
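The abstract's central comparison, stochastic finite-size simulations of a simple two-convention coordination game versus a deterministic mean-field limit, can be illustrated with a minimal sketch. The following code is hypothetical and is not the paper's model or implementation: it assumes a naming-game-style rule in which a hearer adopts convention "A" with probability `bias` on any mismatch, and pairs the agent-based simulation with the corresponding mean-field ODE, dx/dt ∝ x(1-x)(2·bias-1).

```python
# Hypothetical sketch (NOT the authors' code or game): a two-convention
# coordination model. Each round two random agents meet; on a mismatch the
# hearer adopts convention "A" with probability `bias`, else "B".
import random

def simulate(n_agents, n_rounds, bias=0.55, seed=0):
    """Stochastic agent-based run; returns the final fraction holding A."""
    rng = random.Random(seed)
    # start from an even split between the two competing conventions
    state = ["A"] * (n_agents // 2) + ["B"] * (n_agents - n_agents // 2)
    for _ in range(n_rounds):
        i, j = rng.sample(range(n_agents), 2)
        if state[i] != state[j]:
            # hearer j resolves the mismatch toward A with probability `bias`
            state[j] = "A" if rng.random() < bias else "B"
    return state.count("A") / n_agents

def mean_field_fraction(n_steps, bias=0.55, x0=0.5, dt=0.01):
    """Deterministic mean-field counterpart: dx/dt = x(1-x)(2*bias - 1).
    Mismatches occur at rate ~ x(1-x) and drift toward A when bias > 1/2."""
    x = x0
    for _ in range(n_steps):
        x += dt * x * (1 - x) * (2 * bias - 1)
    return x
```

For small populations the stochastic run fluctuates and can fixate on either convention, while for large `n_agents` its trajectory tracks `mean_field_fraction`, which is the qualitative finite-size-versus-mean-field contrast the paper formalizes.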
Related papers
- SIGMAS: Second-Order Interaction-based Grouping for Overlapping Multi-Agent Swarms [12.265270375417517]
We introduce the novel task of group prediction in overlapping multi-agent swarms. We propose SIGMAS (Second-order Interaction-based Grouping for Multi-Agent Swarms), a self-supervised framework for group inference. We show that SIGMAS accurately recovers latent group structures and remains robust under simultaneously overlapping swarm dynamics.
arXiv Detail & Related papers (2026-02-23T01:43:56Z)
- Diffusion Forcing for Multi-Agent Interaction Sequence Modeling [52.769202433667125]
MAGNet is a unified autoregressive diffusion framework for multi-agent motion generation. It supports a wide range of interaction tasks through flexible conditioning and sampling. It captures both tightly synchronized activities and loosely structured social interactions.
arXiv Detail & Related papers (2025-12-19T18:59:02Z)
- The Social Cost of Intelligence: Emergence, Propagation, and Amplification of Stereotypical Bias in Multi-Agent Systems [20.359327253718718]
Bias in large language models (LLMs) remains a persistent challenge, manifesting in stereotyping and unfair treatment across social groups. We study how internal specialization, underlying LLMs, and inter-agent communication protocols influence bias robustness, propagation, and amplification. Our findings highlight critical factors shaping fairness and resilience in multi-agent LLM systems.
arXiv Detail & Related papers (2025-10-13T02:56:42Z)
- Emergent Coordination in Multi-Agent Language Models [2.504366738288215]
We introduce an information-theoretic framework to test whether multi-agent systems show signs of higher-order structure. This information decomposition lets us measure whether dynamical emergence is present in multi-agent LLM systems. We apply our framework to experiments using a simple guessing game without direct agent communication.
arXiv Detail & Related papers (2025-10-05T11:26:41Z)
- Herd Behavior: Investigating Peer Influence in LLM-based Multi-Agent Systems [7.140644659869317]
We investigate the dynamics of peer influence in multi-agent systems based on Large Language Models (LLMs). We show that the gap between self-confidence and perceived confidence in peers significantly impacts an agent's likelihood to conform. We find that the format in which peer information is presented plays a critical role in modulating the strength of herd behavior.
arXiv Detail & Related papers (2025-05-27T12:12:56Z)
- Multi-Agent Collaboration via Evolving Orchestration [55.574417128944226]
Large language models (LLMs) have achieved remarkable results across diverse downstream tasks, but their monolithic nature restricts scalability and efficiency in complex problem-solving. We propose a puppeteer-style paradigm for LLM-based multi-agent collaboration, where a centralized orchestrator ("puppeteer") dynamically directs agents ("puppets") in response to evolving task states. Experiments on closed- and open-domain scenarios show that this method achieves superior performance with reduced computational costs.
arXiv Detail & Related papers (2025-05-26T07:02:17Z)
- HiddenBench: Assessing Collective Reasoning in Multi-Agent LLMs via Hidden Profile Tasks [12.203366267017737]
We introduce HiddenBench, the first benchmark for evaluating collective reasoning in multi-agent LLMs. To ground the benchmark, we formalize the paradigm with custom tasks and show that GPT-4.1 groups fail to integrate distributed knowledge. We then construct the full benchmark, spanning 65 tasks drawn from custom designs, prior human studies, and automatic generation.
arXiv Detail & Related papers (2025-05-15T19:22:54Z)
- MF-LLM: Simulating Population Decision Dynamics via a Mean-Field Large Language Model Framework [53.82097200295448]
Mean-Field LLM (MF-LLM) is the first framework to incorporate mean-field theory into social simulation. MF-LLM models bidirectional interactions between individuals and the population through an iterative process. IB-Tune is a novel fine-tuning method inspired by the Information Bottleneck principle.
arXiv Detail & Related papers (2025-04-30T12:41:51Z)
- Discovering group dynamics in coordinated time series via hierarchical recurrent switching-state models [5.250223406627639]
We seek a computationally efficient model for a collection of time series arising from multiple interacting entities (a.k.a. "agents"). Recent models of temporal patterns across individuals fail to incorporate explicit system-level collective behavior that can influence the trajectories of individual entities. We employ a latent system-level discrete-state Markov chain that provides top-down influence on latent entity-level chains, which in turn govern the emission of each observed time series.
arXiv Detail & Related papers (2024-01-26T16:06:01Z)
- Rethinking Trajectory Prediction via "Team Game" [118.59480535826094]
We present a novel formulation for multi-agent trajectory prediction, which explicitly introduces the concept of interactive group consensus.
On two multi-agent settings, i.e. team sports and pedestrians, the proposed framework consistently achieves superior performance compared to existing methods.
arXiv Detail & Related papers (2022-10-17T07:16:44Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
- GEM: Group Enhanced Model for Learning Dynamical Control Systems [78.56159072162103]
We build effective dynamical models that are amenable to sample-based learning.
We show that learning the dynamics on a Lie algebra vector space is more effective than learning a direct state transition model.
This work sheds light on a connection between learning of dynamics and Lie group properties, which opens doors for new research directions.
arXiv Detail & Related papers (2021-04-07T01:08:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.