Network Formation and Dynamics Among Multi-LLMs
- URL: http://arxiv.org/abs/2402.10659v7
- Date: Sun, 05 Oct 2025 17:06:50 GMT
- Title: Network Formation and Dynamics Among Multi-LLMs
- Authors: Marios Papachristou, Yuan Yuan
- Abstract summary: Social networks profoundly influence how humans form opinions, exchange information, and organize collectively. As large language models (LLMs) are increasingly embedded into social and professional environments, it is critical to understand whether their interactions approximate human-like network dynamics. We develop a framework to study the network formation behaviors of multiple LLM agents and benchmark them against human decisions. We find that LLMs consistently reproduce fundamental micro-level principles such as preferential attachment, triadic closure, and homophily, as well as macro-level properties including community structure and small-world effects.
- Score: 5.513875403488053
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social networks profoundly influence how humans form opinions, exchange information, and organize collectively. As large language models (LLMs) are increasingly embedded into social and professional environments, it is critical to understand whether their interactions approximate human-like network dynamics. We develop a framework to study the network formation behaviors of multiple LLM agents and benchmark them against human decisions. Across synthetic and real-world settings, including friendship, telecommunication, and employment networks, we find that LLMs consistently reproduce fundamental micro-level principles such as preferential attachment, triadic closure, and homophily, as well as macro-level properties including community structure and small-world effects. Importantly, the relative emphasis of these principles adapts to context: for example, LLMs favor homophily in friendship networks but heterophily in organizational settings, mirroring patterns of social mobility. A controlled human-subject survey confirms strong alignment between LLMs and human participants in link-formation decisions. These results establish that LLMs can serve as powerful tools for social simulation and synthetic data generation, while also raising critical questions about bias, fairness, and the design of AI systems that participate in human networks.
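The micro-level principles named in the abstract are standard generative mechanisms from network science. As a rough illustration of the first, here is a minimal pure-Python sketch of preferential attachment, the "rich get richer" rule that produces heavy-tailed degree distributions; the function name and parameters are illustrative and not taken from the paper.

```python
import random

def preferential_attachment(n_nodes, m=2, seed=0):
    """Grow a graph where each new node attaches to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a triangle so every node has nonzero degree.
    edges = [(0, 1), (0, 2), (1, 2)]
    degree = {0: 2, 1: 2, 2: 2}
    # Sampling uniformly from this repeated-endpoints list is
    # equivalent to sampling nodes proportionally to their degree.
    endpoints = [u for e in edges for u in e]
    for new in range(3, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        degree[new] = 0
        for t in targets:
            edges.append((new, t))
            endpoints += [new, t]
            degree[new] += 1
            degree[t] += 1
    return degree

deg = preferential_attachment(500)
avg = sum(deg.values()) / len(deg)
# Rich-get-richer dynamics concentrate edges on early nodes,
# producing hubs with degree far above the mean.
print(max(deg.values()) > 3 * avg)
```

Triadic closure and homophily can be probed analogously, by biasing the target choice toward friends-of-friends or toward nodes with matching attributes.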
Related papers
- Neural Synchrony Between Socially Interacting Language Models [52.74586779814636]
Large language models (LLMs) are widely accepted as powerful approximations of human behavior.
It remains controversial whether they can be meaningfully compared to human social minds.
arXiv Detail & Related papers (2026-02-19T20:33:54Z) - Social Simulations with Large Language Model Risk Utopian Illusion [61.358959720048354]
We introduce a systematic framework for analyzing large language models' behavior in social simulation.
Our approach simulates multi-agent interactions through chatroom-style conversations and analyzes them across five linguistic dimensions.
Our findings reveal that LLMs do not faithfully reproduce genuine human behavior but instead reflect overly idealized versions of it.
arXiv Detail & Related papers (2025-10-24T06:08:41Z) - Learning to Make Friends: Coaching LLM Agents toward Emergent Social Ties [3.1146704506932985]
Large language model (LLM) agents reproduce the complex social dynamics that characterize human online behavior.
We present a multi-agent LLM simulation framework in which agents repeatedly interact, evaluate one another, and adapt their behavior through in-context learning.
arXiv Detail & Related papers (2025-10-22T07:00:33Z) - Population-Aligned Persona Generation for LLM-based Social Simulation [58.84363795421489]
We propose a systematic framework for synthesizing high-quality, population-aligned persona sets for social simulation.
Our approach begins by leveraging large language models to generate narrative personas from long-term social media data.
To address the needs of specific simulation contexts, we introduce a task-specific module that adapts the globally aligned persona set to targeted subpopulations.
arXiv Detail & Related papers (2025-09-12T10:43:47Z) - DynamiX: Large-Scale Dynamic Social Network Simulator [101.65679342680542]
DynamiX is a novel large-scale social network simulator dedicated to dynamic social network modeling.
For opinion leaders, we propose an information-stream-based link prediction method recommending potential users with similar stances.
For ordinary users, we construct an inequality-oriented behavior decision-making module.
arXiv Detail & Related papers (2025-07-26T12:13:30Z) - LLMs are Introvert [21.542534041341774]
Large language models (LLMs) offer new potential for simulating psychological aspects of information spread.
Initial experiments revealed significant gaps between LLM-generated behaviors and authentic human dynamics.
We propose the Social Information Processing-based Chain of Thought (SIP-CoT) mechanism enhanced by emotion-guided memory.
arXiv Detail & Related papers (2025-07-08T03:32:38Z) - From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning [63.25540801694765]
Large Language Models (LLMs) demonstrate striking linguistic abilities, yet whether they achieve this same balance remains unclear.
We apply the Information Bottleneck principle to quantitatively compare how LLMs and humans navigate this compression-meaning trade-off.
arXiv Detail & Related papers (2025-05-21T16:29:00Z) - Characterizing LLM-driven Social Network: The Chirper.ai Case [24.057352135719555]
Large language models (LLMs) demonstrate the ability to simulate human decision-making processes.
This paper presents a large-scale analysis of Chirper.ai, an X/Twitter-like social network entirely populated by LLM agents.
We examine key differences between LLM agents and humans in posting behaviors, abusive content, and social network structures.
arXiv Detail & Related papers (2025-04-14T14:53:31Z) - DeepSeek-Inspired Exploration of RL-based LLMs and Synergy with Wireless Networks: A Survey [62.697565282841026]
Reinforcement learning (RL)-based large language models (LLMs) have gained significant attention.
Wireless networks require the empowerment of RL-based LLMs.
Wireless networks provide a vital infrastructure for the efficient training, deployment, and distributed inference of RL-based LLMs.
arXiv Detail & Related papers (2025-03-13T01:59:11Z) - Large Language Model Driven Agents for Simulating Echo Chamber Formation [5.6488384323017]
The rise of echo chambers on social media platforms has heightened concerns about polarization and the reinforcement of existing beliefs.
Traditional approaches for simulating echo chamber formation have often relied on predefined rules and numerical simulations.
We present a novel framework that leverages large language models (LLMs) as generative agents to simulate echo chamber dynamics.
arXiv Detail & Related papers (2025-02-25T12:05:11Z) - Emergence of human-like polarization among large language model agents [79.96817421756668]
We simulate a networked system involving thousands of large language model agents, discovering that their social interactions result in human-like polarization.
Similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but also position such simulations as a valuable testbed for identifying plausible strategies to mitigate polarization and its consequences.
arXiv Detail & Related papers (2025-01-09T11:45:05Z) - A Survey on Large Language Models for Communication, Network, and Service Management: Application Insights, Challenges, and Future Directions [37.427638898804055]
Large Language Models (LLMs) have received tremendous attention due to their unparalleled capabilities in various Natural Language Processing (NLP) tasks.
This survey investigates the integration of LLMs across different communication network domains, including mobile networks and related technologies, vehicular networks, cloud-based networks, and fog/edge-based networks.
arXiv Detail & Related papers (2024-12-16T20:01:36Z) - Engagement-Driven Content Generation with Large Language Models [8.049552839071918]
Large Language Models (LLMs) exhibit significant persuasion capabilities in one-on-one interactions.
This study investigates the potential social impact of LLMs in interconnected users and complex opinion dynamics.
arXiv Detail & Related papers (2024-11-20T10:40:08Z) - Static network structure cannot stabilize cooperation among Large Language Model agents [6.868298200380496]
Large language models (LLMs) are increasingly used to model human social behavior.
This study aims to identify parallels in cooperative behavior between LLMs and humans.
arXiv Detail & Related papers (2024-11-15T15:52:15Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks [47.13391046553908]
In artificial networks, the effectiveness of these models relies on their ability to build task-specific representations.
Prior studies highlight that different initializations can place networks in either a lazy regime, where representations remain static, or a rich/feature learning regime, where representations evolve dynamically.
These solutions capture the evolution of representations and the Neural Kernel across the spectrum from the rich to the lazy regimes.
arXiv Detail & Related papers (2024-09-22T23:19:04Z) - LLMs generate structurally realistic social networks but overestimate political homophily [42.229210482614356]
We develop three prompting methods for network generation and compare the generated networks to real social networks.
We find that more realistic networks are generated with "local" methods, where the LLM constructs relations for one persona at a time.
We also find that the generated networks match real networks on many characteristics, including density, clustering, community structure, and degree.
arXiv Detail & Related papers (2024-08-29T15:36:52Z) - Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z) - Do LLM Agents Exhibit Social Behavior? [5.094340963261968]
State-Understanding-Value-Action (SUVA) is a framework to systematically analyze responses in social contexts.
It assesses social behavior through both their final decisions and the response generation processes leading to those decisions.
We demonstrate that utterance-based reasoning reliably predicts LLMs' final actions.
arXiv Detail & Related papers (2023-12-23T08:46:53Z) - Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z) - Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View [60.80731090755224]
This paper probes the collaboration mechanisms among contemporary NLP systems by practical experiments with theoretical insights.
We fabricate four unique 'societies' comprised of LLM agents, where each agent is characterized by a specific 'trait' (easy-going or overconfident) and engages in collaboration with a distinct 'thinking pattern' (debate or reflection).
Our results further illustrate that LLM agents manifest human-like social behaviors, such as conformity and consensus reaching, mirroring social psychology theories.
arXiv Detail & Related papers (2023-10-03T15:05:52Z) - Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z) - Influence of External Information on Large Language Models Mirrors Social Cognitive Patterns [51.622612759892775]
Social cognitive theory explains how people learn and acquire knowledge through observing others.
Recent years have witnessed the rapid development of large language models (LLMs).
LLMs, as AI agents, can observe external information, which shapes their cognition and behaviors.
arXiv Detail & Related papers (2023-05-08T16:10:18Z) - Reward-Sharing Relational Networks in Multi-Agent Reinforcement Learning as a Framework for Emergent Behavior [0.0]
We integrate 'social' interactions into the MARL setup through a user-defined relational network.
We examine the effects of agent-agent relations on the rise of emergent behaviors.
arXiv Detail & Related papers (2022-07-12T23:27:42Z) - Social learning spontaneously emerges by searching optimal heuristics with deep reinforcement learning [0.0]
We employ a deep reinforcement learning model to optimize the social learning strategies of agents in a cooperative game in a multi-dimensional landscape.
We find that the agent spontaneously learns various concepts of social learning, such as copying, focusing on frequent and well-performing neighbors, self-comparison, and the importance of balancing between individual and social learning.
We demonstrate the superior performance of the reinforcement learning agent in various environments, including temporally changing environments and real social networks.
arXiv Detail & Related papers (2022-04-26T15:10:27Z) - From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z) - I Know Where You Are Coming From: On the Impact of Social Media Sources
on AI Model Performance [79.05613148641018]
We will study the performance of different machine learning models when being learned on multi-modal data from different social networks.
Our initial experimental results reveal that social network choice impacts the performance.
arXiv Detail & Related papers (2020-02-05T11:10:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.