Network Formation and Dynamics Among Multi-LLMs
- URL: http://arxiv.org/abs/2402.10659v3
- Date: Sun, 2 Jun 2024 13:50:14 GMT
- Title: Network Formation and Dynamics Among Multi-LLMs
- Authors: Marios Papachristou, Yuan Yuan
- Abstract summary: We show that large language models (LLMs) exhibit key social network principles when asked about their preferences in network formation.
We also investigate LLMs' decision-making based on real-world networks, revealing that triadic closure and homophily have a stronger influence than preferential attachment.
- Score: 5.8418144988203915
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social networks shape opinions, behaviors, and information dissemination in human societies. As large language models (LLMs) increasingly integrate into social and professional environments, understanding their behavior within the context of social interactions and networks becomes essential. Our study analyzes LLMs' network formation behavior to examine whether the dynamics of multiple LLMs are similar to or different from human social dynamics. We observe that LLMs exhibit key social network principles, including preferential attachment, triadic closure, homophily, community structure, and the small-world phenomenon, when asked about their preferences in network formation. We also investigate LLMs' decision-making based on real-world networks, revealing that triadic closure and homophily have a stronger influence than preferential attachment and that LLMs perform well in network formation predictions. Overall, our study opens up new possibilities for using LLMs in network science research and helps develop socially aware LLMs by shedding light on their social interaction behaviors and exploring their impacts on social dynamics.
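As a concrete illustration of the principles the abstract names, the sketch below shows one way to audit a network for them with networkx. This is a minimal sketch, not code from the paper: the built-in karate club graph stands in for a network elicited from LLM agents, and the particular choices (greedy modularity partition, an Erdős–Rényi baseline for the small-world check) are illustrative assumptions.
```python
# A minimal sketch (not from the paper) of auditing a network for the
# principles named in the abstract, using networkx. The built-in karate
# club graph stands in for a network formed by LLM agents.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()
n, m = G.number_of_nodes(), G.number_of_edges()

# Triadic closure: high average clustering means many closed triangles.
clustering = nx.average_clustering(G)

# Homophily: assortativity over a categorical node attribute
# ("club" exists on this toy graph; substitute your own attribute).
homophily = nx.attribute_assortativity_coefficient(G, "club")

# Community structure: modularity of a greedy partition.
comms = community.greedy_modularity_communities(G)
modularity = community.modularity(G, comms)

# Small-world check: clustering well above a same-density random graph,
# at a comparably short average path length.
p = 2 * m / (n * (n - 1))
random_baseline = nx.average_clustering(nx.erdos_renyi_graph(n, p, seed=0))
path_length = nx.average_shortest_path_length(G)  # G is connected here

# Preferential attachment (rough proxy): a heavy-tailed degree sequence.
top_degrees = sorted((d for _, d in G.degree()), reverse=True)[:5]

print(f"clustering={clustering:.3f} (random baseline {random_baseline:.3f})")
print(f"homophily={homophily:.3f}  modularity={modularity:.3f}  "
      f"avg path={path_length:.3f}")
print("top degrees:", top_degrees)
```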
Related papers
- A Survey on Large Language Models for Communication, Network, and Service Management: Application Insights, Challenges, and Future Directions [37.427638898804055]
Large Language Models (LLMs) have received tremendous attention due to their unparalleled capabilities in various Natural Language Processing (NLP) tasks.
This survey investigates the integration of LLMs across different communication network domains, including mobile networks and related technologies, vehicular networks, cloud-based networks, and fog/edge-based networks.
arXiv Detail & Related papers (2024-12-16T20:01:36Z) - Engagement-Driven Content Generation with Large Language Models [8.049552839071918]
Large Language Models (LLMs) exhibit significant persuasion capabilities in one-on-one interactions.
This study investigates the potential social impact of LLMs on interconnected users and complex opinion dynamics.
arXiv Detail & Related papers (2024-11-20T10:40:08Z) - Static network structure cannot stabilize cooperation among Large Language Model agents [6.868298200380496]
Large language models (LLMs) are increasingly used to model human social behavior.
This study aims to identify parallels in cooperative behavior between LLMs and humans.
arXiv Detail & Related papers (2024-11-15T15:52:15Z) - From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks [47.13391046553908]
In artificial networks, the effectiveness of these models relies on their ability to build task-specific representations.
Prior studies highlight that different initializations can place networks in either a lazy regime, where representations remain static, or a rich/feature learning regime, where representations evolve dynamically.
These solutions capture the evolution of representations and the Neural Tangent Kernel across the spectrum from the rich to the lazy regime.
arXiv Detail & Related papers (2024-09-22T23:19:04Z) - LLMs generate structurally realistic social networks but overestimate political homophily [42.229210482614356]
We develop three prompting methods for network generation and compare the generated networks to real social networks.
We find that more realistic networks are generated with "local" methods, where the LLM constructs relations for one persona at a time.
We also find that the generated networks match real networks on many characteristics, including density, clustering, community structure, and degree; a comparison of this kind is sketched below.
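The following is a hedged sketch of what such a side-by-side comparison could look like; the input graphs, the greedy-modularity partition, and the Kolmogorov-Smirnov test on degree sequences are all illustrative assumptions, not the paper's method.
```python
# A sketch (assumed setup, not the paper's code) comparing a "real" and an
# LLM-generated network on density, clustering, community structure, and
# degree distribution.
import networkx as nx
from networkx.algorithms import community
from scipy.stats import ks_2samp  # two-sample KS test on degree sequences

def network_stats(G: nx.Graph) -> dict:
    comms = community.greedy_modularity_communities(G)
    return {
        "density": nx.density(G),
        "clustering": nx.average_clustering(G),
        "modularity": community.modularity(G, comms),
    }

def compare(real: nx.Graph, generated: nx.Graph) -> None:
    for name, G in [("real", real), ("generated", generated)]:
        print(name, network_stats(G))
    # Degree-distribution similarity: KS statistic (0 = identical samples).
    d_real = [d for _, d in real.degree()]
    d_gen = [d for _, d in generated.degree()]
    print("degree KS:", ks_2samp(d_real, d_gen).statistic)

# Toy usage with synthetic stand-ins for the two networks:
compare(nx.karate_club_graph(), nx.barabasi_albert_graph(34, 2, seed=0))
```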
arXiv Detail & Related papers (2024-08-29T15:36:52Z) - Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z) - Reward-Sharing Relational Networks in Multi-Agent Reinforcement Learning as a Framework for Emergent Behavior [0.0]
We integrate 'social' interactions into the MARL setup through a user-defined relational network.
We examine the effects of agent-agent relations on the rise of emergent behaviors.
arXiv Detail & Related papers (2022-07-12T23:27:42Z) - Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z) - From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning, which intelligently distributes ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z) - I Know Where You Are Coming From: On the Impact of Social Media Sources on AI Model Performance [79.05613148641018]
We study the performance of different machine learning models when trained on multi-modal data from different social networks.
Our initial experimental results reveal that the choice of social network impacts model performance.
arXiv Detail & Related papers (2020-02-05T11:10:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.