Network Formation and Dynamics Among Multi-LLMs
- URL: http://arxiv.org/abs/2402.10659v3
- Date: Sun, 2 Jun 2024 13:50:14 GMT
- Title: Network Formation and Dynamics Among Multi-LLMs
- Authors: Marios Papachristou, Yuan Yuan
- Abstract summary: We show that large language models (LLMs) exhibit key social network principles when asked about their preferences in network formation.
We also investigate LLMs' decision-making based on real-world networks, revealing that triadic closure and homophily have a stronger influence than preferential attachment.
- Score: 5.8418144988203915
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social networks shape opinions, behaviors, and information dissemination in human societies. As large language models (LLMs) increasingly integrate into social and professional environments, understanding their behavior within the context of social interactions and networks becomes essential. Our study analyzes LLMs' network formation behavior to examine whether the dynamics of multiple LLMs are similar to or different from human social dynamics. We observe that LLMs exhibit key social network principles, including preferential attachment, triadic closure, homophily, community structure, and the small-world phenomenon, when asked about their preferences in network formation. We also investigate LLMs' decision-making based on real-world networks, revealing that triadic closure and homophily have a stronger influence than preferential attachment and that LLMs perform well in network formation predictions. Overall, our study opens up new possibilities for using LLMs in network science research and helps develop socially aware LLMs by shedding light on their social interaction behaviors and exploring their impacts on social dynamics.
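The network principles named in the abstract can be grounded with a short, hypothetical sketch (not code from the paper): it scores candidate ties for a focal node by triadic closure (shared neighbors), preferential attachment (degree), and homophily (a shared group attribute), then assembles the kind of prompt one might send to an LLM to elicit its network-formation preference. The karate-club graph, the `club` attribute, and the prompt wording are all illustrative assumptions.
```python
# Hypothetical sketch: score candidate ties by triadic closure, preferential
# attachment, and homophily, then build an illustrative prompt for an LLM.
import networkx as nx

G = nx.karate_club_graph()  # stand-in for a real social network
focal = 0
candidates = [v for v in G.nodes if v != focal and not G.has_edge(focal, v)]

def principle_scores(G, focal, v):
    shared = len(list(nx.common_neighbors(G, focal, v)))            # triadic closure
    degree = G.degree(v)                                            # preferential attachment
    same_group = int(G.nodes[focal]["club"] == G.nodes[v]["club"])  # homophily proxy
    return {"shared": shared, "degree": degree, "same_group": same_group}

scores = {v: principle_scores(G, focal, v) for v in candidates}

# Assemble a prompt; the LLM's pick can then be compared with the principle
# that best explains it (e.g., highest degree vs. most shared neighbors).
lines = [f"You are person {focal} choosing one new connection."]
for v, s in sorted(scores.items()):
    lines.append(
        f"Person {v}: {s['shared']} mutual friends, {s['degree']} connections, "
        f"{'same' if s['same_group'] else 'different'} community."
    )
lines.append("Which person do you connect with, and why?")
prompt = "\n".join(lines)
print(prompt)
```
Comparing the LLM's stated choice against, say, the highest-degree candidate versus the candidate with the most mutual friends is one simple way to see which principle dominates its preference.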
Related papers
- LLMs generate structurally realistic social networks but overestimate political homophily [42.229210482614356]
We develop three prompting methods for network generation and compare the generated networks to real social networks.
We find that more realistic networks are generated with "local" methods, where the LLM constructs relations for one persona at a time.
We also find that the generated networks match real networks on many characteristics, including density, clustering, community structure, and degree (a minimal sketch of such a comparison follows this entry).
arXiv Detail & Related papers (2024-08-29T15:36:52Z)
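The entry above compares LLM-generated networks with real ones on density, clustering, community structure, and degree. A minimal sketch of such a comparison, assuming `networkx` and using a random graph as a stand-in for an LLM-generated edge list, could look as follows.
```python
# Hypothetical comparison of basic structural statistics between a reference
# network and a stand-in "generated" network of the same size.
import networkx as nx
from networkx.algorithms import community

def network_summary(G):
    comms = community.greedy_modularity_communities(G)
    degrees = [d for _, d in G.degree()]
    return {
        "density": nx.density(G),
        "avg_clustering": nx.average_clustering(G),
        "modularity": community.modularity(G, comms),
        "mean_degree": sum(degrees) / len(degrees),
    }

reference = nx.karate_club_graph()  # placeholder for a real social network
# Placeholder for an edge list produced by an LLM prompting method.
generated = nx.gnm_random_graph(reference.number_of_nodes(),
                                reference.number_of_edges(), seed=0)

for name, graph in [("reference", reference), ("generated", generated)]:
    print(name, network_summary(graph))
```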
- Generative AI-in-the-loop: Integrating LLMs and GPTs into the Next Generation Networks [11.509880721677156]
Large language models (LLMs) have recently emerged, demonstrating near-human-level performance in cognitive tasks.
We propose the concept of "generative AI-in-the-loop".
We believe that combining LLMs and ML models allows both to leverage their respective capabilities and achieve better results than either model alone.
arXiv Detail & Related papers (2024-06-06T17:25:07Z)
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z)
- Large Language Models for Social Networks: Applications, Challenges, and Solutions [6.6473450630285225]
Large Language Models (LLMs) are transforming the way people generate, explore, and engage with content.
We study how we can develop LLM applications for online social networks.
arXiv Detail & Related papers (2024-01-04T23:37:48Z)
- Do LLM Agents Exhibit Social Behavior? [5.094340963261968]
State-Understanding-Value-Action (SUVA) is a framework to systematically analyze responses in social contexts.
It assesses LLMs' social behavior through both their final decisions and the response generation processes leading to those decisions.
We demonstrate that utterance-based reasoning reliably predicts LLMs' final actions.
arXiv Detail & Related papers (2023-12-23T08:46:53Z)
- Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View [60.80731090755224]
This paper probes the collaboration mechanisms among contemporary NLP systems through practical experiments combined with theoretical insights.
We fabricate four unique "societies" comprised of LLM agents, where each agent is characterized by a specific "trait" (easy-going or overconfident) and engages in collaboration with a distinct "thinking pattern" (debate or reflection).
Our results further illustrate that LLM agents manifest human-like social behaviors, such as conformity and consensus reaching, mirroring social psychology theories.
arXiv Detail & Related papers (2023-10-03T15:05:52Z)
- Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z)
- Influence of External Information on Large Language Models Mirrors Social Cognitive Patterns [51.622612759892775]
Social cognitive theory explains how people learn and acquire knowledge through observing others.
Recent years have witnessed the rapid development of large language models (LLMs).
LLMs, as AI agents, can observe external information, which shapes their cognition and behaviors.
arXiv Detail & Related papers (2023-05-08T16:10:18Z)
- Social learning spontaneously emerges by searching optimal heuristics with deep reinforcement learning [0.0]
We employ a deep reinforcement learning model to optimize the social learning strategies of agents in a cooperative game in a multi-dimensional landscape.
We find that the agent spontaneously learns various concepts of social learning, such as copying, focusing on frequent and well-performing neighbors, self-comparison, and the importance of balancing between individual and social learning.
We demonstrate the superior performance of the reinforcement learning agent in various environments, including temporally changing environments and real social networks.
arXiv Detail & Related papers (2022-04-26T15:10:27Z)
- I Know Where You Are Coming From: On the Impact of Social Media Sources on AI Model Performance [79.05613148641018]
We study the performance of different machine learning models trained on multi-modal data from different social networks.
Our initial experimental results reveal that the choice of social network impacts model performance.
arXiv Detail & Related papers (2020-02-05T11:10:44Z)