LLMs generate structurally realistic social networks but overestimate political homophily
- URL: http://arxiv.org/abs/2408.16629v1
- Date: Thu, 29 Aug 2024 15:36:52 GMT
- Title: LLMs generate structurally realistic social networks but overestimate political homophily
- Authors: Serina Chang, Alicja Chaszczewicz, Emma Wang, Maya Josifovska, Emma Pierson, Jure Leskovec
- Abstract summary: We develop three prompting methods for network generation and compare the generated networks to real social networks.
We find that more realistic networks are generated with "local" methods, where the LLM constructs relations for one persona at a time.
We also find that the generated networks match real networks on many characteristics, including density, clustering, community structure, and degree.
- Score: 42.229210482614356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating social networks is essential for many applications, such as epidemic modeling and social simulations. Prior approaches either involve deep learning models, which require many observed networks for training, or stylized models, which are limited in their realism and flexibility. In contrast, LLMs offer the potential for zero-shot and flexible network generation. However, two key questions are: (1) are LLM-generated networks realistic, and (2) what are the risks of bias, given the importance of demographics in forming social ties? To answer these questions, we develop three prompting methods for network generation and compare the generated networks to real social networks. We find that more realistic networks are generated with "local" methods, where the LLM constructs relations for one persona at a time, compared to "global" methods that construct the entire network at once. We also find that the generated networks match real networks on many characteristics, including density, clustering, community structure, and degree. However, we find that LLMs emphasize political homophily over all other types of homophily and overestimate political homophily relative to real-world measures.
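The homophily measurement described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's code: it builds a toy "generated" network of personas with party labels, then scores political homophily as the observed share of same-party edges divided by the share expected under random mixing (a ratio above 1.0 indicates homophily). The labels, tie probabilities, and network size are all illustrative assumptions.

```python
# Hedged sketch of homophily measurement; not the authors' implementation.
import random
from itertools import combinations

random.seed(0)

# Toy network: 40 personas, half "D" and half "R", with ties biased
# toward same-party pairs to mimic an LLM that overestimates homophily.
party = {i: ("D" if i < 20 else "R") for i in range(40)}
edges = []
for u, v in combinations(range(40), 2):
    p_tie = 0.20 if party[u] == party[v] else 0.05  # illustrative probabilities
    if random.random() < p_tie:
        edges.append((u, v))

def homophily_ratio(edges, attr):
    """Observed same-attribute edge share / share expected at random mixing."""
    same = sum(1 for u, v in edges if attr[u] == attr[v])
    observed = same / len(edges)
    # Expected share if ties ignored the attribute: probability that a
    # uniformly random pair of distinct nodes shares the attribute value.
    counts = {}
    for a in attr.values():
        counts[a] = counts.get(a, 0) + 1
    n = len(attr)
    expected = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    return observed / expected

density = len(edges) / (40 * 39 / 2)
print(f"density        : {density:.3f}")
print(f"homophily ratio: {homophily_ratio(edges, party):.2f}")
```

The same ratio can be computed for any node attribute (gender, age bracket, religion), which is how one would check the paper's claim that political homophily is exaggerated relative to other attribute types.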
Related papers
- Characterizing LLM-driven Social Network: The Chirper.ai Case [24.057352135719555]
Large language models (LLMs) demonstrate the ability to simulate human decision-making processes.
This paper presents a large-scale analysis of Chirper.ai, an X/Twitter-like social network entirely populated by LLM agents.
We examine key differences between LLM agents and humans in posting behaviors, abusive content, and social network structures.
arXiv Detail & Related papers (2025-04-14T14:53:31Z)
- DeepSeek-Inspired Exploration of RL-based LLMs and Synergy with Wireless Networks: A Survey [62.697565282841026]
Reinforcement learning (RL)-based large language models (LLMs) have gained significant attention.
Wireless networks require the empowerment of RL-based LLMs.
Wireless networks provide a vital infrastructure for the efficient training, deployment, and distributed inference of RL-based LLMs.
arXiv Detail & Related papers (2025-03-13T01:59:11Z)
- Engagement-Driven Content Generation with Large Language Models [8.049552839071918]
Large Language Models (LLMs) exhibit significant persuasion capabilities in one-on-one interactions.
This study investigates the potential social impact of LLMs in interconnected users and complex opinion dynamics.
arXiv Detail & Related papers (2024-11-20T10:40:08Z)
- Static network structure cannot stabilize cooperation among Large Language Model agents [6.868298200380496]
Large language models (LLMs) are increasingly used to model human social behavior.
This study aims to identify parallels in cooperative behavior between LLMs and humans.
arXiv Detail & Related papers (2024-11-15T15:52:15Z)
- Personalized Wireless Federated Learning for Large Language Models [75.22457544349668]
Large Language Models (LLMs) have revolutionized natural language processing tasks.
Their deployment in wireless networks still faces challenges, i.e., a lack of privacy and security protection mechanisms.
We introduce two personalized wireless federated fine-tuning methods with low communication overhead.
arXiv Detail & Related papers (2024-04-20T02:30:21Z)
- Can LLMs Understand Computer Networks? Towards a Virtual System Administrator [15.469010487781931]
This paper is the first to conduct an exhaustive study on Large Language Models' comprehension of computer networks.
We evaluate our framework on multiple computer networks employing proprietary (e.g., GPT4) and open-source (e.g., Llama2) models.
arXiv Detail & Related papers (2024-04-19T07:41:54Z)
- Network Formation and Dynamics Among Multi-LLMs [5.8418144988203915]
We show that large language models (LLMs) exhibit key social network principles when asked about their preferences in network formation.
We also investigate LLMs' decision-making based on real-world networks, revealing that triadic closure and homophily have a stronger influence than preferential attachment.
arXiv Detail & Related papers (2024-02-16T13:10:14Z)
- Emergence of Scale-Free Networks in Social Interactions among Large Language Models [0.43967817176834806]
We analyze the interactions of multiple generative agents using GPT3.5-turbo as a language model.
We show how renaming agents removes the model's token priors and allows it to generate a range of networks, from random networks to more realistic scale-free networks.
arXiv Detail & Related papers (2023-12-11T18:43:16Z)
- Realistic Synthetic Social Networks with Graph Neural Networks [1.8275108630751837]
We evaluate the potential of Graph Neural Network (GNN) models for network generation for synthetic social networks.
We include social network specific measurements which allow evaluation of how realistically synthetic networks behave.
We find that the Gated Recurrent Attention Network (GRAN) extends well to social networks and, compared to the popular rule-based Recursive-MATrix (R-MAT) benchmark method, is better able to replicate realistic structural dynamics.
arXiv Detail & Related papers (2022-12-15T14:04:27Z)
- Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing [86.69698062642055]
Residual networks have shown great success and become indispensable in today's deep models.
We aim to re-investigate the training process of residual networks from a novel social psychology perspective of loafing.
We propose a new training strategy to strengthen the performance of residual networks.
arXiv Detail & Related papers (2022-10-09T03:15:51Z)
- Full network nonlocality [68.8204255655161]
We introduce the concept of full network nonlocality, which describes correlations that necessitate all links in a network to distribute nonlocal resources.
We show that the most well-known network Bell test does not witness full network nonlocality.
More generally, we point out that established methods for analysing local and theory-independent correlations in networks can be combined in order to deduce sufficient conditions for full network nonlocality.
arXiv Detail & Related papers (2021-05-19T18:00:02Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- A Multi-Semantic Metapath Model for Large Scale Heterogeneous Network Representation Learning [52.83948119677194]
We propose a multi-semantic metapath (MSM) model for large scale heterogeneous representation learning.
Specifically, we generate multi-semantic metapath-based random walks to construct the heterogeneous neighborhood to handle the unbalanced distributions.
We conduct systematic evaluations of the proposed framework on two challenging datasets: Amazon and Alibaba.
arXiv Detail & Related papers (2020-07-19T22:50:20Z)
- From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z)
- Detecting Communities in Heterogeneous Multi-Relational Networks: A Message Passing based Approach [89.19237792558687]
Community is a common characteristic of networks including social networks, biological networks, computer and information networks.
We propose an efficient message passing based algorithm to simultaneously detect communities for all homogeneous networks.
arXiv Detail & Related papers (2020-04-06T17:36:24Z)
- I Know Where You Are Coming From: On the Impact of Social Media Sources on AI Model Performance [79.05613148641018]
We study the performance of different machine learning models when trained on multi-modal data from different social networks.
Our initial experimental results reveal that social network choice impacts the performance.
arXiv Detail & Related papers (2020-02-05T11:10:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.