Large Language Models can Achieve Social Balance
- URL: http://arxiv.org/abs/2410.04054v1
- Date: Sat, 5 Oct 2024 06:23:28 GMT
- Title: Large Language Models can Achieve Social Balance
- Authors: Pedro Cisneros-Velarde
- Abstract summary: Social balance is a concept in sociology which states that if every three individuals in a population achieve certain structures of positive or negative interactions, then the whole population ends up as one faction of positive interactions or divided into two or more antagonistic factions.
In this paper, we consider a group of interacting large language models (LLMs) and study how, after continuous interactions, they can achieve social balance.
- Score: 2.8282906214258805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social balance is a concept in sociology which states that if every three individuals in a population achieve certain structures of positive or negative interactions, then the whole population ends up as one faction of positive interactions or divided into two or more antagonistic factions. In this paper, we consider a group of interacting large language models (LLMs) and study how, after continuous interactions, they can achieve social balance. Across three different LLM models, we found that social balance depends on (i) whether interactions are updated based on "relationships", "appraisals", or "opinions"; (ii) whether agents update their interactions based on homophily or influence from their peers; and (iii) the number of simultaneous interactions the LLMs consider. When social balance is achieved, its particular structure of positive or negative interactions depends on these three conditions and differs across LLM models and sizes. The stability of interactions and the justification for their update also vary across models. Thus, social balance is driven by the pre-training and alignment particular to each LLM model.
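The balance notion at work here is structural balance on signed graphs: the classical (Heider/Cartwright-Harary) version permits at most two factions, while Davis's weaker version allows two or more, matching the abstract's "two or more antagonistic factions". A minimal sketch of both checks on a signed interaction matrix follows; the example matrix is illustrative, not data from the paper.

```python
# Minimal structural-balance checks on a signed interaction matrix.
# The example matrix is illustrative, not data from the paper.
from itertools import combinations

import numpy as np

def triangles(S: np.ndarray):
    """Yield the three edge signs of every triangle in a symmetric
    sign matrix S with off-diagonal entries in {+1, -1}."""
    n = S.shape[0]
    for i, j, k in combinations(range(n), 3):
        yield S[i, j], S[j, k], S[i, k]

def is_classically_balanced(S: np.ndarray) -> bool:
    # Heider/Cartwright-Harary: every triangle's signs multiply to +1,
    # which forces one faction or exactly two antagonistic factions.
    return all(a * b * c > 0 for a, b, c in triangles(S))

def is_weakly_balanced(S: np.ndarray) -> bool:
    # Davis: only triangles with exactly one negative edge are forbidden;
    # all-negative triangles are allowed, so two or more factions can form.
    for signs in triangles(S):
        if sum(1 for s in signs if s < 0) == 1:
            return False
    return True

# Two antagonistic factions {0, 1} vs {2, 3}: positive within, negative across.
S = np.array([[ 0,  1, -1, -1],
              [ 1,  0, -1, -1],
              [-1, -1,  0,  1],
              [-1, -1,  1,  0]])
print(is_classically_balanced(S), is_weakly_balanced(S))  # True True
```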
Related papers
- The Dynamics of Social Conventions in LLM populations: Spontaneous Emergence, Collective Biases and Tipping Points [0.0]
We investigate the dynamics of conventions within populations of Large Language Model (LLM) agents using simulated interactions.
We show that globally accepted social conventions can spontaneously arise from local interactions between communicating LLMs.
Minority groups of committed LLMs can drive social change by establishing new social conventions.
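For intuition, convention emergence with a committed minority is often modeled with naming-game dynamics. A minimal non-LLM sketch in that spirit follows; plain agents stand in for the paper's LLM agents, and the population size, commitment fraction, and step count are illustrative assumptions, not the paper's setup.

```python
# Naming-game sketch: a committed minority saying "B" can tip a population
# that starts on convention "A". All parameters are illustrative.
import random

N, STEPS, COMMITTED_FRACTION = 100, 30000, 0.2
committed = set(range(int(N * COMMITTED_FRACTION)))   # always say "B"
memory = {i: ({"B"} if i in committed else {"A"}) for i in range(N)}

random.seed(0)
for _ in range(STEPS):
    speaker, hearer = random.sample(range(N), 2)
    word = random.choice(sorted(memory[speaker]))
    if word in memory[hearer]:                 # success: both collapse onto word
        if speaker not in committed:
            memory[speaker] = {word}
        if hearer not in committed:
            memory[hearer] = {word}
    elif hearer not in committed:              # failure: hearer learns the word
        memory[hearer].add(word)

adopters = sum(1 for i in range(N) if memory[i] == {"B"})
print(f"{adopters}/{N} agents settled on the minority convention 'B'")
```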
arXiv Detail & Related papers (2024-10-11T16:16:38Z)
- SocialGaze: Improving the Integration of Human Social Norms in Large Language Models [28.88929472131529]
We introduce the task of judging social acceptance.
Social acceptance requires models to judge and rationalize the acceptability of people's actions in social situations.
We find that large language models' understanding of social acceptance is often misaligned with human consensus.
arXiv Detail & Related papers (2024-10-11T10:35:58Z)
- Breaking Bias, Building Bridges: Evaluation and Mitigation of Social Biases in LLMs via Contact Hypothesis [23.329280888159744]
Large Language Models (LLMs) perpetuate social biases, reflecting prejudices in their training data and reinforcing societal stereotypes and inequalities.
We propose a unique debiasing technique, Social Contact Debiasing (SCD), that instruction-tunes these models with unbiased responses to prompts.
Our research demonstrates that LLM responses exhibit social biases when subject to contact probing, but more importantly, these biases can be reduced by up to 40% with a single epoch of instruction tuning of LLaMA 2 under our SCD strategy.
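As a rough illustration of the data side of such a strategy, one could assemble contact-style prompts paired with identical unbiased responses for supervised fine-tuning. The scenario template, group names, response text, and file name below are invented placeholders, not the paper's actual prompts or data.

```python
# Sketch: build a contact-probing SFT dataset where the target response is
# identical across groups by design. All strings below are placeholders.
import json

GROUPS = ["group A", "group B"]  # hypothetical social groups
TEMPLATE = ("You are working closely on a team project with a colleague "
            "from {group}. How well do you expect the collaboration to go?")
UNBIASED_RESPONSE = ("Team outcomes depend on skills, communication, and "
                     "effort, not on a colleague's group membership.")

with open("scd_sft.jsonl", "w") as f:
    for group in GROUPS:
        record = {
            "instruction": TEMPLATE.format(group=group),
            "response": UNBIASED_RESPONSE,  # same answer for every group
        }
        f.write(json.dumps(record) + "\n")
# The resulting JSONL can feed any standard supervised fine-tuning pipeline
# for one epoch, mirroring the single-epoch tuning reported in the abstract.
```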
arXiv Detail & Related papers (2024-07-02T07:58:46Z)
- SocialBench: Sociality Evaluation of Role-Playing Conversational Agents [85.6641890712617]
Large language models (LLMs) have advanced the development of various AI conversational agents.
SocialBench is the first benchmark designed to evaluate the sociality of role-playing conversational agents at both individual and group levels.
We find that agents that excel at the individual level do not necessarily perform well at the group level.
arXiv Detail & Related papers (2024-03-20T15:38:36Z)
- Do LLM Agents Exhibit Social Behavior? [5.094340963261968]
State-Understanding-Value-Action (SUVA) is a framework to systematically analyze responses in social contexts.
It assesses social behavior through both the models' final decisions and the response generation processes leading to those decisions.
We demonstrate that utterance-based reasoning reliably predicts LLMs' final actions.
arXiv Detail & Related papers (2023-12-23T08:46:53Z)
- SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z)
- Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z)
- Flexible social inference facilitates targeted social learning when rewards are not observable [58.762004496858836]
Groups coordinate more effectively when individuals are able to learn from others' successes.
We suggest that social inference capacities may help bridge this gap, allowing individuals to update their beliefs about others' underlying knowledge and success from observable trajectories of behavior.
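A minimal Bayesian sketch of this kind of inference: update the probability that a partner knows the reward location from whether their visible movements revisit a location, without ever observing their rewards. The prior and likelihoods below are illustrative assumptions, not quantities from the paper.

```python
# Infer whether a partner is knowledgeable from observed choices alone.
# Prior and likelihoods are illustrative assumptions.
P_KNOWS = 0.5                # prior that the partner knows the reward location
P_REVISIT_IF_KNOWS = 0.9     # knowledgeable agents exploit (revisit)
P_REVISIT_IF_NOT = 0.3       # naive agents mostly explore

def update(p_knows: float, revisited: bool) -> float:
    """One Bayes update of P(partner knows) from one observed choice."""
    like_k = P_REVISIT_IF_KNOWS if revisited else 1 - P_REVISIT_IF_KNOWS
    like_n = P_REVISIT_IF_NOT if revisited else 1 - P_REVISIT_IF_NOT
    joint_k = p_knows * like_k
    return joint_k / (joint_k + (1 - p_knows) * like_n)

belief = P_KNOWS
for revisited in [True, True, True, False, True]:  # observed trajectory
    belief = update(belief, revisited)
    print(f"P(partner knows the reward location) = {belief:.3f}")
```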
arXiv Detail & Related papers (2022-12-01T21:04:03Z)
- SSAGCN: Social Soft Attention Graph Convolution Network for Pedestrian Trajectory Prediction [59.064925464991056]
We propose a new prediction model named the Social Soft Attention Graph Convolution Network (SSAGCN).
SSAGCN aims to simultaneously handle social interactions among pedestrians and scene interactions between pedestrians and environments.
Experiments on publicly available datasets demonstrate the effectiveness of SSAGCN, which achieves state-of-the-art results.
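As a toy illustration of the soft social attention idea, and not SSAGCN's actual architecture, one can weight neighboring pedestrians by a softmax over negative pairwise distances and aggregate their states:

```python
# Toy soft social attention: nearer pedestrians get larger weights.
# This is an illustration of the idea, not SSAGCN's architecture.
import numpy as np

def social_attention(positions: np.ndarray, states: np.ndarray) -> np.ndarray:
    """positions: (n, 2) pedestrian coordinates; states: (n, d) features.
    Returns (n, d) socially aggregated features."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)          # (n, n) pairwise distances
    logits = -dists
    np.fill_diagonal(logits, -np.inf)               # no self-attention
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ states                         # attention-weighted mix

pos = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
vel = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]])  # states = velocities
print(social_attention(pos, vel))
```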
arXiv Detail & Related papers (2021-12-05T01:49:18Z)
- PHASE: PHysically-grounded Abstract Social Events for Machine Social Perception [50.551003004553806]
We create a dataset of physically-grounded abstract social events, PHASE, that resemble a wide range of real-life social interactions.
PHASE is validated with human experiments demonstrating that humans perceive rich interactions in the social events.
As a baseline model, we introduce a Bayesian inverse planning approach, SIMPLE, which outperforms state-of-the-art feed-forward neural networks.
arXiv Detail & Related papers (2021-03-02T18:44:57Z)
- Interactions in information spread: quantification and interpretation using stochastic block models [3.5450828190071655]
In social networks, users' behavior results from the people they interact with, news in their feed, or trending topics.
Here, we propose a new model, the Interactive Mixed Membership Block Model (IMMSBM), which investigates the role of interactions between entities.
In inference tasks, taking interactions into account leads to average relative changes of up to 150% in the probability of an outcome, compared with non-interactive models.
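A small numeric illustration of why such interaction terms matter: compare an independent-exposure prediction with a jointly observed probability and compute the relative change. All probabilities below are invented for the example.

```python
# Why interaction terms matter: independent vs. interactive predictions.
# All probabilities are invented for illustration.
p_a = 0.10           # P(adopt | exposed to entity A alone)
p_b = 0.08           # P(adopt | exposed to entity B alone)

# Non-interactive model: exposures act independently.
p_independent = 1 - (1 - p_a) * (1 - p_b)

# Interactive model: A and B jointly reinforce each other (assumed value).
p_joint_observed = 0.43

relative_change = (p_joint_observed - p_independent) / p_independent
print(f"non-interactive prediction: {p_independent:.3f}")
print(f"with interactions:          {p_joint_observed:.3f}")
print(f"relative change:            {relative_change:+.0%}")  # +150%
```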
arXiv Detail & Related papers (2020-04-09T14:22:10Z)