Do LLM Agents Exhibit Social Behavior?
- URL: http://arxiv.org/abs/2312.15198v2
- Date: Thu, 22 Feb 2024 04:31:26 GMT
- Title: Do LLM Agents Exhibit Social Behavior?
- Authors: Yan Leng, Yuan Yuan
- Abstract summary: This study investigates the extent to which Large Language Models (LLMs) exhibit key social interaction principles.
Our analysis suggests that LLM agents appear to exhibit a range of human-like social behaviors.
LLMs demonstrate a pronounced fairness preference, weaker positive reciprocity, and a more calculating approach in social learning compared to humans.
- Score: 6.018288992619851
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advances of Large Language Models (LLMs) are expanding their utility in
both academic research and practical applications. Recent social science
research has explored the use of these "black-box" LLM agents for simulating
complex social systems and potentially substituting human subjects in
experiments. Our study delves into this emerging domain, investigating the
extent to which LLMs exhibit key social interaction principles, such as social
learning, social preference, and cooperative behavior (indirect reciprocity),
in their interactions with humans and other agents. We develop a framework for
our study, wherein classical laboratory experiments involving human subjects
are adapted to use LLM agents. This approach involves step-by-step reasoning
that mirrors human cognitive processes and zero-shot learning to assess the
innate preferences of LLMs. Our analysis of LLM agents' behavior includes both
the primary effects and an in-depth examination of the underlying mechanisms.
Focusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a
range of human-like social behaviors such as distributional and reciprocity
preferences, responsiveness to group identity cues, engagement in indirect
reciprocity, and social learning capabilities. However, our analysis also
reveals notable differences: LLMs demonstrate a pronounced fairness preference,
weaker positive reciprocity, and a more calculating approach in social learning
compared to humans. These insights indicate that while LLMs hold great promise
for applications in social science research, such as in laboratory experiments
and agent-based modeling, the subtle behavioral differences between LLM agents
and humans warrant further investigation. Careful examination and development
of protocols in evaluating the social behaviors of LLMs are necessary before
directly applying these models to emulate human behavior.
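To make the framework concrete, here is a minimal sketch (illustrative only, not the authors' protocol) of adapting one classic laboratory task, the dictator game, to an LLM agent via a zero-shot, step-by-step prompt; `query_llm`, the endowment, and the answer format are all hypothetical stand-ins:

```python
# Minimal sketch: a dictator game posed to an LLM agent with zero-shot,
# step-by-step prompting. All names and parameters here are illustrative.

ENDOWMENT = 100  # hypothetical stake, in points

PROMPT = f"""You are a participant in an economic experiment.
You have been given {ENDOWMENT} points. You may transfer any whole number
of points (0 to {ENDOWMENT}) to an anonymous second participant; you keep the rest.
Think step by step about your decision, then give your final answer on the
last line in the form: TRANSFER: <number>"""

def run_trial(query_llm):
    """query_llm: hypothetical callable mapping a prompt string to a reply string."""
    reply = query_llm(PROMPT)
    # Scan from the end for the requested answer line and parse the transfer.
    for line in reversed(reply.splitlines()):
        if line.strip().upper().startswith("TRANSFER:"):
            return int(line.split(":", 1)[1].strip())
    return None  # reply did not follow the requested format

# Hypothetical usage: offers = [run_trial(backend) for _ in range(50)]
# A pronounced fairness preference would show up as offers clustered near 50.
```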
Related papers
- PersLLM: A Personified Training Approach for Large Language Models [63.75008885222351]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z) - Exploring Prosocial Irrationality for LLM Agents: A Social Cognition View [21.341128731357415]
Large language models (LLMs) have been shown to face hallucination issues because the data they are trained on often contains human bias.
We propose CogMir, an open-ended Multi-LLM Agents framework that utilizes hallucination properties to assess and enhance LLM Agents' social intelligence.
arXiv Detail & Related papers (2024-05-23T16:13:33Z) - Language Model Evolution: An Iterated Learning Perspective [27.63295869974611]
We draw parallels between the behavior of Large Language Models (LLMs) and the evolution of human culture.
Our approach involves leveraging Iterated Learning (IL), a Bayesian framework that elucidates how subtle biases are magnified during human cultural evolution.
This paper outlines key characteristics of agents' behavior in the Bayesian-IL framework, including predictions that are supported by experimental verification.
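To make the bias-amplification idea concrete, here is a hedged toy simulation (my construction, not the paper's experiments) of iterated learning with MAP learners over a binary hypothesis; a weak 55% prior preference typically comes to dominate the chains' final states:

```python
import random

# Toy iterated-learning chain: MAP learners amplify a weak prior bias.
# All parameters are illustrative, not taken from the paper.
PRIOR_A = 0.55    # weak innate preference for hypothesis "A"
NOISE = 0.2       # chance a produced token contradicts the speaker's hypothesis
N_OBS = 4         # noisy observations passed to each new learner
GENERATIONS = 20
CHAINS = 2000

def produce(h, n):
    """Speaker emits n tokens; each matches its hypothesis with prob 1 - NOISE."""
    flip = {"A": "B", "B": "A"}
    return [h if random.random() > NOISE else flip[h] for _ in range(n)]

def map_learn(data):
    """Pick the maximum-a-posteriori hypothesis; the prior breaks likelihood ties."""
    lik_a = lik_b = 1.0
    for token in data:
        lik_a *= (1 - NOISE) if token == "A" else NOISE
        lik_b *= (1 - NOISE) if token == "B" else NOISE
    return "A" if lik_a * PRIOR_A >= lik_b * (1 - PRIOR_A) else "B"

ends_at_a = 0
for _ in range(CHAINS):
    h = random.choice(["A", "B"])         # seed each chain at random
    for _ in range(GENERATIONS):
        h = map_learn(produce(h, N_OBS))  # each generation learns from the last
    ends_at_a += h == "A"

# The fraction ending at "A" typically lands well above the 0.55 prior.
print(f"prior P(A) = {PRIOR_A:.2f}; chains ending at A = {ends_at_a / CHAINS:.2f}")
```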
arXiv Detail & Related papers (2024-04-04T02:01:25Z) - Wait, It's All Token Noise? Always Has Been: Interpreting LLM Behavior Using Shapley Value [1.223779595809275]
Large language models (LLMs) have opened up exciting possibilities for simulating human behavior and cognitive processes.
However, the validity of utilizing LLMs as stand-ins for human subjects remains uncertain.
This paper presents a novel approach based on Shapley values to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output.
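As a sketch of the underlying computation (an illustrative construction, not the paper's code), exact Shapley values over a small set of prompt components can be computed by enumerating subsets; `value` is a hypothetical scoring function, e.g. the probability the model produces a target answer when prompted with just those components:

```python
from itertools import combinations
from math import factorial

def shapley(components, value):
    """Exact Shapley attribution: value(frozenset_of_components) -> model score.
    Enumerates all subsets, so this is only feasible for a handful of components."""
    n = len(components)
    phi = {c: 0.0 for c in components}
    for c in components:
        others = [x for x in components if x != c]
        for k in range(n):
            # Standard Shapley weight for coalitions of size k.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                s = frozenset(subset)
                phi[c] += weight * (value(s | {c}) - value(s))
    return phi

# Hypothetical usage, with value() wrapping (and caching) repeated LLM calls:
# phi = shapley(["persona", "task instructions", "examples"], value)
```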
arXiv Detail & Related papers (2024-03-29T22:49:43Z) - Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Model (LLM)-based human-agent collaboration for complex task solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z) - LLM-driven Imitation of Subrational Behavior : Illusion or Reality? [3.2365468114603937]
Existing work highlights the ability of Large Language Models to address complex reasoning tasks and mimic human communication.
We propose to investigate the use of LLMs to generate synthetic human demonstrations, which are then used to learn subrational agent policies.
We experimentally evaluate the ability of our framework to model subrationality through four simple scenarios.
arXiv Detail & Related papers (2024-02-13T19:46:39Z) - Can Large Language Model Agents Simulate Human Trust Behaviors? [75.69583811834073]
Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in applications such as social science.
In this paper, we focus on one of the most critical behaviors in human interactions, trust, and aim to investigate whether or not LLM agents can simulate human trust behaviors.
arXiv Detail & Related papers (2024-02-07T03:37:19Z) - LLM-Based Agent Society Investigation: Collaboration and Confrontation in Avalon Gameplay [57.202649879872624]
We present a novel framework designed to seamlessly adapt to Avalon gameplay.
The core of our proposed framework is a multi-agent system that enables efficient communication and interaction among agents.
Our results demonstrate the effectiveness of our framework in generating adaptive and intelligent agents.
arXiv Detail & Related papers (2023-10-23T14:35:26Z) - Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View [60.80731090755224]
This paper probes the collaboration mechanisms among contemporary NLP systems, combining practical experiments with theoretical insights.
We fabricate four unique "societies" composed of LLM agents, where each agent is characterized by a specific "trait" (easy-going or overconfident) and engages in collaboration with a distinct "thinking pattern" (debate or reflection).
Our results further illustrate that LLM agents manifest human-like social behaviors, such as conformity and consensus reaching, mirroring social psychology theories.
arXiv Detail & Related papers (2023-10-03T15:05:52Z) - Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z)