Can Large Language Model Agents Simulate Human Trust Behaviors?
- URL: http://arxiv.org/abs/2402.04559v2
- Date: Sun, 10 Mar 2024 13:48:43 GMT
- Title: Can Large Language Model Agents Simulate Human Trust Behaviors?
- Authors: Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Kai Shu, Adel Bibi,
Ziniu Hu, Philip Torr, Bernard Ghanem, Guohao Li
- Abstract summary: Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in applications such as social science.
In this paper, we focus on one of the most critical behaviors in human interactions, trust, and aim to investigate whether or not LLM agents can simulate human trust behaviors.
- Score: 75.69583811834073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Model (LLM) agents have been increasingly adopted as
simulation tools to model humans in applications such as social science.
However, one fundamental question remains: can LLM agents really simulate human
behaviors? In this paper, we focus on one of the most critical behaviors in
human interactions, trust, and aim to investigate whether or not LLM agents can
simulate human trust behaviors. We first find that LLM agents generally exhibit
trust behaviors, referred to as agent trust, under the framework of Trust
Games, which are widely recognized in behavioral economics. Then, we discover
that LLM agents can have high behavioral alignment with humans regarding trust
behaviors, particularly for GPT-4, indicating the feasibility of simulating
human trust behaviors with LLM agents. In addition, we probe into the biases in agent
trust and the differences in agent trust towards agents and humans. We also
explore the intrinsic properties of agent trust under conditions including
advanced reasoning strategies and external manipulations. We further offer
important implications of our discoveries for various scenarios where trust is
paramount. Our study provides new insights into the behaviors of LLM agents and
the fundamental analogy between LLMs and humans.
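To make the Trust Game setup concrete, here is a minimal sketch of one round with an LLM acting as the trustor. The `query_llm` helper, the prompt wording, and the fixed reciprocation rule are illustrative placeholders standing in for a real model client and a real trustee; none of them is the paper's protocol.

```python
def query_llm(prompt: str) -> str:
    """Stub so the sketch runs end-to-end; replace with a real model call."""
    return "5"

def trust_game_round(endowment: int = 10, multiplier: int = 3) -> dict:
    # The trustor (the LLM agent) chooses how much of its endowment to send.
    prompt = (
        f"You are a player in an economic game with ${endowment}. Any amount "
        f"you send to the other player is multiplied by {multiplier}; they "
        f"may return part of it. How much do you send? Answer with a number."
    )
    sent = min(endowment, max(0, int(query_llm(prompt).strip())))
    tripled = sent * multiplier
    # A trustee (another agent or a human) would choose the amount returned;
    # a fixed 50% reciprocation rule stands in here purely for illustration.
    returned = tripled // 2
    return {"sent": sent,
            "trustor_payoff": endowment - sent + returned,
            "trustee_payoff": tripled - returned}

print(trust_game_round())  # {'sent': 5, 'trustor_payoff': 12, 'trustee_payoff': 8}
```

The amount sent serves as the behavioral measure of trust; varying the prompt framing and the trustee's behavior is what allows comparison against human baselines.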
Related papers
- Value-Based Large Language Model Agent Simulation for Mutual Evaluation of Trust and Interpersonal Closeness [3.293744007011733]
Large language models (LLMs) have emerged as powerful tools for simulating complex social phenomena using human-like agents.
This study investigates the influence of value similarity on relationship-building among LLM agents through two experiments.
arXiv Detail & Related papers (2025-07-16T07:21:59Z) - A closer look at how large language models trust humans: patterns and biases [0.0]
Large language models (LLMs) and LLM-based agents increasingly interact with humans in decision-making contexts.
In trust-related contexts, LLMs implicitly rely on a form of effective trust to assist in and influence decision making.
We study whether LLM trust depends on three major trustworthiness dimensions of the human subject: competence, benevolence, and integrity.
We find that in most, but not all, cases LLM trust is strongly predicted by trustworthiness, and in some cases it is also biased by age, religion, and gender.
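As a rough illustration of the kind of analysis this abstract describes, the sketch below fits an ordinary least-squares regression of trust ratings on the three trustworthiness dimensions. The data are synthetic placeholders generated purely for demonstration; nothing here reproduces the paper's dataset, covariates, or coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
competence = rng.uniform(0, 1, n)
benevolence = rng.uniform(0, 1, n)
integrity = rng.uniform(0, 1, n)
# Synthetic trust scores with noise, for demonstration only.
trust = (0.5 * competence + 0.3 * benevolence + 0.2 * integrity
         + rng.normal(0, 0.1, n))

# OLS via least squares: trust ~ intercept + three trustworthiness dimensions
X = np.column_stack([np.ones(n), competence, benevolence, integrity])
coef, *_ = np.linalg.lstsq(X, trust, rcond=None)
print(dict(zip(["intercept", "competence", "benevolence", "integrity"],
               coef.round(3))))
```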
arXiv Detail & Related papers (2025-04-22T11:31:50Z) - Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents [61.132523071109354]
This paper investigates the interplay between AI developers, regulators and users, modelling their strategic choices under different regulatory scenarios.
Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" stances than pure game-theoretic agents.
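A hedged sketch of the style of model involved: a toy two-player normal-form game between a developer and a regulator, solved for pure-strategy Nash equilibria by enumeration. The actions and payoff numbers are invented for illustration and are not the paper's game.

```python
from itertools import product

# payoffs[(dev_action, reg_action)] = (developer_payoff, regulator_payoff)
# Numbers are invented for demonstration only.
payoffs = {
    ("comply", "enforce"): (2, 3),
    ("comply", "ignore"):  (2, 2),
    ("defect", "enforce"): (0, 1),
    ("defect", "ignore"):  (4, 0),
}

def pure_nash(payoffs):
    dev_actions = {d for d, _ in payoffs}
    reg_actions = {r for _, r in payoffs}
    equilibria = []
    for d, r in product(dev_actions, reg_actions):
        # Each player's action must be a best response to the other's.
        dev_best = all(payoffs[(d, r)][0] >= payoffs[(d2, r)][0]
                       for d2 in dev_actions)
        reg_best = all(payoffs[(d, r)][1] >= payoffs[(d, r2)][1]
                       for r2 in reg_actions)
        if dev_best and reg_best:
            equilibria.append((d, r))
    return equilibria

print(pure_nash(payoffs))  # [('comply', 'enforce')] with these toy payoffs
```

Replacing the rational best-response players with LLM agents is, roughly, how "pessimistic" deviations from pure game-theoretic play can be observed.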
arXiv Detail & Related papers (2025-04-11T15:41:21Z) - Measurement of LLM's Philosophies of Human Nature [113.47929131143766]
We design a standardized psychological scale specifically targeting large language models (LLMs).
We show that current LLMs exhibit a systemic lack of trust in humans.
We propose a mental loop learning framework that enables LLMs to continuously optimize their value systems.
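A minimal sketch of how such a scale might be administered to a model, assuming a generic chat-completion client. The items below are paraphrases written for illustration in the spirit of philosophies-of-human-nature instruments; they are not the paper's scale.

```python
ITEMS = [
    "Most people can be counted on to do what they say they will do.",
    "Most people are basically honest.",
    "Most people will act fairly even when it costs them something.",
]

def query_llm(prompt: str) -> str:
    """Stub so the sketch runs; replace with a real chat-completion call."""
    return "4"

def administer_scale(items=ITEMS) -> float:
    """Ask the model to rate each item 1-5 and return the mean score."""
    scores = []
    for item in items:
        reply = query_llm(
            "Rate your agreement with the statement below on a scale from "
            "1 (strongly disagree) to 5 (strongly agree). Reply with one digit.\n"
            f"Statement: {item}"
        )
        scores.append(int(reply.strip()[0]))
    return sum(scores) / len(scores)

print(administer_scale())  # 4.0 with the stub reply
```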
arXiv Detail & Related papers (2025-04-03T06:22:19Z) - A Survey on Trustworthy LLM Agents: Threats and Countermeasures [67.23228612512848]
Large Language Models (LLMs) and Multi-agent Systems (MAS) have significantly expanded the capabilities of LLM ecosystems.
We propose TrustAgent, a comprehensive framework for studying the trustworthiness of agents.
arXiv Detail & Related papers (2025-03-12T08:42:05Z) - Measuring and identifying factors of individuals' trust in Large Language Models [0.0]
Large Language Models (LLMs) can engage in human-like conversational exchanges.
We introduce the Trust-In-LLMs Index (TILLMI) as a new framework to measure individuals' trust in LLMs.
arXiv Detail & Related papers (2025-02-28T13:16:34Z) - Investigating and Extending Homans' Social Exchange Theory with Large Language Model based Agents [9.430661117447782]
Homans' Social Exchange Theory (SET) is widely recognized as a basic framework for understanding the formation and emergence of human civilizations and social structures.
Recent advances in large language models (LLMs) have shown promising capabilities in simulating human behaviors.
We construct a virtual society composed of three LLM agents and have them engage in a social exchange game to observe their behaviors.
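The sketch below mimics that setup with a three-agent exchange loop. To keep it self-contained, the agents follow a hand-coded reciprocity heuristic rather than LLM calls; in the paper, each decision would instead come from an LLM agent.

```python
from collections import defaultdict

AGENTS = ["A", "B", "C"]

def decide_gift(giver, receiver, memory):
    # Reciprocity heuristic: give more to whoever gave to you last round.
    return 2 if memory[(receiver, giver)] > 0 else 1

def run_exchange(rounds=5):
    memory = defaultdict(int)          # (giver, receiver) -> last gift size
    wealth = {a: 10 for a in AGENTS}
    for _ in range(rounds):
        for giver in AGENTS:
            for receiver in AGENTS:
                if giver == receiver:
                    continue
                gift = decide_gift(giver, receiver, memory)
                memory[(giver, receiver)] = gift
                wealth[giver] -= gift
                wealth[receiver] += gift
    return wealth

print(run_exchange())  # symmetric heuristics leave wealth balanced
```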
arXiv Detail & Related papers (2025-02-18T02:30:46Z) - Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games [7.504095239018173]
Large Language Model (LLM)-based agents increasingly undertake real-world tasks and engage with human society.
This study investigates how different personas and experimental framings affect these AI agents' altruistic behavior.
Despite being trained on extensive human-generated data, these AI agents cannot accurately predict human decisions.
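A minimal sketch of a persona-framed dictator game, assuming a hypothetical `query_llm` client; the persona strings and prompt wording are illustrative, not the paper's exact framings.

```python
def query_llm(system: str, user: str) -> str:
    """Stub so the sketch runs; replace with a real chat-completion call."""
    return "30"

def dictator_allocation(persona: str, endowment: int = 100) -> int:
    system = f"You are {persona}. Answer with a single number."
    user = (
        f"You have ${endowment} and are paired with an anonymous stranger. "
        f"Decide how much to give them; you keep the rest. How much do you give?"
    )
    give = int(query_llm(system, user).strip())
    return min(endowment, max(0, give))  # clamp to a valid allocation

# Sweep over framings to compare altruistic behavior across personas:
for persona in ["a selfless volunteer", "a profit-maximizing trader"]:
    print(persona, "->", dictator_allocation(persona))
```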
arXiv Detail & Related papers (2024-10-28T17:47:41Z) - Exploring Prosocial Irrationality for LLM Agents: A Social Cognition View [21.341128731357415]
Large language models (LLMs) have been shown to face hallucination issues because the data they are trained on often contains human bias.
We propose CogMir, an open-ended Multi-LLM Agents framework that utilizes hallucination properties to assess and enhance LLM Agents' social intelligence.
arXiv Detail & Related papers (2024-05-23T16:13:33Z) - LLM-driven Imitation of Subrational Behavior : Illusion or Reality? [3.2365468114603937]
Existing work highlights the ability of Large Language Models to address complex reasoning tasks and mimic human communication.
We propose to investigate the use of LLMs to generate synthetic human demonstrations, which are then used to learn subrational agent policies.
We experimentally evaluate the ability of our framework to model sub-rationality through four simple scenarios.
arXiv Detail & Related papers (2024-02-13T19:46:39Z) - Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models [4.742123770879715]
This work represents a step forward in understanding the deep relationship between NLP and human psychology through the lens of Open LLMs.
Our approach involves evaluating the intrinsic personality traits of Open LLM agents and determining the extent to which these agents can mimic human personalities.
arXiv Detail & Related papers (2024-01-13T16:41:40Z) - Towards Machines that Trust: AI Agents Learn to Trust in the Trust Game [11.788352764861369]
We present a theoretical analysis of the trust game, the canonical task for studying trust in behavioral and brain sciences.
Specifically, leveraging reinforcement learning to train our AI agents, we investigate learning trust under various parameterizations of this task.
Our theoretical analysis, corroborated by the simulation results presented, provides a mathematical basis for the emergence of trust in the trust game.
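As a worked illustration of trust emerging under reinforcement learning, the toy below has a trustor learn action values for how much to send against a stochastic trustee. The parameterization is an assumption chosen for demonstration, not the paper's analysis.

```python
import random

random.seed(0)
ENDOWMENT, MULT, ALPHA, EPS = 10, 3, 0.1, 0.1
q = [0.0] * (ENDOWMENT + 1)        # value estimate for each send amount

def trustee_return(tripled: float) -> float:
    # Stochastic trustee: returns between 20% and 80% of what it received,
    # so trusting pays off in expectation (mean return rate 0.5 * 3 > 1).
    return tripled * random.uniform(0.2, 0.8)

for episode in range(20_000):
    # Epsilon-greedy choice of how much to send.
    send = (random.randrange(ENDOWMENT + 1) if random.random() < EPS
            else max(range(ENDOWMENT + 1), key=q.__getitem__))
    payoff = ENDOWMENT - send + trustee_return(send * MULT)
    q[send] += ALPHA * (payoff - q[send])  # incremental value update

# Expected: a high amount, since trusting pays off on average here.
print("learned best amount to send:",
      max(range(ENDOWMENT + 1), key=q.__getitem__))
```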
arXiv Detail & Related papers (2023-12-20T09:32:07Z) - LLM-Based Agent Society Investigation: Collaboration and Confrontation in Avalon Gameplay [55.12945794835791]
Using Avalon as a testbed, we employ system prompts to guide LLM agents in gameplay.
We propose a novel framework, tailored for Avalon, that features a multi-agent system facilitating efficient communication and interaction.
Results affirm the framework's effectiveness in creating adaptive agents and suggest LLM-based agents' potential in navigating dynamic social interactions.
arXiv Detail & Related papers (2023-10-23T14:35:26Z) - Character-LLM: A Trainable Agent for Role-Playing [67.35139167985008]
Large language models (LLMs) can be used to serve as agents to simulate human behaviors.
We introduce Character-LLM, which teaches LLMs to act as specific people such as Beethoven, Queen Cleopatra, and Julius Caesar.
arXiv Detail & Related papers (2023-10-16T07:58:56Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally Inattentive Reinforcement Learning [85.86440477005523]
We study more human-like RL agents, which incorporate an established model of human irrationality, the Rational Inattention (RI) model.
Rationally Inattentive Reinforcement Learning (RIRL) models the cost of cognitive information processing using mutual information.
We show that using RIRL yields a rich spectrum of new equilibrium behaviors that differ from those found under rational assumptions.
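A small worked example of the mutual-information cost underlying Rational Inattention: computing I(S; A) between states and actions induced by a stochastic policy. The toy policy and state distribution are illustrative, not from the paper.

```python
import numpy as np

p_s = np.array([0.5, 0.5])            # state distribution p(s)
pi = np.array([[0.9, 0.1],            # pi(a|s): rows index states
               [0.2, 0.8]])

p_a = p_s @ pi                         # marginal action distribution p(a)
# I(S;A) = sum_s p(s) * KL( pi(.|s) || p(a) )
mi = sum(
    p_s[s] * np.sum(pi[s] * np.log(pi[s] / p_a))
    for s in range(len(p_s))
)
print(f"I(S;A) = {mi:.4f} nats")       # higher = a more attentive policy
```

Penalizing reward by this quantity makes precise, state-dependent behavior costly, which is how RIRL induces the boundedly rational equilibria the abstract describes.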
arXiv Detail & Related papers (2022-01-18T20:54:00Z) - Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
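A toy sketch of the reward-gifting idea, with invented payoffs: agent A searches for the smallest gift that flips agent B's best response to cooperation. This is a stylized illustration of the mechanism, not the paper's learned incentive function.

```python
# BASE[action] = (A's base reward, B's base reward); numbers are invented.
BASE = {"cooperate": (3.0, 1.0), "defect": (0.0, 2.0)}

def b_best_action(gift_if_cooperate: float) -> str:
    # B picks whichever action pays more once A's gift is included.
    coop = BASE["cooperate"][1] + gift_if_cooperate
    return "cooperate" if coop >= BASE["defect"][1] else "defect"

# A searches over gift sizes for the one maximizing its own net reward.
best = max(
    (g / 10 for g in range(0, 31)),
    key=lambda g: BASE[b_best_action(g)][0]
                  - (g if b_best_action(g) == "cooperate" else 0.0),
)
print("A's best incentive:", best, "->", b_best_action(best))
# A's best incentive: 1.0 -> cooperate
```

Even in this stylized form, a small transfer changes the other agent's incentives enough to sustain cooperation, which is the intuition behind learning incentive functions in general-sum games.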
arXiv Detail & Related papers (2020-06-10T20:12:38Z)