FairMindSim: Alignment of Behavior, Emotion, and Belief in Humans and LLM Agents Amid Ethical Dilemmas
- URL: http://arxiv.org/abs/2410.10398v2
- Date: Thu, 17 Oct 2024 15:02:31 GMT
- Title: FairMindSim: Alignment of Behavior, Emotion, and Belief in Humans and LLM Agents Amid Ethical Dilemmas
- Authors: Yu Lei, Hao Liu, Chengxing Xie, Songjia Liu, Zhiyu Yin, Canyu Chen, Guohao Li, Philip Torr, Zhen Wu,
- Abstract summary: We introduced FairMindSim, which simulates the moral dilemma through a series of unfair scenarios.
We used LLM agents to simulate human behavior, ensuring alignment across various stages.
Our findings indicate that, behaviorally, GPT-4o exhibits a stronger sense of social justice, while humans display a richer range of emotions.
- Score: 23.26678104324838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI alignment is a pivotal issue concerning AI control and safety. It should consider not only value-neutral human preferences but also moral and ethical considerations. In this study, we introduced FairMindSim, which simulates the moral dilemma through a series of unfair scenarios. We used LLM agents to simulate human behavior, ensuring alignment across various stages. To explore the various socioeconomic motivations, which we refer to as beliefs, that drive both humans and LLM agents as bystanders to intervene in unjust situations involving others, and how these beliefs interact to influence individual behavior, we incorporated knowledge from relevant sociological fields and proposed the Belief-Reward Alignment Behavior Evolution Model (BREM) based on the recursive reward model (RRM). Our findings indicate that, behaviorally, GPT-4o exhibits a stronger sense of social justice, while humans display a richer range of emotions. Additionally, we discussed the potential impact of emotions on behavior. This study provides a theoretical foundation for applications in aligning LLMs with altruistic values.
Related papers
- Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games [7.504095239018173]
Large Language Model (LLM)-based agents increasingly undertake real-world tasks and engage with human society.
This study investigates how different personas and experimental framings affect these AI agents' altruistic behavior.
Despite being trained on extensive human-generated data, these AI agents cannot accurately predict human decisions.
arXiv Detail & Related papers (2024-10-28T17:47:41Z) - DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life [46.11149958010897]
We present DailyDilemmas, a dataset of 1,360 moral dilemmas encountered in everyday life.
Each dilemma includes two possible actions and with each action, the affected parties and human values invoked.
We analyzed these values through the lens of five popular theories inspired by sociology, psychology and philosophy.
arXiv Detail & Related papers (2024-10-03T17:08:52Z) - The Good, the Bad, and the Hulk-like GPT: Analyzing Emotional Decisions of Large Language Models in Cooperation and Bargaining Games [9.82711167146543]
We introduce a novel methodology to study the decision-making of Large Language Models (LLMs)
We show that emotions profoundly impact the performance of LLMs, leading to the development of more optimal strategies.
Surprisingly, emotional prompting, particularly with anger' emotion, can disrupt the "superhuman" alignment of GPT-4.
arXiv Detail & Related papers (2024-06-05T14:08:54Z) - Can Large Language Model Agents Simulate Human Trust Behavior? [81.45930976132203]
We investigate whether Large Language Model (LLM) agents can simulate human trust behavior.
GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior.
We also probe the biases of agent trust and differences in agent trust towards other LLM agents and humans.
arXiv Detail & Related papers (2024-02-07T03:37:19Z) - Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect [0.12041807591122715]
We propose an interactional approach to ethics that is centred on relational and situational factors.
Our work anticipates a set of largely unexplored risks at the level of situated social interaction.
arXiv Detail & Related papers (2024-01-17T09:44:03Z) - Learning Human-like Representations to Enable Learning Human Values [12.628307026004656]
We argue that representational alignment between humans and AI agents facilitates value alignment.
We focus on ethics as one aspect of value alignment and train ML agents using a variety of methods.
arXiv Detail & Related papers (2023-12-21T18:31:33Z) - MoCa: Measuring Human-Language Model Alignment on Causal and Moral
Judgment Tasks [49.60689355674541]
A rich literature in cognitive science has studied people's causal and moral intuitions.
This work has revealed a number of factors that systematically influence people's judgments.
We test whether large language models (LLMs) make causal and moral judgments about text-based scenarios that align with human participants.
arXiv Detail & Related papers (2023-10-30T15:57:32Z) - Heterogeneous Value Alignment Evaluation for Large Language Models [91.96728871418]
Large Language Models (LLMs) have made it crucial to align their values with those of humans.
We propose a Heterogeneous Value Alignment Evaluation (HVAE) system to assess the success of aligning LLMs with heterogeneous values.
arXiv Detail & Related papers (2023-05-26T02:34:20Z) - Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards
and Ethical Behavior in the MACHIAVELLI Benchmark [61.43264961005614]
We develop a benchmark of 134 Choose-Your-Own-Adventure games containing over half a million rich, diverse scenarios.
We evaluate agents' tendencies to be power-seeking, cause disutility, and commit ethical violations.
Our results show that agents can both act competently and morally, so concrete progress can be made in machine ethics.
arXiv Detail & Related papers (2023-04-06T17:59:03Z) - Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement
Learning [4.2050490361120465]
A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents.
We present a systematic analysis of the choices made by intrinsically-motivated RL agents whose rewards are based on moral theories.
We analyze the impact of different types of morality on the emergence of cooperation, defection or exploitation.
arXiv Detail & Related papers (2023-01-20T09:36:42Z) - When to Make Exceptions: Exploring Language Models as Accounts of Human
Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.