Stubborn: An Environment for Evaluating Stubbornness between Agents with Aligned Incentives
- URL: http://arxiv.org/abs/2304.12280v2
- Date: Fri, 28 Apr 2023 16:21:35 GMT
- Title: Stubborn: An Environment for Evaluating Stubbornness between Agents with Aligned Incentives
- Authors: Ram Rachum, Yonatan Nakar, Reuth Mirsky
- Abstract summary: We present Stubborn, an environment for evaluating stubbornness between agents with fully-aligned incentives.
In our preliminary results, the agents learn to use their partner's stubbornness as a signal for improving the choices that they make in the environment.
- Score: 4.022057598291766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research in multi-agent reinforcement learning (MARL) has shown
success in learning social behavior and cooperation. Social dilemmas between
agents in mixed-sum settings have been studied extensively, but there is little
research into social dilemmas in fully cooperative settings, where agents have
no prospect of gaining reward at another agent's expense.
While fully-aligned interests are conducive to cooperation between agents,
they do not guarantee it. We propose a measure of "stubbornness" between agents
that aims to capture the human social behavior from which it takes its name: a
gradually escalating and potentially disastrous disagreement. We would
like to promote research into the tendency of agents to be stubborn, the
reactions of counterpart agents, and the resulting social dynamics.
In this paper we present Stubborn, an environment for evaluating stubbornness
between agents with fully-aligned incentives. In our preliminary results, the
agents learn to use their partner's stubbornness as a signal for improving the
choices that they make in the environment.
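The abstract does not spell out how the stubbornness measure is computed, so the following is only a minimal illustrative sketch: it assumes a two-agent setting where "stubbornness" can be read off joint action histories as the length of a disagreement streak. The function name and the conflict rule (actions differ) are assumptions for this sketch, not the paper's definition.

```python
# Illustrative only: the conflict rule (actions differ) and the streak-based
# measure are assumptions for this sketch, not the paper's definition.
from typing import List, Optional, Tuple

def stubbornness_streaks(joint_actions: List[Tuple[int, int]]) -> Tuple[int, int]:
    """Final disagreement-streak length for each of two agents.

    A round "conflicts" when the two actions differ; an agent's streak grows
    when it repeats its own previous action while the conflict persists.
    """
    streak_a = streak_b = 0
    prev: Optional[Tuple[int, int]] = None
    for a, b in joint_actions:
        if prev is not None and a != b:
            streak_a = streak_a + 1 if a == prev[0] else 0
            streak_b = streak_b + 1 if b == prev[1] else 0
        else:
            streak_a = streak_b = 0  # agreement (or the first round) resets both
        prev = (a, b)
    return streak_a, streak_b

# Both agents insist on conflicting actions for three rounds:
print(stubbornness_streaks([(0, 1), (0, 1), (0, 1)]))  # -> (2, 2)
```

Under this toy reading, a growing streak is the "gradually escalating" disagreement, and a partner can use the streak length as the kind of signal the abstract describes.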
Related papers
- Can Agents Spontaneously Form a Society? Introducing a Novel Architecture for Generative Multi-Agents to Elicit Social Emergence [0.11249583407496219]
We introduce a generative agent architecture called ITCMA-S, which includes a basic framework for individual agents and a framework that supports social interactions among multiple agents.
This architecture enables agents to identify and filter out behaviors that are detrimental to social interactions, guiding them to choose more favorable actions.
arXiv Detail & Related papers (2024-09-10T13:39:29Z) - Dynamics of Moral Behavior in Heterogeneous Populations of Learning Agents [3.7414804164475983]
We study the learning dynamics of morally heterogeneous populations interacting in a social dilemma setting.
We observe several types of non-trivial interactions between pro-social and anti-social agents.
We find that certain types of moral agents are able to steer selfish agents towards more cooperative behavior (a toy version of this kind of reward mixing is sketched after this entry).
arXiv Detail & Related papers (2024-03-07T04:12:24Z) - AntEval: Evaluation of Social Interaction Competencies in LLM-Driven
Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z) - Situation-Dependent Causal Influence-Based Cooperative Multi-agent
Reinforcement Learning [18.054709749075194]
We propose a novel MARL algorithm named Situation-Dependent Causal Influence-Based Cooperative Multi-agent Reinforcement Learning (SCIC).
Our approach aims to detect inter-agent causal influence in specific situations, using a criterion based on causal intervention and conditional mutual information (a generic version of such an influence measure is sketched after this entry).
The resulting update connects coordinated exploration with intrinsic reward distribution, enhancing overall collaboration and performance.
arXiv Detail & Related papers (2023-12-15T05:09:32Z) - DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement
Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with those of other agents (a minimal version of such a consistency bonus is sketched after this entry).
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Mediated Multi-Agent Reinforcement Learning [3.8581550679584473]
We show how a mediator can be trained alongside agents with policy gradient to maximize social welfare.
Our experiments in matrix and iterative games highlight the potential power of applying mediators in Multi-Agent Reinforcement Learning.
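A minimal sketch of "training a mediator with policy gradient to maximize social welfare", under strong simplifying assumptions: a one-shot Prisoner's Dilemma, agents that always follow recommendations, and welfare taken as the payoff sum. The real setup trains the mediator alongside learning agents; this shows only the REINFORCE step for the mediator itself.

```python
# Simplified sketch: the mediator recommends a joint action and is updated
# with REINFORCE on the welfare (payoff sum). Agents are assumed to comply.
import numpy as np

rng = np.random.default_rng(0)
PAYOFF = np.array([[(3, 3), (0, 5)],
                   [(5, 0), (1, 1)]])  # Prisoner's Dilemma payoffs
logits = np.zeros(4)  # mediator policy over joint actions (a0, a1)

for step in range(2000):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    joint = rng.choice(4, p=probs)
    a0, a1 = divmod(joint, 2)
    welfare = PAYOFF[a0, a1].sum()  # social welfare signal
    grad = -probs
    grad[joint] += 1.0              # gradient of log prob under softmax
    logits += 0.05 * welfare * grad  # REINFORCE update (no baseline)

# Mass should concentrate on joint action 0 = (cooperate, cooperate):
print(probs.round(2))
```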
arXiv Detail & Related papers (2023-06-14T10:31:37Z) - Aligning to Social Norms and Values in Interactive Narratives [89.82264844526333]
We focus on creating agents that act in alignment with socially beneficial norms and values in interactive narratives or text-based games.
We introduce the GALAD agent that uses the social commonsense knowledge present in specially trained language models to contextually restrict its action space to only those actions that are aligned with socially beneficial values (the general filtering pattern is sketched after this entry).
arXiv Detail & Related papers (2022-05-04T09:54:33Z) - A mechanism of Individualistic Indirect Reciprocity with internal and
external dynamics [0.0]
This research proposes a new variant of the Nowak and Sigmund model, focused on agents' attitude.
Using an agent-based model and a data-science method, we show through simulation results that the discriminatory stance of the agents prevails in most cases.
The results also show that when the reputation of others is unknown, with a high obstinacy and high cooperation demand, a heterogeneous society is obtained.
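A loose sketch of an image-scoring donation rule in the Nowak and Sigmund tradition follows; the paper's internal and external dynamics are not described in this summary, so the unknown-reputation branch driven by "obstinacy" is purely an assumed reading.

```python
# Loose image-scoring sketch; the obstinacy rule for unknown reputations is
# an assumption for illustration, not the paper's mechanism.
import random

random.seed(1)

def donate(image_score, demand, obstinacy, last_choice):
    """Donor cooperates if the recipient's image meets its demand.

    image_score: recipient's reputation, or None if unknown.
    obstinacy:   probability of repeating the donor's previous choice
                 when the reputation is unknown.
    """
    if image_score is None:
        if random.random() < obstinacy:
            return last_choice  # stick to the previous stance
        return random.choice([True, False])
    return image_score >= demand

print(donate(image_score=2, demand=1, obstinacy=0.9, last_choice=False))   # True
print(donate(image_score=None, demand=1, obstinacy=0.9, last_choice=False))
```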
arXiv Detail & Related papers (2021-05-28T23:28:50Z) - Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of another agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z) - Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function (the reward-transfer step is sketched after this entry).
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)