Deconstructing Cooperation and Ostracism via Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2310.04623v1
- Date: Fri, 6 Oct 2023 23:18:55 GMT
- Title: Deconstructing Cooperation and Ostracism via Multi-Agent Reinforcement Learning
- Authors: Atsushi Ueshima, Shayegan Omidshafiei, Hirokazu Shirado
- Abstract summary: We show that network rewiring facilitates mutual cooperation even when one agent always offers cooperation.
We also find that ostracism alone is not sufficient to make cooperation emerge.
Our findings provide insights into the conditions and mechanisms necessary for the emergence of cooperation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperation is challenging in biological systems, human societies, and
multi-agent systems in general. While a group can benefit when everyone
cooperates, it is tempting for each agent to act selfishly instead. Prior human
studies show that people can overcome such social dilemmas while choosing
interaction partners, i.e., strategic network rewiring. However, little is
known about how agents, including humans, can learn about cooperation from
strategic rewiring and vice versa. Here, we perform multi-agent reinforcement
learning simulations in which two agents play the Prisoner's Dilemma game
iteratively. Each agent has two policies: one controls whether to cooperate or
defect; the other controls whether to rewire connections with another agent.
This setting enables us to disentangle complex causal dynamics between
cooperation and network rewiring. We find that network rewiring facilitates
mutual cooperation even when one agent always offers cooperation, which is
vulnerable to free-riding. We then confirm that the network-rewiring effect is
exerted through agents' learning of ostracism, that is, connecting to
cooperators and disconnecting from defectors. However, we also find that
ostracism alone is not sufficient to make cooperation emerge. Instead,
ostracism emerges from the learning of cooperation, and existing cooperation is
subsequently reinforced due to the presence of ostracism. Our findings provide
insights into the conditions and mechanisms necessary for the emergence of
cooperation with network rewiring.
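The setup described in the abstract, two agents playing the iterated Prisoner's Dilemma, each with one policy for cooperating or defecting and a second policy for keeping or cutting the connection, can be sketched in a few lines. This is a minimal illustration, not the paper's actual simulation: the payoff values, the stateless (bandit-style) Q-learning rule, and all hyperparameters are assumptions chosen for clarity.

```python
import random

# Standard Prisoner's Dilemma payoffs with T > R > P > S (assumed values;
# the paper's exact payoff matrix is not reproduced here).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Agent:
    """Tabular Q-learner with two separate policies: an action policy
    (cooperate/defect) and a rewiring policy (keep/cut the link)."""
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.alpha, self.epsilon = alpha, epsilon
        self.q_act = {"C": 0.0, "D": 0.0}        # cooperation policy
        self.q_wire = {"keep": 0.0, "cut": 0.0}  # rewiring policy

    def choose(self, q):
        # Epsilon-greedy selection over a Q-table.
        if random.random() < self.epsilon:
            return random.choice(list(q))
        return max(q, key=q.get)

    def update(self, q, key, reward):
        # Incremental update toward the observed reward.
        q[key] += self.alpha * (reward - q[key])

def play(a, b, rounds=5000):
    for _ in range(rounds):
        # Both agents first decide whether to keep the connection;
        # the game is only played if both keep it (ostracism severs play).
        wa, wb = a.choose(a.q_wire), b.choose(b.q_wire)
        if wa != "keep" or wb != "keep":
            a.update(a.q_wire, wa, 0.0)  # disconnected: no payoff
            b.update(b.q_wire, wb, 0.0)
            continue
        ca, cb = a.choose(a.q_act), b.choose(b.q_act)
        ra, rb = PAYOFF[(ca, cb)]
        a.update(a.q_act, ca, ra); a.update(a.q_wire, wa, ra)
        b.update(b.q_act, cb, rb); b.update(b.q_wire, wb, rb)
    return a, b

random.seed(0)
a, b = play(Agent(), Agent())
```

Because both Q-tables are stateless here, the sketch only captures the structure of the two coupled policies; the paper's finding that ostracism emerges from learned cooperation, and then reinforces it, requires the richer learning dynamics of the full simulation.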
Related papers
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Intrinsic fluctuations of reinforcement learning promote cooperation [0.0]
Cooperating in social dilemma situations is vital for animals, humans, and machines.
We demonstrate which and how individual elements of the multi-agent learning setting lead to cooperation.
arXiv Detail & Related papers (2022-09-01T09:14:47Z)
- The art of compensation: how hybrid teams solve collective risk dilemmas [6.081979963786028]
We study the evolutionary dynamics of cooperation in a hybrid population made of both adaptive and fixed-behavior agents.
We show how the first learn to adapt their behavior to compensate for the behavior of the latter.
arXiv Detail & Related papers (2022-05-13T13:23:42Z)
- Exploring the Benefits of Teams in Multiagent Learning [5.334505575267924]
We propose a new model of multiagent teams for reinforcement learning (RL) agents, inspired by organizational psychology (OP).
We find that agents divided into teams develop cooperative pro-social policies despite incentives to not cooperate.
Agents are better able to coordinate and learn emergent roles within their teams and achieve higher rewards compared to when the interests of all agents are aligned.
arXiv Detail & Related papers (2022-05-04T21:14:03Z)
- Cooperative Artificial Intelligence [0.0]
We argue that there is a need for research on the intersection between game theory and artificial intelligence.
We discuss the problem of how an external agent can promote cooperation between artificial learners.
We show that the resulting cooperative outcome is stable in certain games even if the planning agent is turned off.
arXiv Detail & Related papers (2022-02-20T16:50:37Z)
- Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria [57.74495091445414]
Social deduction games offer an avenue to study how individuals might learn to synthesize potentially unreliable information about others.
In this work, we present Hidden Agenda, a two-team social deduction game that provides a 2D environment for studying learning agents in scenarios of unknown team alignment.
Reinforcement learning agents trained in Hidden Agenda show that agents can learn a variety of behaviors, including partnering and voting without need for communication in natural language.
arXiv Detail & Related papers (2022-01-05T20:54:10Z)
- Adversarial Attacks in Cooperative AI [0.0]
Single-agent reinforcement learning algorithms in a multi-agent environment are inadequate for fostering cooperation.
Recent work in adversarial machine learning shows that models can be easily deceived into making incorrect decisions.
Cooperative AI might introduce new weaknesses not investigated in previous machine learning research.
arXiv Detail & Related papers (2021-11-29T07:34:12Z)
- On Emergent Communication in Competitive Multi-Agent Teams [116.95067289206919]
We investigate whether competition for performance from an external, similar agent team could act as a social influence.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
arXiv Detail & Related papers (2020-03-04T01:14:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.