The art of compensation: how hybrid teams solve collective risk dilemmas
- URL: http://arxiv.org/abs/2205.06632v1
- Date: Fri, 13 May 2022 13:23:42 GMT
- Title: The art of compensation: how hybrid teams solve collective risk dilemmas
- Authors: Inês Terrucha, Elias Fernández Domingos, Francisco C. Santos,
Pieter Simoens and Tom Lenaerts
- Abstract summary: We study the evolutionary dynamics of cooperation in a hybrid population made of both adaptive and fixed-behavior agents.
We show how the former learn to adapt their behavior to compensate for the behavior of the latter.
- Score: 6.081979963786028
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is widely known how the human ability to cooperate has influenced the
thriving of our species. However, as we move towards a hybrid human-machine
future, it is still unclear how the introduction of AI agents in our social
interactions will affect this cooperative capacity. Within the context of the
one-shot collective risk dilemma, where enough members of a group must
cooperate in order to avoid a collective disaster, we study the evolutionary
dynamics of cooperation in a hybrid population made of both adaptive and
fixed-behavior agents. Specifically, we show how the former learn to adapt their
behavior to compensate for the behavior of the latter. The less the
(artificially) fixed agents cooperate, the more the adaptive population is
motivated to cooperate, and vice-versa, especially when the risk is higher. By
pinpointing how adaptive agents avoid their share of costly cooperation if the
fixed-behavior agents implement a cooperative policy, our work hints towards an
unbalanced hybrid world. On the one hand, this means that introducing cooperative
AI agents into our society might unburden human efforts. Nevertheless, it is
important to note that costless artificial cooperation might not be realistic;
rather than deploying AI systems that carry the cooperative effort, we must
focus on mechanisms that nudge shared cooperation among all members of the
hybrid system.
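To make the setting concrete, the sketch below illustrates the payoff structure of a one-shot collective risk dilemma played by a hybrid group of fixed-behavior and adaptive agents. It is a minimal illustration under stated assumptions, not the authors' implementation; the function name, cost fraction, threshold, and risk values are all hypothetical.

```python
# Minimal sketch of a one-shot collective risk dilemma (CRD) with a hybrid group.
# All names and parameter values are illustrative assumptions, not the paper's code.
import random

def crd_payoffs(actions, endowment=1.0, cost=0.5, threshold=None, risk=0.9):
    """Return each player's payoff in a one-shot CRD.

    actions   : list of booleans, True = cooperate (contribute cost * endowment)
    threshold : number of cooperators needed to avert the collective disaster
                (defaults to half the group, an assumed choice)
    risk      : probability of losing the remaining endowment if the
                threshold is not reached
    """
    n = len(actions)
    if threshold is None:
        threshold = n // 2
    contributions = [cost * endowment if a else 0.0 for a in actions]
    remaining = [endowment - c for c in contributions]
    if sum(actions) >= threshold:
        return remaining              # disaster averted, everyone keeps what is left
    if random.random() < risk:
        return [0.0] * n              # collective loss: everyone loses everything
    return remaining                  # threshold missed, but the group got lucky

# Hybrid group: two fixed-behavior (artificial) defectors and four adaptive agents.
# The less the fixed agents cooperate, the more of the threshold the adaptive
# agents must cover themselves to avoid risking the whole endowment.
fixed_agents = [False, False]
adaptive_agents = [True, True, True, True]
print(crd_payoffs(fixed_agents + adaptive_agents, threshold=4, risk=0.9))
```

In this toy run the adaptive agents compensate for the defecting fixed agents by supplying all four required contributions; if the fixed agents cooperated instead, the adaptive agents could contribute less and keep more of their endowment, which is the compensation effect the abstract describes.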
Related papers
- Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games [47.8980880888222]
Multi-agent scenarios often involve mixed motives, demanding altruistic agents capable of self-protection against potential exploitation.
We propose LASE (Learning to balance Altruism and Self-interest based on Empathy).
LASE allocates a portion of its rewards to co-players as gifts, with this allocation adapting dynamically based on the social relationship.
arXiv Detail & Related papers (2024-10-10T12:30:56Z) - Emergent Cooperation under Uncertain Incentive Alignment [7.906156032228933]
We study how cooperation can arise among reinforcement learning agents in scenarios characterised by infrequent encounters.
We study the effects of mechanisms, such as reputation and intrinsic rewards, that have been proposed in the literature to foster cooperation in mixed-motives environments.
arXiv Detail & Related papers (2024-01-23T10:55:54Z) - Deconstructing Cooperation and Ostracism via Multi-Agent Reinforcement
Learning [3.3751859064985483]
We show that network rewiring facilitates mutual cooperation even when one agent always offers cooperation.
We also find that ostracism alone is not sufficient to make cooperation emerge.
Our findings provide insights into the conditions and mechanisms necessary for the emergence of cooperation.
arXiv Detail & Related papers (2023-10-06T23:18:55Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Investigating the Impact of Direct Punishment on the Emergence of Cooperation in Multi-Agent Reinforcement Learning Systems [2.4555276449137042]
Problems of cooperation are omnipresent within human society.
As the use of AI becomes more pervasive throughout society, the need for socially intelligent agents is becoming increasingly evident.
This paper presents a comprehensive analysis and evaluation of the behaviors and learning dynamics associated with direct punishment, third-party punishment, partner selection, and reputation.
arXiv Detail & Related papers (2023-01-19T19:33:54Z) - PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI
Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z) - Learning Collective Action under Risk Diversity [68.88688248278102]
We investigate the consequences of risk diversity in groups of agents learning to play collective risk dilemmas.
We show that risk diversity significantly reduces overall cooperation and hinders collective target achievement.
Our results highlight the need to align risk perceptions among agents or to develop new learning techniques.
arXiv Detail & Related papers (2022-01-30T18:21:21Z) - Normative Disagreement as a Challenge for Cooperative AI [56.34005280792013]
We argue that typical cooperation-inducing learning algorithms fail to cooperate in bargaining problems.
We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation.
arXiv Detail & Related papers (2021-11-27T11:37:42Z) - Cooperation and Reputation Dynamics with Reinforcement Learning [6.219565750197311]
We show how reputations can be used as a way to establish trust and cooperation.
We propose two mechanisms to alleviate convergence to undesirable equilibria.
We show how our results relate to the literature in Evolutionary Game Theory.
arXiv Detail & Related papers (2021-02-15T12:48:56Z) - Open Problems in Cooperative AI [21.303564222227727]
Research aims to study the many aspects of the problems of cooperation and to innovate in AI to contribute to solving these problems.
This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms.
arXiv Detail & Related papers (2020-12-15T21:39:50Z) - Cooperative Inverse Reinforcement Learning [64.60722062217417]
We propose a formal definition of the value alignment problem as cooperative inverse reinforcement learning (CIRL).
A CIRL problem is a cooperative, partial-information game with two agents, human and robot; both are rewarded according to the human's reward function, but the robot does not initially know what this is.
In contrast to classical IRL, where the human is assumed to act optimally in isolation, optimal CIRL solutions produce behaviors such as active teaching, active learning, and communicative actions.
arXiv Detail & Related papers (2016-06-09T22:39:54Z)
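As a reading aid for the CIRL definition above, the following is a minimal, hypothetical sketch of the game tuple it describes: a two-agent cooperative, partial-information game in which both agents are rewarded by the human's reward parameters theta, which the robot does not observe. The field names and types are assumptions for illustration, not the paper's notation or API.

```python
# Hypothetical sketch of a CIRL game as a data structure; names are illustrative.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class CIRLGame:
    states: Sequence                 # world states S
    human_actions: Sequence          # human's action set A_H
    robot_actions: Sequence          # robot's action set A_R
    transition: Callable             # T(s, a_h, a_r) -> next state
    reward: Callable                 # R(s, a_h, a_r, theta): shared by both agents
    theta_prior: Callable            # robot's prior belief over the unknown theta

# Both agents maximise the same expected reward R(s, a_h, a_r, theta). Because
# only the human knows theta, good robot behavior amounts to active learning of
# theta and good human behavior amounts to active teaching, as the summary notes.
```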