Loss aversion fosters coordination among independent reinforcement learners
- URL: http://arxiv.org/abs/1912.12633v1
- Date: Sun, 29 Dec 2019 11:22:30 GMT
- Title: Loss aversion fosters coordination among independent reinforcement learners
- Authors: Marco Jerome Gasparrini, Martí Sánchez-Fibla
- Abstract summary: We study the factors that can accelerate the emergence of collaborative behaviours among independent selfish learning agents.
We model two versions of the "Battle of the Exes" game with independent reinforcement learning agents.
We show experimentally that introducing loss aversion fosters cooperation by accelerating its appearance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the factors that can accelerate the emergence of
collaborative behaviours among independent selfish learning agents. We start
from the "Battle of the Exes" (BoE), a spatial repeated game for which human
behavioral data have been obtained (Hawkins and Goldstone, 2016), and which we
find interesting because it considers two cases: a classic game-theoretic
version, called ballistic, in which agents make a single action/decision per
round (equivalent to the Battle of the Sexes), and a spatially continuous
version, called dynamic, in which agents can change their decision as the
round unfolds. We model both versions of the game with independent
reinforcement learning agents and manipulate the reward function, transforming
it into a utility that introduces "loss aversion": the reward an agent obtains
is perceived as less valuable when compared to what the other agent got. We
show experimentally that introducing loss aversion fosters cooperation by
accelerating its appearance, and, in some cases such as the dynamic condition,
by making it possible at all. We suggest that this may be an important factor
in explaining the rapid convergence of human behaviour towards collaboration
reported in the experiment of Hawkins and Goldstone.
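The abstract does not give the exact functional form of the loss-averse utility, so the following Python sketch is only one plausible reading: a penalty on the shortfall between an agent's payoff and its opponent's (in the spirit of Fehr-Schmidt disadvantageous inequity aversion), plugged into an otherwise standard independent Q-learning update. All names and default values here are illustrative, not taken from the paper.

    from collections import defaultdict

    def loss_averse_utility(own_reward, other_reward, loss_aversion=1.0):
        # Perceive the own reward as less valuable whenever the other agent
        # obtained more; `loss_aversion` scales that penalty. With
        # loss_aversion = 0 this reduces to the unmodified reward.
        shortfall = max(0.0, other_reward - own_reward)
        return own_reward - loss_aversion * shortfall

    def q_update(q, state, action, next_state, own_reward, other_reward,
                 alpha=0.1, gamma=0.95, loss_aversion=1.0):
        # Standard independent Q-learning step, except that the transformed
        # utility replaces the raw reward; the agents remain selfish and
        # share nothing but the observed payoffs.
        utility = loss_averse_utility(own_reward, other_reward, loss_aversion)
        best_next = max(q[next_state].values(), default=0.0)
        q[state][action] += alpha * (utility + gamma * best_next - q[state][action])

    # Usage: one Q[state][action] table per agent.
    q = defaultdict(lambda: defaultdict(float))
    q_update(q, state=0, action="high", next_state=1,
             own_reward=1.0, other_reward=4.0)  # a shortfall of 3 is penalized

Because the transform reduces to the raw reward when loss_aversion is zero, the manipulation is easy to ablate against the unmodified game.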
Related papers
- Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents [2.1301560294088318]
Cooperation between self-interested individuals is a widespread phenomenon in the natural world, but remains elusive in interactions between artificially intelligent agents.
We introduce Reciprocators, reinforcement learning agents which are intrinsically motivated to reciprocate the influence of opponents' actions on their returns.
We show that Reciprocators can be used to promote cooperation in temporally extended social dilemmas during simultaneous learning.
arXiv Detail & Related papers (2024-06-03T06:07:27Z)
- Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations [62.48505112245388]
We take an in-depth look at the causal awareness of modern representations of agent interactions.
We show that recent representations are already partially resilient to perturbations of non-causal agents.
We propose a metric learning approach that regularizes latent representations with causal annotations.
arXiv Detail & Related papers (2023-12-07T18:57:03Z)
- The Machine Psychology of Cooperation: Can GPT models operationalise prompts for altruism, cooperation, competitiveness and selfishness in economic games? [0.0]
We investigated the capability of the GPT-3.5 large language model (LLM) to operationalize natural language descriptions of cooperative, competitive, altruistic, and self-interested behavior.
We used a prompt to describe the task environment, following a protocol similar to that used in experimental psychology studies with human subjects.
Our results provide evidence that LLMs can, to some extent, translate natural language descriptions of different cooperative stances into corresponding descriptions of appropriate task behaviour.
arXiv Detail & Related papers (2023-05-13T17:23:16Z)
- Intrinsic fluctuations of reinforcement learning promote cooperation [0.0]
Cooperating in social dilemma situations is vital for animals, humans, and machines.
We demonstrate which and how individual elements of the multi-agent learning setting lead to cooperation.
arXiv Detail & Related papers (2022-09-01T09:14:47Z)
- Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This work proposes a novel reinforcement learning mechanism based on the social impact of rivalry behavior.
Our proposed model aggregates objective and social perception mechanisms to derive a rivalry score that is used to modulate the learning of artificial agents.
arXiv Detail & Related papers (2022-08-22T14:06:06Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
- Moody Learners -- Explaining Competitive Behaviour of Reinforcement Learning Agents [65.2200847818153]
In a competitive scenario, the agent does not only have a dynamic environment but also is directly affected by the opponents' actions.
Observing an agent's Q-values is a common way of explaining its behavior; however, Q-values alone do not show the temporal relation between the selected actions.
arXiv Detail & Related papers (2020-07-30T11:30:42Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
- Agent-Based Simulation of Collective Cooperation: From Experiment to Model [0.0]
We present an experiment to observe what happens when humans pass through a dense static crowd.
We derive a model that incorporates agents' perception and cognitive processing of a situation that needs cooperation.
Agents' ability to successfully get through a dense crowd emerges as an effect of the psychological model.
arXiv Detail & Related papers (2020-05-26T13:29:08Z)
- Multi-Issue Bargaining With Deep Reinforcement Learning [0.0]
This paper evaluates the use of deep reinforcement learning in bargaining games.
Two actor-critic networks were trained for the bidding and acceptance strategy.
Neural agents learn to exploit time-based agents, achieving clear transitions in decision preference values.
They also demonstrate adaptive behavior against different combinations of concession, discount factors, and behavior-based strategies.
arXiv Detail & Related papers (2020-02-18T18:33:46Z)
- Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
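The guiding principle in that last entry lends itself to a compact sketch: reward joint actions whose effect on the world differs from what composing each agent's individual effect would predict. The following is a minimal illustration under that reading, not the paper's implementation; the two state predictions are assumed to come from learned forward models.

    import numpy as np

    def synergy_bonus(next_state_joint, next_state_composed, scale=1.0):
        # Intrinsic reward: distance between the observed effect of the agents
        # acting together and the effect predicted by composing each agent
        # acting alone. A large gap means the joint action achieved something
        # neither agent could have achieved by itself.
        diff = np.asarray(next_state_joint) - np.asarray(next_state_composed)
        return scale * float(np.linalg.norm(diff))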
This list is automatically generated from the titles and abstracts of the papers on this site.