Learning to Participate through Trading of Reward Shares
- URL: http://arxiv.org/abs/2301.07416v1
- Date: Wed, 18 Jan 2023 10:25:55 GMT
- Title: Learning to Participate through Trading of Reward Shares
- Authors: Michael Kölle, Tim Matheis, Philipp Altmann and Kyrill Schmid
- Abstract summary: We propose a method inspired by the stock market, where agents have the opportunity to participate in other agents' returns by acquiring reward shares.
Intuitively, an agent may learn to act according to the common interest when being directly affected by the other agents' rewards.
- Score: 1.5484595752241124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Enabling autonomous agents to act cooperatively is an important step to
integrate artificial intelligence in our daily lives. While some methods seek
to stimulate cooperation by letting agents give rewards to others, in this
paper we propose a method inspired by the stock market, where agents have the
opportunity to participate in other agents' returns by acquiring reward shares.
Intuitively, an agent may learn to act according to the common interest when
being directly affected by the other agents' rewards. Empirical results on
the tested general-sum Markov games show that this mechanism promotes
cooperative policies among independently trained agents in social dilemma
situations. Moreover, as demonstrated in a temporally and spatially extended
domain, participation can lead to the development of roles and the division of
subtasks between the agents.
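The core idea of reward participation can be illustrated with a minimal sketch. Here a row-stochastic-by-column share matrix redistributes each agent's environment reward among its shareholders; the matrix layout, normalization, and function name are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def participation_rewards(rewards, shares):
    """Redistribute per-agent rewards according to held reward shares.

    rewards: shape (n,)  -- environment reward of each agent this step.
    shares:  shape (n, n) -- shares[i, j] is the fraction of agent j's
             reward that agent i is entitled to; each column sums to 1,
             so every agent's reward is fully distributed.
    Returns the effective reward each agent trains on.
    """
    shares = np.asarray(shares, dtype=float)
    assert np.allclose(shares.sum(axis=0), 1.0), "each reward must be fully allocated"
    return shares @ np.asarray(rewards, dtype=float)

# Two agents: agent 0 has sold 30% of its reward to agent 1;
# agent 1 keeps 100% of its own reward.
shares = np.array([[0.7, 0.0],
                   [0.3, 1.0]])
print(participation_rewards(np.array([10.0, 2.0]), shares))  # [7. 5.]
```

Because agent 1 now holds a stake in agent 0's returns, its own training signal improves when agent 0 does well, which is the intuition behind learning to act in the common interest.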
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games [47.8980880888222]
Multi-agent scenarios often involve mixed motives, demanding altruistic agents capable of self-protection against potential exploitation.
We propose LASE (Learning to balance Altruism and Self-interest based on Empathy).
LASE allocates a portion of its rewards to co-players as gifts, with this allocation adapting dynamically based on the social relationship.
arXiv Detail & Related papers (2024-10-10T12:30:56Z)
- Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents [2.1301560294088318]
Cooperation between self-interested individuals is a widespread phenomenon in the natural world, but remains elusive in interactions between artificially intelligent agents.
We introduce Reciprocators, reinforcement learning agents which are intrinsically motivated to reciprocate the influence of opponents' actions on their returns.
We show that Reciprocators can be used to promote cooperation in temporally extended social dilemmas during simultaneous learning.
arXiv Detail & Related papers (2024-06-03T06:07:27Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Mediated Multi-Agent Reinforcement Learning [3.8581550679584473]
We show how a mediator can be trained alongside agents with policy gradient to maximize social welfare.
Our experiments in matrix and iterative games highlight the potential power of applying mediators in Multi-Agent Reinforcement Learning.
arXiv Detail & Related papers (2023-06-14T10:31:37Z)
- Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL).
It combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
arXiv Detail & Related papers (2023-03-24T15:12:28Z)
- Stochastic Market Games [10.979093424231532]
We propose to utilize market forces to provide incentives for agents to become cooperative.
As demonstrated in an iterated version of the Prisoner's Dilemma, the proposed market formulation can change the dynamics of the game.
We empirically find that the presence of markets can improve both the overall result and agents' individual returns via their trading activities.
arXiv Detail & Related papers (2022-07-15T10:37:16Z)
- Reliably Re-Acting to Partner's Actions with the Social Intrinsic Motivation of Transfer Empowerment [40.24079015603578]
We consider multi-agent reinforcement learning (MARL) for cooperative communication and coordination tasks.
MARL agents can be brittle because they can overfit their training partners' policies.
Our objective is to bias the learning process towards finding reactive strategies towards other agents' behaviors.
arXiv Detail & Related papers (2022-03-07T13:03:35Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.