The Importance of Credo in Multiagent Learning
- URL: http://arxiv.org/abs/2204.07471v2
- Date: Wed, 12 Apr 2023 15:04:45 GMT
- Title: The Importance of Credo in Multiagent Learning
- Authors: David Radke, Kate Larson, Tim Brecht
- Abstract summary: We propose a model for multi-objective optimization, a credo, for agents in a system that are configured into multiple groups.
Our results indicate that the interests of teammates, or the entire system, are not required to be fully aligned for achieving globally beneficial outcomes.
- Score: 5.334505575267924
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We propose a model for multi-objective optimization, a credo, for agents in a
system that are configured into multiple groups (i.e., teams). Our model of
credo regulates how agents optimize their behavior for the groups they belong
to. We evaluate credo in the context of challenging social dilemmas with
reinforcement learning agents. Our results indicate that the interests of
teammates, or the entire system, are not required to be fully aligned for
achieving globally beneficial outcomes. We identify two scenarios without full
common interest that achieve high equality and significantly higher mean
population rewards compared to when the interests of all agents are aligned.
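The core idea — an agent optimizing a blend of its own, its team's, and the system's interests — can be sketched as a weighted reward mixture. This is a minimal illustration, not the paper's exact formulation; the function name and weight vector are hypothetical.

```python
# Hypothetical sketch: a "credo" vector mixes self-, team-, and
# system-level reward signals into one scalar the agent optimizes.

def credo_reward(r_self, r_team, r_system, credo=(1/3, 1/3, 1/3)):
    """Blend reward signals according to a credo vector that sums to 1."""
    w_self, w_team, w_system = credo
    assert abs(w_self + w_team + w_system - 1.0) < 1e-9, "credo must sum to 1"
    return w_self * r_self + w_team * r_team + w_system * r_system

# A fully self-interested agent ignores team and system rewards:
print(credo_reward(2.0, 0.0, 1.0, credo=(1.0, 0.0, 0.0)))   # 2.0
# A partially team-aligned agent trades some self-reward away:
print(credo_reward(2.0, 0.0, 1.0, credo=(0.5, 0.25, 0.25)))  # 1.25
```

Under this reading, the paper's finding is that intermediate credo vectors (not full alignment, i.e. not all weight on the system term) can yield the highest mean population reward.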
Related papers
- AgentVerse: Facilitating Multi-Agent Collaboration and Exploring
Emergent Behaviors [93.38830440346783]
We propose a multi-agent framework that can collaboratively adjust its composition to form a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that the framework can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z) - On the Complexity of Multi-Agent Decision Making: From Learning in Games
to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z) - Learning to Learn Group Alignment: A Self-Tuning Credo Framework with
Multiagent Teams [1.370633147306388]
Mixed incentives among a population with multiagent teams have been shown to have advantages over a fully cooperative system.
We propose a framework where individual learning agents self-regulate their configuration of incentives through various parts of their reward function.
arXiv Detail & Related papers (2023-04-14T18:16:19Z) - Learning From Good Trajectories in Offline Multi-Agent Reinforcement
Learning [98.07495732562654]
Offline multi-agent reinforcement learning (MARL) aims to learn effective multi-agent policies from pre-collected datasets.
An agent learned by offline MARL often inherits random policies present in the dataset, jeopardizing the performance of the entire team.
We propose a novel framework called Shared Individual Trajectories (SIT) to address this problem.
arXiv Detail & Related papers (2022-11-28T18:11:26Z) - Group-Agent Reinforcement Learning [12.915860504511523]
The reinforcement learning process of each agent can benefit greatly when multiple geographically distributed agents perform their separate RL tasks cooperatively.
We propose a distributed RL framework called DDAL (Decentralised Distributed Asynchronous Learning) designed for group-agent reinforcement learning (GARL).
arXiv Detail & Related papers (2022-02-10T16:40:59Z) - Generalization in Cooperative Multi-Agent Systems [49.16349318581611]
We study the theoretical underpinnings of Combinatorial Generalization (CG) for cooperative multi-agent systems.
CG is a highly desirable trait for autonomous systems as it can increase their utility and deployability across a wide range of applications.
arXiv Detail & Related papers (2022-01-31T21:39:56Z) - Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
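The mechanism described above — agents giving rewards directly to other agents — can be illustrated with a simple accounting of incentives received and given. This is a hedged sketch only: the function name, the cost term, and the matrix layout are illustrative assumptions, not the paper's actual learned incentive function.

```python
# Illustrative sketch of peer-to-peer reward gifting: each agent's
# effective reward is its environment reward, plus incentives received
# from peers, minus a small (assumed) cost for incentives it gives.

def effective_rewards(env_rewards, incentives, cost_coeff=0.01):
    """incentives[i][j] = reward agent i gives to agent j (i != j)."""
    n = len(env_rewards)
    out = []
    for j in range(n):
        received = sum(incentives[i][j] for i in range(n) if i != j)
        given = sum(incentives[j][k] for k in range(n) if k != j)
        out.append(env_rewards[j] + received - cost_coeff * given)
    return out

# Agent 0 gifts 0.5 to agent 1 and pays a small cost for doing so:
print(effective_rewards([1.0, 0.0], [[0.0, 0.5], [0.0, 0.0]]))
# [0.995, 0.5]
```

In the paper itself the incentive function is learned, so the gifted amounts would be outputs of a trained network rather than fixed values as here.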
arXiv Detail & Related papers (2020-06-10T20:12:38Z) - Randomized Entity-wise Factorization for Multi-Agent Reinforcement
Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.