Generalization in Cooperative Multi-Agent Systems
- URL: http://arxiv.org/abs/2202.00104v1
- Date: Mon, 31 Jan 2022 21:39:56 GMT
- Title: Generalization in Cooperative Multi-Agent Systems
- Authors: Anuj Mahajan, Mikayel Samvelyan, Tarun Gupta, Benjamin Ellis, Mingfei Sun, Tim Rocktäschel, Shimon Whiteson
- Abstract summary: We study the theoretical underpinnings of Combinatorial Generalization (CG) for cooperative multi-agent systems.
CG is a highly desirable trait for autonomous systems as it can increase their utility and deployability across a wide range of applications.
- Score: 49.16349318581611
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collective intelligence is a fundamental trait shared by several species of
living organisms. It has allowed them to thrive in the diverse environmental
conditions that exist on our planet. From simple organisations in an ant colony
to complex systems in human groups, collective intelligence is vital for
solving complex survival tasks. As is commonly observed, such natural systems
are flexible to changes in their structure. Specifically, they exhibit a high
degree of generalization when the abilities or the total number of agents
changes within a system. We term this phenomenon Combinatorial
Generalization (CG). CG is a highly desirable trait for autonomous systems as
it can increase their utility and deployability across a wide range of
applications. While recent works addressing specific aspects of CG have shown
impressive results on complex domains, they provide no performance guarantees
when generalizing towards novel situations. In this work, we shed light on the
theoretical underpinnings of CG for cooperative multi-agent systems (MAS).
Specifically, we study generalization bounds under a linear dependence of the
underlying dynamics on the agent capabilities, which can be seen as a
generalization of Successor Features to MAS. We then extend these results,
first to Lipschitz and then to arbitrary dependence of the rewards on team
capabilities.
Finally, empirical analysis on various domains using the framework of
multi-agent reinforcement learning highlights important desiderata for
multi-agent algorithms to ensure CG.
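The linear-dependence setting the abstract refers to can be sketched in standard single-agent Successor Features notation; this is an illustration of the idea the paper generalizes to MAS, not the paper's exact multi-agent formulation:

```latex
% Successor Features: assume the reward is linear in a feature map \phi
% with task weights \mathbf{w},
r(s, a, s') = \phi(s, a, s')^\top \mathbf{w},
% so the action-value function factorizes into a policy-dependent part
% \psi^\pi (the successor features) and the task vector \mathbf{w}:
Q^\pi(s, a)
  = \mathbb{E}^\pi\!\left[\sum_{t \ge 0} \gamma^t \phi(s_t, a_t, s_{t+1})
      \,\middle|\, s_0 = s,\, a_0 = a\right]^{\top} \mathbf{w}
  = \psi^\pi(s, a)^\top \mathbf{w}.
```

Because \(\psi^\pi\) is independent of \(\mathbf{w}\), it can be reused across tasks that differ only in their reward weights; the paper's setting replaces the task vector with agent capabilities on which the underlying dynamics depend linearly.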
Related papers
- Enhancing Heterogeneous Multi-Agent Cooperation in Decentralized MARL via GNN-driven Intrinsic Rewards [1.179778723980276]
Multi-agent Reinforcement Learning (MARL) is emerging as a key framework for sequential decision-making and control tasks.
The deployment of these systems in real-world scenarios often requires decentralized training, a diverse set of agents, and learning from infrequent environmental reward signals.
We propose the CoHet algorithm, which utilizes a novel Graph Neural Network (GNN) based intrinsic motivation to facilitate the learning of heterogeneous agent policies.
arXiv Detail & Related papers (2024-08-12T21:38:40Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- Linear Convergence of Independent Natural Policy Gradient in Games with Entropy Regularization [12.612009339150504]
This work focuses on the entropy-regularized independent natural policy gradient (NPG) algorithm in multi-agent reinforcement learning.
We show that, under sufficient entropy regularization, the dynamics of this system converge at a linear rate to the quantal response equilibrium (QRE).
arXiv Detail & Related papers (2024-05-04T22:48:53Z)
- SocialGFs: Learning Social Gradient Fields for Multi-Agent Reinforcement Learning [58.84311336011451]
We propose a novel gradient-based state representation for multi-agent reinforcement learning.
We employ denoising score matching to learn the social gradient fields (SocialGFs) from offline samples.
In practice, we integrate SocialGFs into the widely used multi-agent reinforcement learning algorithms, e.g., MAPPO.
arXiv Detail & Related papers (2024-05-03T04:12:19Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- Learning to Learn Group Alignment: A Self-Tuning Credo Framework with Multiagent Teams [1.370633147306388]
Mixed incentives among a population with multiagent teams have been shown to have advantages over a fully cooperative system.
We propose a framework where individual learning agents self-regulate their configuration of incentives through various parts of their reward function.
arXiv Detail & Related papers (2023-04-14T18:16:19Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
- Individual specialization in multi-task environments with multiagent reinforcement learners [0.0]
There is a growing interest in Multi-Agent Reinforcement Learning (MARL) as the first steps towards building general intelligent agents.
Previous results point towards conditions that increase coordination, efficiency/fairness, and common-pool resource sharing.
We further study coordination in multi-task environments where several rewarding tasks can be performed, so agents do not necessarily need to perform well in all tasks but may, under certain conditions, specialize.
arXiv Detail & Related papers (2019-12-29T15:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.