Centralized Model and Exploration Policy for Multi-Agent RL
- URL: http://arxiv.org/abs/2107.06434v1
- Date: Wed, 14 Jul 2021 00:34:08 GMT
- Title: Centralized Model and Exploration Policy for Multi-Agent RL
- Authors: Qizhen Zhang, Chris Lu, Animesh Garg, Jakob Foerster
- Abstract summary: Reinforcement learning in partially observable, fully cooperative multi-agent settings (Dec-POMDPs) can be used to address many real-world challenges.
Current RL algorithms for Dec-POMDPs suffer from poor sample complexity.
We propose a model-based algorithm, MARCO, and evaluate it in three cooperative communication tasks, where it improves sample efficiency by up to 20x.
- Score: 13.661446184763117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning (RL) in partially observable, fully cooperative
multi-agent settings (Dec-POMDPs) can in principle be used to address many
real-world challenges such as controlling a swarm of rescue robots or a
synchronous team of quadcopters. However, Dec-POMDPs are significantly harder
to solve than single-agent problems, with the former being NEXP-complete and
the latter, MDPs, being just P-complete. Hence, current RL algorithms for
Dec-POMDPs suffer from poor sample complexity, thereby reducing their
applicability to practical problems where environment interaction is costly.
Our key insight is that using just a polynomial number of samples, one can
learn a centralized model that generalizes across different policies. We can
then optimize the policy within the learned model instead of the true system,
reducing the number of environment interactions. We also learn a centralized
exploration policy within our model that learns to collect additional data in
state-action regions with high model uncertainty. Finally, we empirically
evaluate the proposed model-based algorithm, MARCO, in three cooperative
communication tasks, where it improves sample efficiency by up to 20x.
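The abstract describes MARCO only at a high level: learn a centralized model of the Dec-POMDP, optimize the cooperative policy inside that model, and train a centralized exploration policy that seeks out state-action regions of high model uncertainty. Below is a minimal, hypothetical Python sketch of such a loop, not the authors' implementation: ensemble disagreement stands in for the uncertainty signal (the paper's actual estimator may differ), and `collect_episodes`, `optimize_in_model`, and the `fit`/`predict`/`act` interfaces are placeholder abstractions.

```python
"""Hypothetical sketch of a MARCO-style training loop (illustrative only)."""
import numpy as np


def model_uncertainty(ensemble, obs, joint_action):
    # Disagreement between ensemble members as a proxy for model uncertainty.
    preds = np.stack([m.predict(obs, joint_action) for m in ensemble])
    return float(preds.std(axis=0).mean())


def collect_episodes(env, policy, n_episodes, max_steps=100):
    # Roll out a joint policy in the real environment and record transitions.
    data = []
    for _ in range(n_episodes):
        obs = env.reset()
        for _ in range(max_steps):
            joint_action = policy.act(obs)
            next_obs, reward, done, _ = env.step(joint_action)
            data.append((obs, joint_action, reward, next_obs, done))
            obs = next_obs
            if done:
                break
    return data


def marco(env, ensemble, policy, explore_policy, optimize_in_model,
          iterations=10, episodes_per_iter=5):
    """Centralized model learning plus uncertainty-seeking exploration."""
    dataset = []
    for _ in range(iterations):
        # 1) Gather real data with both the task policy and the exploration
        #    policy (the latter steers towards uncertain state-action regions).
        dataset += collect_episodes(env, policy, episodes_per_iter)
        dataset += collect_episodes(env, explore_policy, episodes_per_iter)

        # 2) Fit the centralized dynamics/reward model (an ensemble) on all
        #    data collected so far.
        for m in ensemble:
            m.fit(dataset)

        # 3) Optimize the cooperative joint policy purely inside the learned
        #    model (reward_fn=None: use the model's learned reward), sparing
        #    interactions with the true system.
        policy = optimize_in_model(ensemble, policy, reward_fn=None)

        # 4) Optimize the exploration policy inside the model to maximize
        #    model uncertainty, so the next round of real data is collected
        #    where the model is least reliable.
        uncertainty_reward = lambda o, a: model_uncertainty(ensemble, o, a)
        explore_policy = optimize_in_model(ensemble, explore_policy,
                                           reward_fn=uncertainty_reward)
    return policy
```

The split between a task policy trained on the model's reward and a separate exploration policy trained on the uncertainty signal mirrors the abstract's description; ensemble size, rollout lengths, and optimizers are deliberately left unspecified here.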
Related papers
- Decentralized Transformers with Centralized Aggregation are Sample-Efficient Multi-Agent World Models [106.94827590977337]
We propose a novel world model for Multi-Agent RL (MARL) that learns decentralized local dynamics for scalability.
We also introduce a Perceiver Transformer as an effective solution to enable centralized representation aggregation.
Results on the StarCraft Multi-Agent Challenge (SMAC) show that it outperforms strong model-free approaches and existing model-based methods in both sample efficiency and overall performance.
arXiv Detail & Related papers (2024-06-22T12:40:03Z)
- Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL [57.745700271150454]
We study the sample complexity of reinforcement learning in Mean-Field Games (MFGs) with model-based function approximation.
We introduce the Partial Model-Based Eluder Dimension (P-MBED), a more effective notion to characterize the model class complexity.
arXiv Detail & Related papers (2024-02-08T14:54:47Z)
- Learning RL-Policies for Joint Beamforming Without Exploration: A Batch Constrained Off-Policy Approach [1.0080317855851213]
We consider the problem of network parameter cancellation optimization for wireless networks.
We show that learning can be achieved from previously collected data alone, without deploying an algorithm in the real world for exploration.
arXiv Detail & Related papers (2023-10-12T18:36:36Z)
- Pretty darn good control: when are approximate solutions better than approximate models [0.0]
We show that DRL algorithms can successfully approximate solutions in a non-linear three-variable model for a fishery.
We show that the policy obtained with DRL is both more profitable and more sustainable than any constant mortality policy.
arXiv Detail & Related papers (2023-08-25T19:58:17Z)
- Partially Observable Multi-Agent Reinforcement Learning with Information Sharing [33.145861021414184]
We study provable multi-agent reinforcement learning (RL) in the general framework of partially observable stochastic games (POSGs).
We advocate leveraging the potential information sharing among agents, a common practice in empirical multi-agent RL and a standard model for multi-agent control systems with communications.
arXiv Detail & Related papers (2023-08-16T23:42:03Z)
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while automatically balancing exploration and exploitation.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z)
- Monte-Carlo Search for an Equilibrium in Dec-POMDPs [11.726372393432195]
Decentralized partially observable Markov decision processes (Dec-POMDPs) formalize the problem of designing individual controllers for a group of collaborative agents.
Seeking a Nash equilibrium -- each agent's policy being a best response to the other agents' -- is more accessible than seeking a globally optimal joint policy.
We show that this approach can be adapted to cases where only a generative model (a simulator) of the Dec-POMDP is available.
arXiv Detail & Related papers (2023-05-19T16:47:46Z)
- Factorization of Multi-Agent Sampling-Based Motion Planning [72.42734061131569]
Modern robotics often involves multiple embodied agents operating within a shared environment.
Standard sampling-based algorithms can be used to search for solutions in the robots' joint space.
We integrate the concept of factorization into sampling-based algorithms, which requires only minimal modifications to existing methods.
We present a general implementation of a factorized SBA, derive an analytical gain in terms of sample complexity for PRM*, and showcase empirical results for RRG.
arXiv Detail & Related papers (2023-04-01T15:50:18Z)
- Fully Decentralized Model-based Policy Optimization for Networked Systems [23.46407780093797]
This work aims to improve data efficiency of multi-agent control by model-based learning.
We consider networked systems where agents are cooperative and communicate only locally with their neighbors.
In our method, each agent learns a dynamics model to predict future states and broadcasts its predictions via communication; the policies are then trained on the model rollouts.
arXiv Detail & Related papers (2022-07-13T23:52:14Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible fully decentralized actor-critic MARL framework, which can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.