Scalable Multi-Agent Model-Based Reinforcement Learning
- URL: http://arxiv.org/abs/2205.15023v1
- Date: Wed, 25 May 2022 08:35:00 GMT
- Title: Scalable Multi-Agent Model-Based Reinforcement Learning
- Authors: Vladimir Egorov and Aleksei Shpilman
- Abstract summary: We propose a new method called MAMBA which utilizes Model-Based Reinforcement Learning (MBRL) to further leverage centralized training in cooperative environments.
We argue that communication between agents is enough to sustain a world model for each agent during the execution phase, while imaginary rollouts can be used for training, removing the need to interact with the environment.
- Score: 1.95804735329484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent Multi-Agent Reinforcement Learning (MARL) literature has been largely
focused on Centralized Training with Decentralized Execution (CTDE) paradigm.
CTDE has been a dominant approach for both cooperative and mixed environments
due to its capability to efficiently train decentralized policies. While in
mixed environments full autonomy of the agents can be a desirable outcome,
cooperative environments allow agents to share information to facilitate
coordination. Approaches that leverage this technique are usually referred to
as communication methods, since the full autonomy of agents is traded for
better performance. Although communication approaches have shown impressive
results, they do not fully leverage this additional information during the
training phase.
In this paper, we propose a new method called MAMBA which utilizes Model-Based
Reinforcement Learning (MBRL) to further leverage centralized training in
cooperative environments. We argue that communication between agents is enough
to sustain a world model for each agent during the execution phase, while
imaginary rollouts can be used for training, removing the need to interact
with the environment. These properties yield a sample-efficient algorithm that
scales gracefully with the number of agents. We empirically confirm that MAMBA
achieves good performance while reducing the number of interactions with the
environment by up to an order of magnitude compared to model-free
state-of-the-art approaches in the challenging domains of SMAC and Flatland.
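To make the core idea concrete, here is a minimal sketch in PyTorch of a per-agent world model whose transitions are conditioned on messages from the other agents, so that imagined rollouts can replace environment steps during training. Class names, architecture, and parameters are illustrative assumptions, not the authors' actual implementation.

    import torch
    import torch.nn as nn

    class AgentWorldModel(nn.Module):
        """One agent's latent dynamics model. Messages received from the
        other agents condition the transition, which is what lets each
        agent sustain its own world model under decentralized execution."""
        def __init__(self, obs_dim, act_dim, msg_dim, latent_dim=64):
            super().__init__()
            self.encoder = nn.Linear(obs_dim, latent_dim)            # obs -> latent
            self.dynamics = nn.GRUCell(act_dim + msg_dim, latent_dim)
            self.reward_head = nn.Linear(latent_dim, 1)

        def imagine(self, obs, policy, messages, horizon=15):
            """Roll the model forward in latent space: an 'imaginary
            rollout' that requires no environment interaction.
            policy: maps a latent state to an action tensor."""
            latent = torch.tanh(self.encoder(obs))
            trajectory = []
            for _ in range(horizon):
                action = policy(latent)
                latent = self.dynamics(torch.cat([action, messages], -1), latent)
                trajectory.append((latent, self.reward_head(latent)))
            return trajectory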
Related papers
- Communication-Efficient Training Workload Balancing for Decentralized Multi-Agent Learning [20.683081355473664]
Decentralized Multi-agent Learning (DML) enables collaborative model training while preserving data privacy.
ComDML balances workload among agents through a decentralized approach.
ComDML can significantly reduce the overall training time while maintaining model accuracy, compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-05-01T20:03:37Z)
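The balancing rule itself is left abstract above. Purely as a hedged illustration of decentralized workload balancing (a generic scheme, not ComDML's actual pairing algorithm), work can migrate from the agent with the longest estimated finish time to the one with the shortest:

    def rebalance(workloads, speeds, fraction=0.25):
        """Move a fraction of work from the slowest agent to the fastest.
        workloads/speeds: dicts mapping agent id -> work units / throughput."""
        finish = {a: workloads[a] / speeds[a] for a in workloads}
        slow = max(finish, key=finish.get)
        fast = min(finish, key=finish.get)
        if slow != fast:
            moved = workloads[slow] * fraction
            workloads[slow] -= moved
            workloads[fast] += moved
        return workloads

    # Agent "a" is 4x slower, so part of its workload migrates to "b".
    print(rebalance({"a": 100.0, "b": 100.0}, {"a": 1.0, "b": 4.0}))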
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Multi-Agent Reinforcement Learning-Based UAV Pathfinding for Obstacle Avoidance in Stochastic Environment [12.122881147337505]
We propose a novel centralized training with decentralized execution method based on multi-agent reinforcement learning.
In our approach, agents communicate only with the centralized planner to make decentralized decisions online.
We employ multi-step value convergence in multi-agent reinforcement learning to improve training efficiency.
arXiv Detail & Related papers (2023-10-25T14:21:22Z)
- Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models [1.0742675209112622]
Multi-Agent Systems (MAS) are critical for many applications requiring collaboration and coordination with humans.
One major challenge is the simultaneous learning and interaction of independent agents in dynamic environments.
We propose three variants of Multi-Agent Instance-Based Learning (MAIBL) models.
We demonstrate that the MAIBL models exhibit faster learning and achieve better coordination in a dynamic CMOTP task under various reward settings compared to current MADRL models.
arXiv Detail & Related papers (2023-08-18T00:39:06Z)
- Centralized Training with Hybrid Execution in Multi-Agent Reinforcement Learning [7.163485179361718]
We introduce hybrid execution in multi-agent reinforcement learning (MARL), a new paradigm in which agents aim to successfully complete cooperative tasks with arbitrary communication levels at execution time.
We contribute MARO, an approach that makes use of an auto-regressive predictive model, trained in a centralized manner, to estimate missing agents' observations (sketched after this entry).
arXiv Detail & Related papers (2022-10-12T14:58:32Z)
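A hedged sketch of the idea behind MARO follows: a centrally trained autoregressive model predicts the next joint observation, and at execution time predictions stand in for observations of agents whose communication is unavailable. The architecture and names are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ObsPredictor(nn.Module):
        """Autoregressive model over sequences of joint observations."""
        def __init__(self, n_agents, obs_dim, hidden=128):
            super().__init__()
            self.rnn = nn.GRU(n_agents * obs_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_agents * obs_dim)

        def forward(self, joint_obs_seq):        # (B, T, n_agents * obs_dim)
            h, _ = self.rnn(joint_obs_seq)
            return self.head(h)                  # predicted next joint obs

    def complete(joint_obs, predicted, mask):
        """Keep real observations where available (mask = 1) and fall back
        to predictions for agents whose messages were dropped (mask = 0)."""
        return mask * joint_obs + (1 - mask) * predicted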
- MA-Dreamer: Coordination and communication through shared imagination [5.253168177256072]
We present MA-Dreamer, a model-based method that uses both agent-centric and global differentiable models of the environment.
Our experiments show that in long-term speaker-listener tasks and in cooperative games with strong partial-observability, MA-Dreamer finds a solution that makes effective use of coordination.
arXiv Detail & Related papers (2022-04-10T13:54:26Z)
- Locality Matters: A Scalable Value Decomposition Approach for Cooperative Multi-Agent Reinforcement Learning [52.7873574425376]
Cooperative multi-agent reinforcement learning (MARL) faces significant scalability issues due to state and action spaces that are exponentially large in the number of agents.
We propose a novel, value-based multi-agent algorithm called LOMAQ, which incorporates local rewards in the Centralized Training Decentralized Execution (CTDE) paradigm (sketched after this entry).
arXiv Detail & Related papers (2021-09-22T10:08:15Z)
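As a hedged sketch of locality-based value decomposition (the networks and mixing details are our own simplifications, not LOMAQ's exact formulation), each local reward can train only the utilities of agents in its neighborhood, which is what keeps the method scalable:

    import torch

    def local_td_loss(q_nets, obs, next_obs, actions, local_rewards,
                      neighbors, gamma=0.99):
        """q_nets[j](obs) -> Q-values over agent j's actions;
        neighbors[i] lists the agents whose utilities see local reward i."""
        losses = []
        for i, r in enumerate(local_rewards):
            # Only agents in the neighborhood of reward i contribute.
            q = sum(q_nets[j](obs[j])[actions[j]] for j in neighbors[i])
            with torch.no_grad():
                target = r + gamma * sum(q_nets[j](next_obs[j]).max()
                                         for j in neighbors[i])
            losses.append((q - target) ** 2)
        return torch.stack(losses).mean()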
- Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge? [100.48692829396778]
Independent PPO (IPPO) is a form of independent learning in which each agent simply estimates its local value function.
IPPO's strong performance may be due to its robustness to some forms of environment non-stationarity.
arXiv Detail & Related papers (2020-11-18T20:29:59Z)
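The defining feature of IPPO is that each agent's advantages come from its own value function over local observations, with no centralized critic. A minimal sketch of the per-agent critic side (layer sizes are illustrative):

    import torch
    import torch.nn as nn

    obs_dim, n_agents = 8, 3
    # One independent critic per agent; no joint-state, centralized critic.
    critics = [nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
               for _ in range(n_agents)]

    def local_advantages(local_obs, local_rewards, critic, gamma=0.99):
        """One-step TD advantages from the agent's own value estimates.
        local_obs: (T, obs_dim); local_rewards: (T,)."""
        values = critic(local_obs).squeeze(-1)
        next_values = torch.cat([values[1:], values.new_zeros(1)])
        return local_rewards + gamma * next_values.detach() - values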
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion Multi-Agent MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
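A hedged sketch of the adapt-then-combine (diffusion) step that underlies such decentralized strategies: each agent takes a local gradient step, then averages its parameters with its neighbors' via a combination matrix. The topology and weights below are illustrative assumptions.

    import numpy as np

    def diffusion_step(params, grads, A, lr=0.01):
        """params, grads: (n_agents, dim); A: (n_agents, n_agents)
        combination matrix whose rows sum to 1 (neighbor weights)."""
        adapted = params - lr * grads   # local adaptation step
        return A @ adapted              # combine with neighbors

    A = np.array([[0.50, 0.50, 0.00],
                  [0.25, 0.50, 0.25],
                  [0.00, 0.50, 0.50]])  # chain topology over 3 agents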
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
- Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning [55.20040781688844]
QMIX is a novel value-based method that can train decentralised policies in a centralised end-to-end fashion.
We propose the StarCraft Multi-Agent Challenge (SMAC) as a new benchmark for deep multi-agent reinforcement learning.
arXiv Detail & Related papers (2020-03-19T16:51:51Z)
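QMIX's key constraint is that the mixing network's weights are non-negative, so the joint value is monotonic in each agent's utility and the greedy joint action decomposes per agent. A compact sketch of such a monotonic mixer (layer sizes are illustrative, following the published design in spirit rather than exactly):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MonotonicMixer(nn.Module):
        """Mixes per-agent utilities Q_i into Q_tot; abs() on the
        hypernetwork outputs enforces dQ_tot/dQ_i >= 0."""
        def __init__(self, n_agents, state_dim, embed=32):
            super().__init__()
            # Hypernetworks generate mixing weights from the global state.
            self.w1 = nn.Linear(state_dim, n_agents * embed)
            self.b1 = nn.Linear(state_dim, embed)
            self.w2 = nn.Linear(state_dim, embed)
            self.b2 = nn.Linear(state_dim, 1)
            self.n_agents, self.embed = n_agents, embed

        def forward(self, agent_qs, state):
            # agent_qs: (B, n_agents); state: (B, state_dim)
            w1 = self.w1(state).abs().view(-1, self.n_agents, self.embed)
            h = F.elu(agent_qs.unsqueeze(1) @ w1 + self.b1(state).unsqueeze(1))
            w2 = self.w2(state).abs().view(-1, self.embed, 1)
            return (h @ w2).squeeze(-1) + self.b2(state)   # Q_tot: (B, 1)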