Dealing With Non-stationarity in Decentralized Cooperative Multi-Agent
Deep Reinforcement Learning via Multi-Timescale Learning
- URL: http://arxiv.org/abs/2302.02792v2
- Date: Thu, 17 Aug 2023 19:58:03 GMT
- Title: Dealing With Non-stationarity in Decentralized Cooperative Multi-Agent
Deep Reinforcement Learning via Multi-Timescale Learning
- Authors: Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini,
Janarthanan Rajendran, Aditya Mahajan, Sarath Chandar
- Abstract summary: Decentralized cooperative multi-agent deep reinforcement learning (MARL) can be a versatile learning framework.
One of the critical challenges in decentralized deep MARL is the non-stationarity of the learning environment when multiple agents are learning concurrently.
We propose a decentralized cooperative MARL algorithm based on multi-timescale learning.
- Score: 15.935860288840466
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Decentralized cooperative multi-agent deep reinforcement learning (MARL) can
be a versatile learning framework, particularly in scenarios where centralized
training is either not possible or not practical. One of the critical
challenges in decentralized deep MARL is the non-stationarity of the learning
environment when multiple agents are learning concurrently. A commonly used and
efficient scheme for decentralized MARL is independent learning in which agents
concurrently update their policies independently of each other. We first show
that independent learning does not always converge, while sequential learning
where agents update their policies one after another in a sequence is
guaranteed to converge to an agent-by-agent optimal solution. In sequential
learning, when one agent updates its policy, all other agents' policies are
kept fixed, alleviating the challenge of non-stationarity due to simultaneous
updates in other agents' policies. However, it can be slow because only one
agent is learning at any time; therefore, it might not always be practical.
In this work, we propose a decentralized cooperative MARL algorithm based on
multi-timescale learning. In multi-timescale learning, all agents learn
simultaneously, but at different learning rates. In our proposed method, when
one agent updates its policy, other agents are allowed to update their policies
as well, but at a slower rate. This speeds up sequential learning, while also
minimizing non-stationarity caused by other agents updating concurrently.
Multi-timescale learning outperforms state-of-the-art decentralized learning
methods on a set of challenging multi-agent cooperative tasks in the
EPyMARL (Papoudakis et al., 2020) benchmark. This can be seen as a first step
towards more general decentralized cooperative deep MARL methods based on
multi-timescale learning.
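To make the multi-timescale scheme concrete, the following is a minimal, runnable sketch (not the authors' implementation): every agent updates at every step, but only the agent whose turn it currently is uses a fast learning rate, while the others move on a much slower timescale, so the fast learner sees a nearly stationary environment. The toy quadratic team objective, the learning-rate values, and the phase length are illustrative assumptions.

```python
# Sketch of multi-timescale decentralized learning on a toy cooperative task.
# Assumed setup (not from the paper): agents jointly minimize ||sum_i x_i - target||^2.
import numpy as np

rng = np.random.default_rng(0)
NUM_AGENTS, DIM = 3, 4
FAST_LR, SLOW_LR = 1e-1, 1e-3   # assumed values; in practice these are tuned
PHASE_LENGTH = 200              # steps before the "fast" role rotates to the next agent

target = rng.normal(size=DIM)
params = [rng.normal(size=DIM) for _ in range(NUM_AGENTS)]

def team_loss(ps):
    return float(np.sum((sum(ps) - target) ** 2))

for step in range(5 * NUM_AGENTS * PHASE_LENGTH):
    # Rotate which agent currently learns on the fast timescale.
    fast_idx = (step // PHASE_LENGTH) % NUM_AGENTS
    grad = 2.0 * (sum(params) - target)   # same local gradient for every agent here
    for i in range(NUM_AGENTS):
        lr = FAST_LR if i == fast_idx else SLOW_LR
        params[i] = params[i] - lr * grad  # all agents update, at different rates

print("final team loss:", team_loss(params))
```

Compared with strictly sequential learning, the slow learners keep making (small) progress instead of waiting their turn; compared with fully independent learning, the fast learner faces teammates whose policies barely move within its phase.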
Related papers
- MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
Diffusion models (DMs) have recently achieved huge success in various scenarios, including offline reinforcement learning.
We propose MADiff, a novel generative multi-agent learning framework to tackle this problem.
Our experiments show the superior performance of MADiff compared to baseline algorithms in a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z)
- Learning From Good Trajectories in Offline Multi-Agent Reinforcement Learning [98.07495732562654]
Offline multi-agent reinforcement learning (MARL) aims to learn effective multi-agent policies from pre-collected datasets.
An agent trained with offline MARL can inherit a random, low-quality policy present in the dataset, jeopardizing the performance of the entire team.
We propose a novel framework called Shared Individual Trajectories (SIT) to address this problem.
arXiv Detail & Related papers (2022-11-28T18:11:26Z)
- RPM: Generalizable Behaviors for Multi-Agent Reinforcement Learning [90.43925357575543]
We propose Ranked Policy Memory (RPM) to collect diverse multi-agent trajectories for training MARL policies with good generalizability.
RPM enables MARL agents to interact with unseen agents in multi-agent generalization evaluation scenarios and complete given tasks, and it significantly boosts the performance up to 402% on average.
arXiv Detail & Related papers (2022-10-18T07:32:43Z)
- Asynchronous Actor-Critic for Multi-Agent Reinforcement Learning [19.540926205375857]
Synchronizing decisions across multiple agents in realistic settings is problematic since it requires agents to wait for other agents to terminate and communicate about termination reliably.
We formulate a set of asynchronous multi-agent actor-critic methods that allow agents to directly optimize asynchronous policies in three standard training paradigms.
arXiv Detail & Related papers (2022-09-20T16:36:23Z)
- Consensus Learning for Cooperative Multi-Agent Reinforcement Learning [12.74348597962689]
We propose consensus learning for cooperative multi-agent reinforcement learning.
We feed the inferred consensus as an explicit input to the network of agents.
Our proposed method can be extended to various multi-agent reinforcement learning algorithms.
arXiv Detail & Related papers (2022-06-06T12:43:07Z)
- Decentralized Cooperative Multi-Agent Reinforcement Learning with Exploration [35.75029940279768]
We study multi-agent reinforcement learning in the most basic cooperative setting -- Markov teams.
We propose an algorithm in which each agent independently runs a stage-based V-learning style algorithm.
We show that the agents can learn an $\epsilon$-approximate Nash equilibrium policy in at most $\propto\widetilde{O}(1/\epsilon^4)$ episodes.
arXiv Detail & Related papers (2021-10-12T02:45:12Z)
- Locality Matters: A Scalable Value Decomposition Approach for Cooperative Multi-Agent Reinforcement Learning [52.7873574425376]
Cooperative multi-agent reinforcement learning (MARL) faces significant scalability issues due to state and action spaces that are exponentially large in the number of agents.
We propose a novel, value-based multi-agent algorithm called LOMAQ, which incorporates local rewards in the Centralized Training Decentralized Execution paradigm.
arXiv Detail & Related papers (2021-09-22T10:08:15Z)
- Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge? [100.48692829396778]
Independent PPO (IPPO) is a form of independent learning in which each agent simply estimates its local value function.
IPPO's strong performance may be due to its robustness to some forms of environment non-stationarity.
arXiv Detail & Related papers (2020-11-18T20:29:59Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
- Parallel Knowledge Transfer in Multi-Agent Reinforcement Learning [0.2538209532048867]
This paper proposes a novel knowledge transfer framework in MARL, PAT (Parallel Attentional Transfer).
We design two acting modes in PAT: student mode and self-learning mode.
When agents are unfamiliar with the environment, the shared attention mechanism in student mode effectively selects knowledge from other agents to guide each agent's actions.
arXiv Detail & Related papers (2020-03-29T17:42:00Z)