Transferable Multi-Agent Reinforcement Learning with Dynamic
Participating Agents
- URL: http://arxiv.org/abs/2208.02424v1
- Date: Thu, 4 Aug 2022 03:16:42 GMT
- Title: Transferable Multi-Agent Reinforcement Learning with Dynamic
Participating Agents
- Authors: Xuting Tang, Jia Xu, Shusen Wang
- Abstract summary: We propose a network architecture with a few-shot learning algorithm that allows the number of agents to vary during centralized training.
Our experiments show that using the proposed network architecture and algorithm, model adaptation when new agents join can be 100+ times faster than the baseline.
- Score: 19.52531351740528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study multi-agent reinforcement learning (MARL) with centralized training
and decentralized execution. During the training, new agents may join, and
existing agents may unexpectedly leave the training. In such situations, a
standard deep MARL model must be trained again from scratch, which is very
time-consuming. To tackle this problem, we propose a special network
architecture with a few-shot learning algorithm that allows the number of
agents to vary during centralized training. In particular, when a new agent
joins the centralized training, our few-shot learning algorithm trains its
policy network and value network using a small number of samples; when an agent
leaves the training, the training process of the remaining agents is not
affected. Our experiments show that using the proposed network architecture and
algorithm, model adaptation when new agents join can be 100+ times faster than
the baseline. Our work is applicable to any setting, including cooperative,
competitive, and mixed.
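The abstract does not spell out the architecture, but the stated requirements (a variable number of agents during centralized training, plus fast few-shot adaptation for a newly joined agent) can be illustrated with a common design: a shared observation encoder whose output is mean-pooled over teammates, so the policy input has a fixed size regardless of agent count, with a small per-agent head that can be fitted from a few samples. This is a minimal sketch under those assumptions, not the paper's actual model; all names (`W_enc`, `policy_logits`, `new_head`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, EMB_DIM, N_ACTIONS = 4, 8, 3

# Shared encoder applied to every teammate's observation; mean-pooling
# makes the policy input independent of how many agents are present,
# so agents can join or leave without resizing the network.
W_enc = rng.normal(size=(OBS_DIM, EMB_DIM))

def policy_logits(own_obs, teammate_obs, W_head):
    """Permutation-invariant policy: own embedding + mean-pooled teammates."""
    own = own_obs @ W_enc
    if len(teammate_obs) > 0:
        pooled = np.mean([o @ W_enc for o in teammate_obs], axis=0)
    else:
        pooled = np.zeros(EMB_DIM)
    return np.concatenate([own, pooled]) @ W_head  # shape (N_ACTIONS,)

# Each agent keeps a small head of its own. A newly joined agent gets a
# fresh head that can be trained from a handful of samples while the
# shared encoder and the other agents' heads stay frozen; an agent that
# leaves simply stops contributing to the pool.
def new_head():
    return rng.normal(size=(2 * EMB_DIM, N_ACTIONS)) * 0.1

heads = [new_head() for _ in range(3)]  # three agents in training
heads.append(new_head())                # a fourth agent joins

obs = [rng.normal(size=OBS_DIM) for _ in range(4)]
logits = [policy_logits(obs[i], obs[:i] + obs[i + 1:], heads[i])
          for i in range(4)]
assert all(l.shape == (N_ACTIONS,) for l in logits)
```

Because only the new agent's head is trained at join time, adaptation touches far fewer parameters than retraining from scratch, which is consistent with the large speedup the abstract reports.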
Related papers
- Communication-Efficient Training Workload Balancing for Decentralized Multi-Agent Learning [20.683081355473664]
Communication-Efficient Decentralized Multi-agent Learning (ComDML) balances workload among agents to reduce training time without a central server.
Decentralized Multi-agent Learning (DML) enables collaborative model training while preserving data privacy.
ComDML balances workload among agents through a decentralized approach.
ComDML can significantly reduce the overall training time while maintaining model accuracy, compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-05-01T20:03:37Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
Diffusion models (DMs) have recently achieved great success in various scenarios, including offline reinforcement learning.
We propose MADiff, a novel generative multi-agent learning framework to tackle this problem.
Our experiments show the superior performance of MADiff compared to baseline algorithms in a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z)
- Group-Agent Reinforcement Learning [12.915860504511523]
The reinforcement learning process of each agent can benefit greatly when multiple geographically distributed agents perform their separate RL tasks cooperatively.
We propose a distributed RL framework called DDAL (Decentralised Distributed Asynchronous Learning) designed for group-agent reinforcement learning (GARL).
arXiv Detail & Related papers (2022-02-10T16:40:59Z)
- K-nearest Multi-agent Deep Reinforcement Learning for Collaborative Tasks with a Variable Number of Agents [13.110291070230815]
We propose a new deep reinforcement learning algorithm for multi-agent collaborative tasks with a variable number of agents.
We demonstrate the application of our algorithm using a fleet management simulator developed by Hitachi to generate realistic scenarios in a production site.
arXiv Detail & Related papers (2022-01-18T16:14:24Z)
- Evaluating Generalization and Transfer Capacity of Multi-Agent Reinforcement Learning Across Variable Number of Agents [0.0]
Multi-agent Reinforcement Learning (MARL) problems often require cooperation among agents in order to solve a task.
Centralization and decentralization are two approaches used for cooperation in MARL.
We adopt the centralized training with decentralized execution paradigm and investigate the generalization and transfer capacity of the trained models across a variable number of agents.
arXiv Detail & Related papers (2021-11-28T15:29:46Z)
- Mean-Field Multi-Agent Reinforcement Learning: A Decentralized Network Approach [6.802025156985356]
This paper proposes a framework called localized training and decentralized execution to study MARL with a network of states.
The key idea is to exploit the homogeneity of agents and regroup them according to their states, leading to the formulation of a networked Markov decision process.
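The regrouping idea can be sketched as follows. This is a hypothetical illustration only (the agent names and states are made up): homogeneous agents are bucketed by their current discrete state, so the learner tracks how many agents occupy each state rather than each agent individually, which is the basis of a mean-field, networked-MDP view.

```python
from collections import defaultdict

# Hypothetical agents and their current (discrete) states.
agent_states = {"a1": "s0", "a2": "s1", "a3": "s0", "a4": "s2"}

# Regroup homogeneous agents by state.
groups = defaultdict(list)
for agent, state in agent_states.items():
    groups[state].append(agent)

# The mean-field view only needs the occupancy count per state.
occupancy = {state: len(members) for state, members in groups.items()}
print(occupancy)  # {'s0': 2, 's1': 1, 's2': 1}
```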
arXiv Detail & Related papers (2021-08-05T16:52:36Z)
- MALib: A Parallel Framework for Population-based Multi-agent Reinforcement Learning [61.28547338576706]
Population-based multi-agent reinforcement learning (PB-MARL) refers to the series of methods nested with reinforcement learning (RL) algorithms.
We present MALib, a scalable and efficient computing framework for PB-MARL.
arXiv Detail & Related papers (2021-06-05T03:27:08Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible fully decentralized actor-critic MARL framework, which can handle large-scale general cooperative multi-agent setting.
Our framework achieves scalability and stability in large-scale environments while reducing information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
- Scalable Multi-Agent Inverse Reinforcement Learning via Actor-Attention-Critic [54.2180984002807]
Multi-agent adversarial inverse reinforcement learning (MA-AIRL) is a recent approach that applies single-agent AIRL to multi-agent problems.
We propose a multi-agent inverse RL algorithm that is more sample-efficient and scalable than previous works.
arXiv Detail & Related papers (2020-02-24T20:30:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.