Inverse Factorized Q-Learning for Cooperative Multi-agent Imitation Learning
- URL: http://arxiv.org/abs/2310.06801v1
- Date: Tue, 10 Oct 2023 17:11:20 GMT
- Title: Inverse Factorized Q-Learning for Cooperative Multi-agent Imitation Learning
- Authors: The Viet Bui and Tien Mai and Thanh Hong Nguyen
- Abstract summary: Imitation learning (IL) is the problem of learning to mimic expert behaviors from demonstrations in cooperative multi-agent systems.
We introduce a novel multi-agent IL algorithm designed to address these challenges.
Our approach enables centralized learning by leveraging mixing networks to aggregate decentralized Q functions.
- Score: 13.060023718506917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper concerns imitation learning (IL) (i.e., the problem of learning to
mimic expert behaviors from demonstrations) in cooperative multi-agent systems.
The learning problem under consideration poses several challenges,
characterized by high-dimensional state and action spaces and intricate
inter-agent dependencies. In the single-agent setting, IL can be performed
efficiently through an inverse soft-Q learning process given expert
demonstrations. However, extending this framework to a multi-agent context
introduces the need to simultaneously learn both local value functions to
capture local observations and individual actions, and a joint value function
for exploiting centralized learning. In this work, we introduce a novel
multi-agent IL algorithm designed to address these challenges. Our approach
enables centralized learning by leveraging mixing networks to aggregate
decentralized Q functions. A main advantage of this approach is that the
weights of the mixing networks can be trained using information derived from
global states. We further establish conditions for the mixing networks under
which the multi-agent objective function exhibits convexity within the Q
function space. We present extensive experiments conducted on challenging
competitive and cooperative multi-agent game environments, including an
advanced version of the StarCraft Multi-Agent Challenge (i.e., SMACv2), which
demonstrate the effectiveness of our proposed algorithm compared to existing
state-of-the-art multi-agent IL algorithms.
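Below is a minimal, hypothetical PyTorch sketch of the mixing-network idea described in the abstract: decentralized per-agent Q-networks produce local Q-values, and a mixing network whose weights are generated from the global state aggregates them into a joint Q-value. The class names, layer sizes, and the absolute-value (monotonicity) constraint are illustrative assumptions in the style of QMIX-like factorization, not the authors' exact architecture or their convexity conditions.

```python
import torch
import torch.nn as nn


class AgentQNetwork(nn.Module):
    """Decentralized Q-network: maps a local observation to per-action Q-values."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class StateConditionedMixer(nn.Module):
    """Aggregates per-agent chosen-action Q-values into a joint Q-value.

    Hypernetworks conditioned on the global state generate the mixing weights,
    so centralized information shapes the aggregation; taking the absolute
    value of the weights keeps the joint Q monotone in each agent's Q, one
    common way to constrain the mixer.
    """

    def __init__(self, n_agents: int, state_dim: int, embed: int = 32):
        super().__init__()
        self.n_agents, self.embed = n_agents, embed
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed)
        self.hyper_b1 = nn.Linear(state_dim, embed)
        self.hyper_w2 = nn.Linear(state_dim, embed)
        self.hyper_b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        b = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed)
        b1 = self.hyper_b1(state).view(b, 1, self.embed)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b, 1)  # joint Q-value
```

In an inverse soft-Q style objective, the joint Q produced by such a mixer would be fit directly to expert demonstrations rather than to a separately learned reward; the precise conditions under which the objective is convex in the Q functions are given in the paper itself.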
Related papers
- Variational Offline Multi-agent Skill Discovery [43.869625428099425]
We propose two novel auto-encoder schemes to simultaneously capture subgroup- and temporal-level abstractions and form multi-agent skills.
Our method can be applied to offline multi-task data, and the discovered subgroup skills can be transferred across relevant tasks without retraining.
arXiv Detail & Related papers (2024-05-26T00:24:46Z) - MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
Diffusion models (DMs) have recently achieved great success in various scenarios, including offline reinforcement learning.
We propose MADiff, a novel generative multi-agent learning framework to tackle this problem.
Our experiments show the superior performance of MADiff compared to baseline algorithms in a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z) - Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL) that combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
arXiv Detail & Related papers (2023-03-24T15:12:28Z) - Residual Q-Networks for Value Function Factorizing in Multi-Agent
Reinforcement Learning [0.0]
We propose a novel concept of Residual Q-Networks (RQNs) for Multi-Agent Reinforcement Learning (MARL).
The RQN learns to transform the individual Q-value trajectories in a way that preserves the Individual-Global-Max (IGM) criterion (illustrated in the sketch after this list).
The proposed method converges faster, with increased stability, and shows robust performance in a wider family of environments.
arXiv Detail & Related papers (2022-05-30T16:56:06Z) - Local Advantage Networks for Cooperative Multi-Agent Reinforcement
Learning [1.1879716317856945]
This paper presents a new type of reinforcement learning algorithm for cooperative partially observable environments.
We use a dueling architecture to learn, for each agent, a decentralized best-response policy via individual advantage functions.
Evaluation on the StarCraft II multi-agent challenge benchmark shows that LAN reaches state-of-the-art performance.
arXiv Detail & Related papers (2021-12-23T10:55:33Z) - Locality Matters: A Scalable Value Decomposition Approach for
Cooperative Multi-Agent Reinforcement Learning [52.7873574425376]
Cooperative multi-agent reinforcement learning (MARL) faces significant scalability issues due to state and action spaces that are exponentially large in the number of agents.
We propose a novel, value-based multi-agent algorithm called LOMAQ, which incorporates local rewards in the Centralized Training Decentralized Execution paradigm.
arXiv Detail & Related papers (2021-09-22T10:08:15Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Multi-Agent Determinantal Q-Learning [39.79718674655209]
We propose multi-agent determinantal Q-learning (Q-DPP), which promotes agents to acquire diverse behavioral models.
We demonstrate that Q-DPP generalizes major solutions including VDN, QMIX, and QTRAN on decentralizable cooperative tasks.
arXiv Detail & Related papers (2020-06-02T09:32:48Z) - Towards Understanding Cooperative Multi-Agent Q-Learning with Value
Factorization [28.89692989420673]
We formalize a multi-agent fitted Q-iteration framework for analyzing factorized multi-agent Q-learning.
Through further analysis, we find that on-policy training or richer joint value function classes can improve its local or global convergence properties.
arXiv Detail & Related papers (2020-05-31T19:14:03Z) - F2A2: Flexible Fully-decentralized Approximate Actor-critic for
Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework, which can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability for large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z) - Monotonic Value Function Factorisation for Deep Multi-Agent
Reinforcement Learning [55.20040781688844]
QMIX is a novel value-based method that can train decentralised policies in a centralised end-to-end fashion.
We propose the StarCraft Multi-Agent Challenge (SMAC) as a new benchmark for deep multi-agent reinforcement learning.
arXiv Detail & Related papers (2020-03-19T16:51:51Z)