Mixed Q-Functionals: Advancing Value-Based Methods in Cooperative MARL
with Continuous Action Domains
- URL: http://arxiv.org/abs/2402.07752v1
- Date: Mon, 12 Feb 2024 16:21:50 GMT
- Title: Mixed Q-Functionals: Advancing Value-Based Methods in Cooperative MARL
with Continuous Action Domains
- Authors: Yasin Findik and S. Reza Ahmadzadeh
- Abstract summary: We propose a novel multi-agent value-based algorithm, Mixed Q-Functionals (MQF), inspired by the idea of Q-Functionals.
Our algorithm fosters collaboration among agents by mixing their action-values.
Our empirical findings reveal that MQF outperforms four variants of Deep Deterministic Policy Gradient.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Tackling multi-agent learning problems efficiently is a challenging task in
continuous action domains. While value-based algorithms excel in sample
efficiency when applied to discrete action domains, they are usually
inefficient when dealing with continuous actions. Policy-based algorithms, on
the other hand, attempt to address this challenge by leveraging critic networks
for guiding the learning process and stabilizing the gradient estimation. The
limitations of these methods in estimating the true return, together with their tendency to fall into local optima, result in inefficient and often sub-optimal policies. In this
paper, we diverge from the trend of further enhancing critic networks, and
focus on improving the effectiveness of value-based methods in multi-agent
continuous domains by concurrently evaluating numerous actions. We propose a
novel multi-agent value-based algorithm, Mixed Q-Functionals (MQF), inspired by the idea of Q-Functionals, which enables agents to transform their states into basis functions. Our algorithm fosters collaboration among agents by
mixing their action-values. We evaluate the efficacy of our algorithm in six
cooperative multi-agent scenarios. Our empirical findings reveal that MQF
outperforms four variants of Deep Deterministic Policy Gradient through rapid
action evaluation and increased sample efficiency.
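The mechanism described above can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch sketch of the Q-Functionals idea that MQF builds on, plus the value-mixing step: the class names, the polynomial basis, the additive (VDN-style) mixer, and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the Q-Functionals idea behind MQF (not the authors' code).
# Each agent maps its state to coefficients of fixed basis functions over the
# action space, so many candidate actions can be scored from one state encoding.
# Agents' greedy action-values are then mixed (here: summed, VDN-style).
import torch
import torch.nn as nn


def polynomial_basis(actions: torch.Tensor, degree: int = 2) -> torch.Tensor:
    """Map actions of shape (B, A, act_dim) to polynomial features (B, A, F)."""
    feats = [torch.ones(*actions.shape[:-1], 1, device=actions.device)]
    for d in range(1, degree + 1):
        feats.append(actions ** d)
    return torch.cat(feats, dim=-1)


class QFunctional(nn.Module):
    """State -> basis coefficients; Q(s, a) = coefficients(s) . basis(a)."""

    def __init__(self, state_dim: int, act_dim: int, degree: int = 2):
        super().__init__()
        self.degree = degree
        n_features = 1 + degree * act_dim
        self.coef_net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, n_features)
        )

    def forward(self, state: torch.Tensor, candidate_actions: torch.Tensor) -> torch.Tensor:
        coef = self.coef_net(state)                              # (B, F)
        phi = polynomial_basis(candidate_actions, self.degree)   # (B, A, F)
        return torch.einsum("bf,baf->ba", coef, phi)             # (B, A) action-values


def mixed_greedy_value(agents, states, sampled_actions):
    """Sum each agent's greedy action-value into a joint value for the TD loss."""
    per_agent = [agent(s, a).max(dim=-1).values
                 for agent, s, a in zip(agents, states, sampled_actions)]
    return torch.stack(per_agent, dim=0).sum(dim=0)              # (B,)
```

Encoding the state once and scoring all sampled actions with a single tensor contraction is what enables the rapid action evaluation mentioned in the abstract; the additive mixer is only one possible choice, and a learned monotonic mixing network would fit the same scheme.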
Related papers
- Mimicking Better by Matching the Approximate Action Distribution [48.95048003354255]
We introduce MAAD, a novel, sample-efficient on-policy algorithm for Imitation Learning from Observations.
We show that it requires considerably fewer interactions to achieve expert performance, outperforming current state-of-the-art on-policy methods.
arXiv Detail & Related papers (2023-06-16T12:43:47Z)
- Context-Aware Bayesian Network Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning [7.784991832712813]
We introduce a Bayesian network to establish correlations between agents' action selections in their joint policy.
We develop practical algorithms to learn the context-aware Bayesian network policies.
Empirical results on a range of MARL benchmarks show the benefits of our approach.
arXiv Detail & Related papers (2023-06-02T21:22:27Z)
- Solving Continuous Control via Q-learning [54.05120662838286]
We show that a simple modification of deep Q-learning largely alleviates issues with actor-critic methods.
By combining bang-bang action discretization with value decomposition, which frames single-agent control as cooperative multi-agent reinforcement learning (MARL), this simple critic-only approach matches the performance of state-of-the-art continuous actor-critic methods.
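A hedged sketch of this idea follows, under the assumption of a fully decoupled per-dimension critic with two bins per dimension (bang-bang) whose maxima are summed; names and sizes are illustrative, not the paper's code.

```python
# Illustrative sketch: each continuous action dimension is restricted to its two
# extremes ("bang-bang") and treated as one "agent"; per-dimension Q-values are
# summed (value decomposition), so the greedy joint action is a per-dimension argmax.
import torch
import torch.nn as nn


class DecoupledBangBangQ(nn.Module):
    def __init__(self, state_dim: int, act_dim: int, low: float = -1.0, high: float = 1.0):
        super().__init__()
        self.act_dim = act_dim
        self.register_buffer("bins", torch.tensor([low, high]))
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(), nn.Linear(256, act_dim * 2)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).view(-1, self.act_dim, 2)          # (B, act_dim, 2)

    def greedy_action(self, state: torch.Tensor) -> torch.Tensor:
        idx = self.forward(state).argmax(dim=-1)                   # per-dimension argmax
        return self.bins[idx]                                      # (B, act_dim) in {low, high}

    def joint_value(self, state: torch.Tensor) -> torch.Tensor:
        # Value decomposition: sum of per-dimension maxima stands in for max_a Q(s, a).
        return self.forward(state).max(dim=-1).values.sum(dim=-1)  # (B,)
```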
arXiv Detail & Related papers (2022-10-22T22:55:50Z)
- Local Advantage Actor-Critic for Robust Multi-Agent Deep Reinforcement Learning [19.519440854957633]
We propose a new multi-agent policy gradient method called Robust Local Advantage (ROLA) Actor-Critic.
ROLA allows each agent to learn an individual action-value function as a local critic while ameliorating environment non-stationarity.
We show ROLA's robustness and effectiveness over a number of state-of-the-art multi-agent policy gradient algorithms.
arXiv Detail & Related papers (2021-10-16T19:03:34Z)
- Decomposed Soft Actor-Critic Method for Cooperative Multi-Agent Reinforcement Learning [10.64928897082273]
Experimental results demonstrate that mSAC significantly outperforms the policy-based approach COMA.
In addition, mSAC achieves strong results on large-action-space tasks such as 2c_vs_64zg and MMM2.
arXiv Detail & Related papers (2021-04-14T07:02:40Z)
- Softmax with Regularization: Better Value Estimation in Multi-Agent Reinforcement Learning [72.28520951105207]
Overestimation in $Q$-learning is an important problem that has been extensively studied in single-agent reinforcement learning.
We propose a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline.
We show that our method provides a consistent performance improvement on a set of challenging StarCraft II micromanagement tasks.
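A minimal sketch of what such a regularized update could look like, assuming a softmax-weighted baseline over sampled joint actions and a quadratic penalty; the coefficient, the baseline, and the exact penalty form are assumptions rather than the paper's definition.

```python
# Hedged sketch of a regularized multi-agent Q-learning loss in the spirit of the
# summary above: the usual TD error plus a penalty that discourages the joint
# action-value from drifting far above a baseline (here, the softmax-weighted
# value over sampled joint actions).
import torch


def regularized_td_loss(q_tot, q_tot_sampled, td_target, lam=0.1, temperature=1.0):
    """
    q_tot:         (B,)   joint action-value of the taken joint action
    q_tot_sampled: (B, K) joint action-values of K sampled joint actions
    td_target:     (B,)   bootstrapped target r + gamma * Q_tot'(s', a')
    """
    td_loss = (q_tot - td_target.detach()).pow(2).mean()
    # Softmax-weighted baseline over sampled joint actions (a soft estimate of the max).
    weights = torch.softmax(q_tot_sampled / temperature, dim=-1)
    baseline = (weights * q_tot_sampled).sum(dim=-1).detach()
    # Penalize joint action-values that exceed the baseline (overestimation pressure).
    penalty = (q_tot - baseline).clamp(min=0.0).pow(2).mean()
    return td_loss + lam * penalty
```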
arXiv Detail & Related papers (2021-03-22T14:18:39Z)
- Modeling the Interaction between Agents in Cooperative Multi-Agent Reinforcement Learning [2.9360071145551068]
We propose a novel cooperative MARL algorithm named Interactive Actor-Critic (IAC).
IAC models the interaction of agents from the perspectives of policy and value function.
We extend the value decomposition methods to continuous control tasks and evaluate IAC on benchmark tasks including classic control and multi-agent particle environments.
arXiv Detail & Related papers (2021-02-10T01:58:28Z)
- Zeroth-Order Supervised Policy Improvement [94.0748002906652]
Policy gradient (PG) algorithms have been widely used in reinforcement learning (RL).
We propose Zeroth-Order Supervised Policy Improvement (ZOSPI).
ZOSPI exploits the estimated value function $Q$ globally while preserving the local exploitation of the PG methods.
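A hypothetical sketch of the supervised policy-improvement step this suggests, assuming uniformly sampled candidate actions, a critic `q_net(state, action)`, and a deterministic `policy(state)`; these signatures and the regression loss are assumptions, not the paper's implementation.

```python
# Hedged sketch: sample candidate actions, let the learned Q pick the best one
# globally, then regress the policy toward that action (a supervised update rather
# than a gradient taken through Q).
import torch
import torch.nn.functional as F


def zospi_policy_loss(policy, q_net, states, act_dim, n_samples=50,
                      act_low=-1.0, act_high=1.0):
    B = states.shape[0]
    # Globally sampled candidate actions, scored by the critic.
    candidates = torch.rand(B, n_samples, act_dim) * (act_high - act_low) + act_low
    with torch.no_grad():
        s_rep = states.unsqueeze(1).expand(-1, n_samples, -1)
        q_vals = q_net(s_rep.reshape(B * n_samples, -1),
                       candidates.reshape(B * n_samples, -1)).view(B, n_samples)
        best = candidates[torch.arange(B), q_vals.argmax(dim=-1)]   # (B, act_dim)
    # Supervised regression of the policy toward the best sampled action.
    return F.mse_loss(policy(states), best)
```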
arXiv Detail & Related papers (2020-06-11T16:49:23Z)
- FACMAC: Factored Multi-Agent Centralised Policy Gradients [103.30380537282517]
We propose FACtored Multi-Agent Centralised policy gradients (FACMAC), a new method for cooperative multi-agent reinforcement learning in both discrete and continuous action spaces.
We evaluate FACMAC on variants of the multi-agent particle environments, a novel multi-agent MuJoCo benchmark, and a challenging set of StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2020-03-14T21:29:09Z)
- Discrete Action On-Policy Learning with Action-Value Critic [72.20609919995086]
Reinforcement learning (RL) in discrete action space is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension.
We construct a critic to estimate action-value functions, apply it to correlated actions, and combine these critic-estimated action values to control the variance of the gradient estimation.
These efforts result in a new discrete action on-policy RL algorithm that empirically outperforms related on-policy algorithms relying on variance control techniques.
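A minimal sketch of the general variance-reduction idea, assuming a critic that scores all discrete actions so that its policy-weighted mean can serve as a baseline; this is an illustrative control-variate construction, not the paper's exact estimator.

```python
# Hedged sketch: the critic scores every discrete action, and the policy-weighted
# mean of those scores is used as a baseline (control variate) to reduce the
# variance of the on-policy gradient estimate.
import torch


def variance_reduced_pg_loss(logits, q_values, actions):
    """
    logits:   (B, A) unnormalized policy logits
    q_values: (B, A) critic estimates Q(s, a) for every discrete action
    actions:  (B,)   actions actually taken on-policy (long tensor)
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    baseline = (probs * q_values).sum(dim=-1, keepdim=True).detach()   # V(s) estimate
    advantage = (q_values.detach() - baseline).gather(1, actions.unsqueeze(1)).squeeze(1)
    chosen_log_prob = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    return -(advantage * chosen_log_prob).mean()
```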
arXiv Detail & Related papers (2020-02-10T04:23:09Z)