Greedy based Value Representation for Optimal Coordination in
Multi-agent Reinforcement Learning
- URL: http://arxiv.org/abs/2211.12075v1
- Date: Tue, 22 Nov 2022 08:14:50 GMT
- Title: Greedy based Value Representation for Optimal Coordination in
Multi-agent Reinforcement Learning
- Authors: Lipeng Wan, Zeyang Liu, Xingyu Chen, Xuguang Lan, Nanning Zheng
- Abstract summary: We derive the expression of the joint Q value function of LVD and MVD.
To ensure optimal consistency, the optimal node is required to be the unique STN.
Our method outperforms state-of-the-art baselines in experiments on various benchmarks.
- Score: 64.05646120624287
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the representation limitation of the joint Q value function,
multi-agent reinforcement learning methods with linear value decomposition
(LVD) or monotonic value decomposition (MVD) suffer from relative
overgeneralization. As a result, they cannot ensure optimal consistency (i.e.,
the correspondence between individual greedy actions and the maximal true Q
value). In this paper, we derive the expression of the joint Q value function
of LVD and MVD. According to the expression, we draw a transition diagram,
where each self-transition node (STN) is a possible convergence point. To ensure
optimal consistency, the optimal node is required to be the unique STN.
Therefore, we propose the greedy-based value representation (GVR), which turns
the optimal node into an STN via inferior target shaping and further eliminates
the non-optimal STNs via superior experience replay. In addition, GVR achieves
an adaptive trade-off between optimality and stability. Our method outperforms
state-of-the-art baselines in experiments on various benchmarks. Theoretical
proofs and empirical results on matrix games demonstrate that GVR ensures
optimal consistency under sufficient exploration.
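To make the failure mode concrete, here is a minimal numerical sketch (the payoff matrix, variable names, and fitting procedure are illustrative assumptions, not the paper's experiments): fitting a linear value decomposition Q_tot(a, b) = q1(a) + q2(b) to a cooperative matrix game by least squares produces individual greedy actions that miss the maximal true Q value.

```python
import numpy as np

# Hypothetical 3x3 cooperative matrix game (illustrative only): the optimal joint
# action (0, 0) pays 8, but miscoordinating with it is heavily punished -- the
# classic relative-overgeneralization setup.
payoff = np.array([
    [  8.0, -12.0, -12.0],
    [-12.0,   0.0,   0.0],
    [-12.0,   0.0,   0.0],
])
n, m = payoff.shape

# Linear value decomposition (LVD): Q_tot(a, b) = q1(a) + q2(b).
# Fit q1 and q2 by least squares against the true payoffs, i.e. the fixed point
# reached under uniform exploration over all joint actions.
A = np.zeros((n * m, n + m))
y = payoff.flatten()
for a in range(n):
    for b in range(m):
        A[a * m + b, a] = 1.0        # one-hot indicator of agent 1's action
        A[a * m + b, n + b] = 1.0    # one-hot indicator of agent 2's action
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
q1, q2 = theta[:n], theta[n:]

greedy = (int(np.argmax(q1)), int(np.argmax(q2)))
optimal = np.unravel_index(int(payoff.argmax()), payoff.shape)
print("greedy joint action under LVD:", greedy)    # lands in the 0-payoff block
print("optimal joint action:         ", optimal)   # (0, 0) with payoff 8
```

In this toy game the LVD greedy pair lands on a zero-payoff joint action rather than the optimum, which is exactly the loss of optimal consistency that the paper's inferior target shaping and superior experience replay are designed to remove; the sketch only reproduces the failure mode they address.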
Related papers
- Matrix Completion with Graph Information: A Provable Nonconvex Optimization Approach [5.235925587710112]
We consider the problem of matrix completion with graphs as side information depicting the interrelations between variables.
We propose in this paper a graph regularized matrix completion algorithm called GSGD, based on a preconditioned projected gradient descent approach.
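As a rough, hedged illustration of graph-regularized completion (the factorized objective, Laplacian penalty, and step size below are assumptions for the sketch, not the paper's GSGD):

```python
import numpy as np

def graph_regularized_mc(M_obs, mask, L_rows, rank=5, lam=0.1, lr=0.01, iters=500, seed=0):
    """Illustrative graph-regularized matrix completion: factorize M ~ U @ V.T on the
    observed entries, with a graph-Laplacian penalty tr(U.T @ L_rows @ U) encouraging
    rows connected in the side-information graph to share similar factors."""
    rng = np.random.default_rng(seed)
    n, m = M_obs.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(iters):
        R = mask * (U @ V.T - M_obs)           # residual on observed entries only
        gU = R @ V + lam * (L_rows @ U)        # data term + graph regularizer gradient
        gV = R.T @ U
        U -= lr * gU
        V -= lr * gV
    return U @ V.T
```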
arXiv Detail & Related papers (2025-02-12T16:21:01Z)
- On the Convergence of DP-SGD with Adaptive Clipping [56.24689348875711]
Stochastic gradient descent (SGD) with gradient clipping is a powerful technique for enabling differentially private optimization.
This paper provides the first comprehensive convergence analysis of SGD with quantile clipping (QC-SGD).
We show that QC-SGD suffers from a bias problem similar to constant-threshold clipped SGD, which can be mitigated through a carefully designed quantile and step size schedule.
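A minimal sketch of quantile-based clipping in a DP-SGD-style step (the function name, noise scaling, and defaults are illustrative assumptions, not the paper's QC-SGD or its privacy accounting):

```python
import numpy as np

def qc_dp_sgd_step(params, per_sample_grads, lr=0.1, q=0.5, noise_mult=1.0, rng=None):
    """One illustrative DP-SGD step with quantile clipping: clip each per-sample
    gradient to the q-th quantile of the gradient norms, then add Gaussian noise
    scaled to that adaptive threshold."""
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(per_sample_grads, axis=1)
    C = np.quantile(norms, q)                            # clip threshold = q-th quantile
    scale = np.minimum(1.0, C / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale[:, None]          # per-sample clipping
    noise = rng.normal(0.0, noise_mult * C / len(norms), size=params.shape)
    return params - lr * (clipped.mean(axis=0) + noise)
```

The bias mentioned above comes from the clipping itself; a schedule for q and the step size (per the paper) is what mitigates it.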
arXiv Detail & Related papers (2024-12-27T20:29:47Z)
- Taming Nonconvex Stochastic Mirror Descent with General Bregman Divergence [25.717501580080846]
This paper revisits the convergence of stochastic mirror descent (SMD) in the contemporary nonconvex optimization setting.
As an application, we develop provably convergent algorithms for the problem of training linear networks.
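For intuition, a toy instance of the SMD template with the negative-entropy Bregman divergence on the simplex (the problem instance, step size, and names are assumptions, not the paper's algorithms):

```python
import numpy as np

def smd_negative_entropy(stoch_grad, x0, steps=200, eta=0.1, seed=0):
    """Illustrative stochastic mirror descent on the probability simplex using the
    Bregman divergence induced by negative entropy (exponentiated-gradient updates)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = stoch_grad(x, rng)           # unbiased stochastic gradient oracle
        x = x * np.exp(-eta * g)         # mirror step for the entropy mirror map
        x = x / x.sum()                  # normalization = Bregman projection onto simplex
    return x

# Example: minimize E[0.5 * ||x - c + noise||^2] over the simplex (illustrative).
c = np.array([0.7, 0.2, 0.1])
x_star = smd_negative_entropy(lambda x, rng: (x - c) + 0.01 * rng.standard_normal(3),
                              x0=np.ones(3) / 3)
```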
arXiv Detail & Related papers (2024-02-27T17:56:49Z)
- Adaptive, Doubly Optimal No-Regret Learning in Strongly Monotone and Exp-Concave Games with Gradient Feedback [75.29048190099523]
Online gradient descent (OGD) is well known to be doubly optimal under strong convexity or monotonicity assumptions.
In this paper, we design a fully adaptive OGD algorithm, AdaOGD, that does not require a priori knowledge of these parameters.
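For reference, a sketch of the classical (non-adaptive) OGD update that such adaptive methods improve on; the step-size choices are illustrative assumptions, and the sketch does not reproduce AdaOGD's adaptivity:

```python
import numpy as np

def ogd(grad, x0, T, mu=None):
    """Illustrative online gradient descent. With a known strong-convexity modulus mu,
    the classical step size 1/(mu*t) yields logarithmic regret; without it, 1/sqrt(t)
    is used. AdaOGD (per the paper) adapts without knowing mu; this toy does not."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, T + 1):
        eta = 1.0 / (mu * t) if mu is not None else 1.0 / np.sqrt(t)
        x = x - eta * grad(x, t)          # gradient feedback received at round t
    return x
```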
arXiv Detail & Related papers (2023-10-21T18:38:13Z)
- Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice [79.48432795639403]
Mirror descent value iteration (MDVI) is an abstraction of Kullback-Leibler (KL) divergence and entropy-regularized reinforcement learning (RL).
We study MDVI with linear function approximation through the sample complexity required to identify an $\varepsilon$-optimal policy.
We present Variance-Weighted Least-Squares MDVI, the first theoretical algorithm that achieves nearly minimax optimal sample complexity for infinite-horizon linear MDPs.
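A minimal sketch of the variance-weighted regression ingredient (the function name and ridge term are assumptions; this is not the full MDVI procedure):

```python
import numpy as np

def variance_weighted_lsq(Phi, targets, variances, reg=1e-3):
    """Illustrative variance-weighted ridge regression: each regression target is
    down-weighted by its estimated variance, the key ingredient of variance-weighted
    least-squares value iteration."""
    w = 1.0 / np.maximum(variances, 1e-6)                          # precision weights
    A = Phi.T @ (w[:, None] * Phi) + reg * np.eye(Phi.shape[1])
    b = Phi.T @ (w * targets)
    return np.linalg.solve(A, b)                                   # weighted LS solution
```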
arXiv Detail & Related papers (2023-05-22T16:13:05Z)
- Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets [101.5329678997916]
We study episodic two-player zero-sum Markov games (MGs) in the offline setting.
The goal is to find an approximate Nash equilibrium (NE) policy pair based on a dataset collected a priori.
arXiv Detail & Related papers (2022-02-15T15:39:30Z)
- A Unified Off-Policy Evaluation Approach for General Value Function [131.45028999325797]
General Value Function (GVF) is a powerful tool to represent both predictive and retrospective knowledge in reinforcement learning (RL).
In this paper, we propose a new algorithm called GenTD for off-policy GVFs evaluation.
We show that GenTD learns multiple interrelated multi-dimensional GVFs as efficiently as a single canonical scalar value function.
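For intuition, a sketch of off-policy TD(0) evaluation of a single GVF defined by a cumulant, a continuation function, and importance ratios (all function names are illustrative assumptions; GenTD itself handles multiple interrelated, multi-dimensional GVFs):

```python
import numpy as np

def offpolicy_gvf_td(transitions, phi, rho, cumulant, cont, d, alpha=0.05):
    """Illustrative off-policy TD(0) for one General Value Function: the expected
    discounted sum of a cumulant under a target policy, with per-step continuation
    function `cont` and importance ratios `rho`."""
    w = np.zeros(d)
    for (s, a, s_next) in transitions:
        delta = cumulant(s, a, s_next) + cont(s_next) * (phi(s_next) @ w) - phi(s) @ w
        w += alpha * rho(s, a) * delta * phi(s)        # importance-weighted TD update
    return w
```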
arXiv Detail & Related papers (2021-07-06T16:20:34Z)
- An improved convergence analysis for decentralized online stochastic non-convex optimization [17.386715847732468]
In this paper, we show that a gradient-tracking (GT) based method, when the global objective satisfies the Polyak-Łojasiewicz (PL) condition, achieves improved convergence rates.
The results are not only immediately applicable but also match the currently known best convergence rates.
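A deterministic toy sketch of the gradient-tracking template underlying such analyses (the mixing matrix, step size, and names are assumptions, not the paper's stochastic algorithm):

```python
import numpy as np

def gradient_tracking(local_grads, W, x0, lr=0.05, iters=200):
    """Illustrative decentralized gradient tracking: each node mixes iterates with its
    neighbors through a doubly stochastic matrix W and maintains a tracker y of the
    network-average gradient."""
    x = np.array(x0, dtype=float)                          # shape: (num_nodes, dim)
    g = np.stack([f(x[i]) for i, f in enumerate(local_grads)])
    y = g.copy()                                           # trackers start at local grads
    for _ in range(iters):
        x = W @ x - lr * y                                 # consensus step + descent
        g_new = np.stack([f(x[i]) for i, f in enumerate(local_grads)])
        y = W @ y + g_new - g                              # gradient-tracking recursion
        g = g_new
    return x.mean(axis=0)
```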
arXiv Detail & Related papers (2020-08-10T15:29:13Z)
- The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization [0.0]
We prove that Nesterov's extrapolation has the strength to make the individual convergence of gradient descent methods optimal for nonsmooth problems.
We give an extension of the derived algorithms to solve regularized learning tasks with nonsmooth losses in stochastic settings.
Our method is applicable as an efficient tool for solving large-scale $\ell_1$-regularized hinge-loss learning problems.
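A hedged sketch of Nesterov-style extrapolation applied to an $\ell_1$-regularized hinge-loss problem (the extrapolation weight and 1/sqrt(t) step size are illustrative choices, not the paper's derived schedule):

```python
import numpy as np

def hinge_l1_nesterov(X, y, lam=0.01, T=1000):
    """Illustrative subgradient method with Nesterov-style extrapolation for the
    l1-regularized hinge loss: (1/n) * sum(max(0, 1 - y_i * x_i.w)) + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    w_prev = np.zeros(d)
    for t in range(1, T + 1):
        beta = (t - 1) / (t + 2)
        z = w + beta * (w - w_prev)                              # Nesterov extrapolation
        margins = y * (X @ z)
        sub = -(X * y[:, None])[margins < 1].sum(axis=0) / n     # hinge subgradient
        sub = sub + lam * np.sign(z)                             # l1 subgradient
        w_prev, w = w, z - (1.0 / np.sqrt(t)) * sub
    return w
```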
arXiv Detail & Related papers (2020-06-08T03:35:41Z)