Policy Optimization for Markov Games: Unified Framework and Faster
Convergence
- URL: http://arxiv.org/abs/2206.02640v1
- Date: Mon, 6 Jun 2022 14:23:13 GMT
- Title: Policy Optimization for Markov Games: Unified Framework and Faster
Convergence
- Authors: Runyu Zhang, Qinghua Liu, Huan Wang, Caiming Xiong, Na Li, Yu Bai
- Abstract summary: We show that the state-wise average policy of this algorithm converges to an approximate Nash equilibrium (NE) of the game.
We extend this algorithm to multi-player general-sum Markov Games and show an $\mathcal{\widetilde{O}}(T^{-3/4})$ convergence rate to Coarse Correlated Equilibria (CCE).
- Score: 81.3266426402464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies policy optimization algorithms for multi-agent
reinforcement learning. We begin by proposing an algorithm framework for
two-player zero-sum Markov Games in the full-information setting, where each
iteration consists of a policy update step at each state using a certain matrix
game algorithm, and a value update step with a certain learning rate. This
framework unifies many existing and new policy optimization algorithms. We show
that the state-wise average policy of this algorithm converges to an
approximate Nash equilibrium (NE) of the game, as long as the matrix game
algorithms achieve low weighted regret at each state, with respect to weights
determined by the speed of the value updates. Next, we show that this framework
instantiated with the Optimistic Follow-The-Regularized-Leader (OFTRL)
algorithm at each state (and smooth value updates) can find an
$\mathcal{\widetilde{O}}(T^{-5/6})$ approximate NE in $T$ iterations, which
improves over the current best $\mathcal{\widetilde{O}}(T^{-1/2})$ rate of
symmetric policy optimization type algorithms. We also extend this algorithm to
multi-player general-sum Markov Games and show an
$\mathcal{\widetilde{O}}(T^{-3/4})$ convergence rate to Coarse Correlated
Equilibria (CCE). Finally, we provide a numerical example to verify our theory
and investigate the importance of smooth value updates, and find that using
"eager" value updates instead (equivalent to the independent natural policy
gradient algorithm) may significantly slow down the convergence, even on a
simple game with $H=2$ layers.
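
The abstract describes each iteration as a per-state policy update via a matrix game algorithm followed by a value update with a learning rate. The sketch below is a minimal, illustrative reading of that framework (not the authors' code): it instantiates the per-state matrix-game algorithm with entropy-regularized OFTRL and uses a smooth value update. The toy game, the step size `eta`, and the schedule `alpha_t = 2/(t+1)` are assumptions made for the example.

```python
import numpy as np

# Illustrative problem sizes and hyperparameters (assumptions, not from the paper).
n_states, n_a, n_b, gamma, T = 2, 2, 2, 0.9, 500
eta = 0.1                                    # OFTRL step size (assumed)
rng = np.random.default_rng(0)

R = rng.uniform(-1, 1, size=(n_states, n_a, n_b))                 # reward to the max-player
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_a, n_b))   # transition kernel P(s' | s, a, b)

V = np.zeros(n_states)                       # value estimates
x = np.full((n_states, n_a), 1.0 / n_a)      # max-player per-state policies
y = np.full((n_states, n_b), 1.0 / n_b)      # min-player per-state policies
gsum_x = np.zeros((n_states, n_a))           # cumulative payoff vectors (OFTRL state)
gsum_y = np.zeros((n_states, n_b))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

for t in range(1, T + 1):
    # Per-state matrix game induced by the current value estimate:
    # Q(s, a, b) = R(s, a, b) + gamma * E_{s'}[V(s')].
    Q = R + gamma * np.einsum("sabk,k->sab", P, V)

    # Policy update step: entropy-regularized OFTRL at every state, with the
    # "optimistic" extra copy of the most recent payoff vector.
    g_x = np.einsum("sab,sb->sa", Q, y)      # payoff of each max-player action
    g_y = -np.einsum("sab,sa->sb", Q, x)     # payoff of each min-player action
    gsum_x += g_x
    gsum_y += g_y
    x = softmax(eta * (gsum_x + g_x))
    y = softmax(eta * (gsum_y + g_y))

    # Value update step: smooth update toward the value of the current policy
    # pair; an "eager" update would use alpha_t = 1 instead.
    alpha_t = 2.0 / (t + 1)                  # assumed schedule for illustration
    V_now = np.einsum("sa,sab,sb->s", x, Q, y)
    V = (1 - alpha_t) * V + alpha_t * V_now
```

Switching to the "eager" variant (alpha_t = 1, i.e. overwriting V with the current policies' value at every iteration) corresponds to the independent natural policy gradient behavior that, per the abstract, may converge significantly more slowly even on a simple game with $H=2$ layers.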
Related papers
- Can We Find Nash Equilibria at a Linear Rate in Markov Games? [95.10091348976779]
We study decentralized learning in two-player zero-sum discounted Markov games.
The goal is to design a policy optimization algorithm for either agent satisfying two properties.
arXiv Detail & Related papers (2023-03-03T02:40:26Z)
- Provably Efficient Fictitious Play Policy Optimization for Zero-Sum Markov Games with Structured Transitions [145.54544979467872]
We propose and analyze new fictitious play policy optimization algorithms for zero-sum Markov games with structured but unknown transitions.
We prove tight $\widetilde{\mathcal{O}}(\sqrt{K})$ regret bounds after $K$ episodes in a two-agent competitive game scenario.
Our algorithms feature a combination of Upper Confidence Bound (UCB)-type optimism and fictitious play under the scope of simultaneous policy optimization.
arXiv Detail & Related papers (2022-07-25T18:29:16Z)
- Towards General Function Approximation in Zero-Sum Markov Games [126.58493169301012]
This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves.
Provably efficient algorithms for both decoupled and coordinated settings are developed.
arXiv Detail & Related papers (2021-07-30T15:25:13Z)
- Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles) and to find an appropriate stationary point in a polynomial number of iterations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z)
- Last-iterate Convergence of Decentralized Optimistic Gradient Descent/Ascent in Infinite-horizon Competitive Markov Games [37.70703888365849]
We study infinite-horizon discounted two-player zero-sum Markov games.
We develop a decentralized algorithm that converges to the set of Nash equilibria under self-play.
arXiv Detail & Related papers (2021-02-08T21:45:56Z)
- Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium [116.56359444619441]
We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games.
In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.
In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret.
arXiv Detail & Related papers (2020-02-17T17:04:16Z)