Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games
with Bandit Feedback
- URL: http://arxiv.org/abs/2303.02738v2
- Date: Wed, 8 Nov 2023 19:47:17 GMT
- Title: Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games
with Bandit Feedback
- Authors: Yang Cai, Haipeng Luo, Chen-Yu Wei, Weiqiang Zheng
- Abstract summary: We focus on developing an algorithm that is uncoupled, convergent, and rational, with non-asymptotic convergence rates.
Our algorithm is related to [Chen et al., 2021, Cen et al., 2021] and also builds on the entropy regularization technique.
- Score: 49.1061436241109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We revisit the problem of learning in two-player zero-sum Markov games,
focusing on developing an algorithm that is uncoupled, convergent, and
rational, with non-asymptotic convergence rates. We start from the case of
stateless matrix games with bandit feedback as a warm-up, showing an
$O(t^{-\frac{1}{8}})$ last-iterate convergence rate. To the best of our
knowledge, this is the first result that obtains a finite last-iterate
convergence rate given access only to bandit feedback. We extend our result to
the case of irreducible Markov games, providing a last-iterate convergence rate
of $O(t^{-\frac{1}{9+\varepsilon}})$ for any $\varepsilon>0$. Finally, we study
Markov games without any assumptions on the dynamics, and show a path
convergence rate, a new notion of convergence that we define, of
$O(t^{-\frac{1}{10}})$. Our algorithm removes the coordination and prior
knowledge requirements of [Wei et al., 2021], which pursued the same goals as ours
for irreducible Markov games. Our algorithm is related to [Chen et al., 2021,
Cen et al., 2021] and also builds on the entropy regularization technique.
However, we remove their requirement of communicating entropy values, making
our algorithm entirely uncoupled.
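To make the setting concrete, the following is a minimal, illustrative sketch (not the paper's algorithm) of uncoupled learning with bandit feedback in a stateless zero-sum matrix game: each player runs an entropy-regularized exponential-weights update driven only by an importance-weighted estimate of its own realized payoff. The payoff matrix, step size, and temperature below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy zero-sum matrix game (rock-paper-scissors payoffs): the row player
# receives A[i, j], the column player receives -A[i, j].
A = np.array([[0.0, 1.0, -1.0],
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, 0.0]])

T = 50_000
tau = 0.05   # entropy-regularization temperature (illustrative value)
eta = 0.5    # base step size (illustrative value)

x = np.ones(3) / 3  # row player's mixed strategy
y = np.ones(3) / 3  # column player's mixed strategy

def bandit_update(p, action, loss, eta_t, tau):
    """Entropy-regularized exponential-weights step from one bandit observation.

    The player uses only its own sampled action and realized loss, turned into
    an unbiased importance-weighted loss estimate; no opponent information is used.
    """
    loss_hat = np.zeros_like(p)
    loss_hat[action] = loss / p[action]  # importance-weighted estimate
    # Mirror-descent step with an extra entropy term (gradient log p + 1),
    # which pulls the iterate toward uniform and stabilizes the update.
    logits = np.log(p) - eta_t * (loss_hat + tau * (np.log(p) + 1.0))
    w = np.exp(logits - logits.max())
    return w / w.sum()

for t in range(1, T + 1):
    i = rng.choice(3, p=x)
    j = rng.choice(3, p=y)
    r = A[i, j]                    # row player's realized payoff (bandit feedback)
    eta_t = eta / np.sqrt(t)       # decaying step size
    x = bandit_update(x, i, -r, eta_t, tau)  # row player minimizes -payoff
    y = bandit_update(y, j, r, eta_t, tau)   # column player minimizes payoff

print("final strategies:", np.round(x, 3), np.round(y, 3))
```

Because each player's update depends only on its own sampled action and realized payoff, the two update rules never exchange information, which is the sense in which the procedure is uncoupled.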
Related papers
- Fast Last-Iterate Convergence of Learning in Games Requires Forgetful Algorithms [71.73971094342349]
Self-play via online learning is one of the premier ways to solve large-scale two-player zero-sum games.
We show that OMWU offers several advantages including logarithmic dependence on the size of the payoff matrix.
We prove that a broad class of algorithms that do not forget the past quickly all suffer from the same slow last-iterate convergence.
arXiv Detail & Related papers (2024-06-15T13:26:17Z) - $\widetilde{O}(T^{-1})$ Convergence to (Coarse) Correlated Equilibria in Full-Information General-Sum Markov Games [8.215874655947335]
We show that an optimistic follow-the-regularized-leader algorithm can find $\widetilde{O}(T^{-1})$-approximate (coarse) correlated equilibria in full-information general-sum Markov games within $T$ iterations.
arXiv Detail & Related papers (2024-02-02T20:40:27Z) - Last-Iterate Convergence Properties of Regret-Matching Algorithms in
Games [72.43065138581589]
We study the last-iterate convergence properties of various popular variants of RM$+$.
We show numerically that several practical variants such as simultaneous RM$+$, alternating RM$+$, and predictive RM$+$, all lack last-iterate convergence guarantees.
We then prove that recent variants of these algorithms based on a smoothing technique do enjoy last-iterate convergence.
arXiv Detail & Related papers (2023-11-01T17:34:58Z) - $O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in
Two-Player Zero-Sum Markov Games [10.915751393257011]
We find an $O(T^{-1})$-approximate Nash equilibrium within $T$ iterations for two-player zero-sum Markov games with full information.
This improves upon the $\widetilde{O}(T^{-5/6})$ convergence rate recently shown by Zhang et al. (2022).
arXiv Detail & Related papers (2022-09-26T05:35:44Z) - Policy Optimization for Markov Games: Unified Framework and Faster
Convergence [81.3266426402464]
We show that the state-wise average policy of this algorithm converges to an approximate Nash equilibrium (NE) of the game.
We extend this algorithm to multi-player general-sum Markov games and show a $\widetilde{\mathcal{O}}(T^{-1/2})$ convergence rate to Coarse Correlated Equilibria (CCE).
arXiv Detail & Related papers (2022-06-06T14:23:13Z) - Towards General Function Approximation in Zero-Sum Markov Games [126.58493169301012]
This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves.
Provably efficient algorithms for both decoupled and coordinated settings are developed.
arXiv Detail & Related papers (2021-07-30T15:25:13Z) - Learning Zero-Sum Simultaneous-Move Markov Games Using Function
Approximation and Correlated Equilibrium [116.56359444619441]
We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games.
In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.
In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret.
arXiv Detail & Related papers (2020-02-17T17:04:16Z)
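Several of the papers listed above analyze optimistic methods (OMWU, Optimistic FTRL) under full information rather than bandit feedback. For contrast with the sketch above, here is a minimal optimistic multiplicative-weights (OMWU) sketch for a full-information zero-sum matrix game; the payoff matrix, step size, and horizon are arbitrary illustrative choices, and the code does not reproduce any of the listed papers' algorithms.

```python
import numpy as np

# Illustrative full-information zero-sum matrix game (rock-paper-scissors payoffs):
# the row player receives A[i, j]; the column player receives -A[i, j].
A = np.array([[0.0, 1.0, -1.0],
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, 0.0]])

eta = 0.1   # illustrative step size
T = 5000    # illustrative horizon

x = np.ones(3) / 3          # row player's strategy
y = np.ones(3) / 3          # column player's strategy
prev_loss_x = np.zeros(3)   # previous loss vectors, reused as optimistic predictions
prev_loss_y = np.zeros(3)

def omwu_step(p, loss, prev_loss, eta):
    """Optimistic multiplicative-weights update: the previous loss predicts the next one."""
    logits = np.log(p) - eta * (2.0 * loss - prev_loss)
    w = np.exp(logits - logits.max())
    return w / w.sum()

for _ in range(T):
    loss_x = -(A @ y)   # row player maximizes payoff, i.e. minimizes -A y
    loss_y = A.T @ x    # column player minimizes A^T x
    x, y = omwu_step(x, loss_x, prev_loss_x, eta), omwu_step(y, loss_y, prev_loss_y, eta)
    prev_loss_x, prev_loss_y = loss_x, loss_y

# Duality gap of the last iterate (zero exactly at a Nash equilibrium).
gap = (A @ y).max() - (A.T @ x).min()
print("last-iterate duality gap:", gap)
```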