$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in
Two-Player Zero-Sum Markov Games
- URL: http://arxiv.org/abs/2209.12430v1
- Date: Mon, 26 Sep 2022 05:35:44 GMT
- Title: $O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in
Two-Player Zero-Sum Markov Games
- Authors: Yuepeng Yang, Cong Ma
- Abstract summary: We find an $O(T^{-1})$-approximate Nash equilibrium in $T$ iterations for two-player zero-sum Markov games with full information.
This improves the $\tilde{O}(T^{-5/6})$ convergence rate recently shown by Zhang et al. (2022).
- Score: 10.915751393257011
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We prove that optimistic-follow-the-regularized-leader (OFTRL), together with
smooth value updates, finds an $O(T^{-1})$-approximate Nash equilibrium in $T$
iterations for two-player zero-sum Markov games with full information. This
improves the $\tilde{O}(T^{-5/6})$ convergence rate recently shown by Zhang et
al. (2022). The refined analysis hinges on two essential ingredients.
First, the sum of the regrets of the two players, though not necessarily
non-negative as in normal-form games, is approximately non-negative in Markov
games. This property allows us to bound the second-order path lengths of the
learning dynamics. Second, we prove a tighter algebraic inequality regarding
the weights deployed by OFTRL that shaves an extra $\log T$ factor. This
crucial improvement enables the inductive analysis that leads to the final
$O(T^{-1})$ rate.
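To make the dynamics concrete, here is a minimal sketch of OFTRL with an entropy regularizer on a two-player zero-sum matrix game, the normal-form special case of the setting above. It is an illustration under stated assumptions, not the paper's exact method: the function name `oftrl_matrix_game`, the constant step size `eta`, and the horizon `T` are hypothetical choices, and the paper's algorithm additionally runs one such learner per state of the Markov game, coupled through smooth value updates.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def oftrl_matrix_game(A, T=5000, eta=0.1):
    """OFTRL with entropy regularization on the matrix game min_x max_y x^T A y.

    Returns the duality gap of the time-averaged strategies (zero at a Nash
    equilibrium). A hypothetical sketch; eta and T are illustrative.
    """
    m, n = A.shape
    Sx, Sy = np.zeros(m), np.zeros(n)        # cumulative gradients
    gx, gy = np.zeros(m), np.zeros(n)        # last gradients = optimistic guesses
    avg_x, avg_y = np.zeros(m), np.zeros(n)
    for _ in range(T):
        # Optimistic step: the most recent gradient is counted a second time.
        x = softmax(-eta * (Sx + gx))        # min player: descend on losses A y
        y = softmax(eta * (Sy + gy))         # max player: ascend on gains A^T x
        gx, gy = A @ y, A.T @ x              # realized gradients at the new play
        Sx += gx
        Sy += gy
        avg_x += x
        avg_y += y
    avg_x, avg_y = avg_x / T, avg_y / T
    # Duality gap: value of the best responses against the averaged strategies.
    return (A.T @ avg_x).max() - (A @ avg_y).min()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    for T in (100, 1000, 10000):
        print(T, oftrl_matrix_game(A, T=T))  # gap shrinks as T grows
```

On a random payoff matrix, the duality gap of the averaged strategies should shrink steadily as $T$ grows; establishing the analogous $O(T^{-1})$ decay for Markov games is the content of the paper.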
Related papers
- Games played by Exponential Weights Algorithms [0.0]
We consider a repeated interaction in discrete time, where each player uses an exponential weights algorithm characterized by an initial mixed action and a fixed learning rate.
We show that whenever a strict Nash equilibrium exists, the probability of playing a strict Nash equilibrium at the next stage converges almost surely to 0 or 1.
arXiv Detail & Related papers (2024-07-09T08:49:51Z)
- Fast Last-Iterate Convergence of Learning in Games Requires Forgetful Algorithms [71.73971094342349]
Self-play via online learning is one of the premier ways to solve large-scale two-player zero-sum games.
We show that OMWU offers several advantages including logarithmic dependence on the size of the payoff matrix.
We prove that a broad class of algorithms that do not forget the past quickly all suffer from the same slow last-iterate convergence.
arXiv Detail & Related papers (2024-06-15T13:26:17Z)
- $\widetilde{O}(T^{-1})$ Convergence to (Coarse) Correlated Equilibria in Full-Information General-Sum Markov Games [8.215874655947335]
We show that an optimistic-follow-the-regularized-leader algorithm finds $\widetilde{O}(T^{-1})$-approximate (coarse) correlated equilibria in full-information general-sum Markov games within $T$ iterations.
arXiv Detail & Related papers (2024-02-02T20:40:27Z)
- Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games with Bandit Feedback [49.1061436241109]
We focus on developing an algorithm that is uncoupled, convergent, and rational, with non-asymptotic convergence rates.
Our algorithm is related to [Chen et al., 2021, Cen et al., 2021] and also builds on the entropy regularization technique.
arXiv Detail & Related papers (2023-03-05T18:08:54Z)
- Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a Generative Model [50.38446482252857]
Two-player zero-sum Markov games are arguably the most basic setting in multi-agent reinforcement learning.
We develop a learning algorithm that learns an $\varepsilon$-approximate Markov NE policy using $\widetilde{O}(\ldots)$ samples.
We derive a refined regret bound for FTRL that makes explicit the role of variance-type quantities.
arXiv Detail & Related papers (2022-08-22T17:24:55Z)
- Model-Based Reinforcement Learning for Offline Zero-Sum Markov Games [18.832436856339587]
This paper makes progress towards learning Nash equilibria in two-player zero-sum Markov games from offline data.
We propose a pessimistic model-based algorithm with Bernstein-style lower confidence bounds -- called VI-LCB-Game.
arXiv Detail & Related papers (2022-06-08T17:58:06Z)
- Near-Optimal Learning of Extensive-Form Games with Imperfect Information [54.55092907312749]
We present the first line of algorithms that require only $\widetilde{\mathcal{O}}((XA+YB)/\varepsilon^2)$ episodes of play to find an $\varepsilon$-approximate Nash equilibrium in two-player zero-sum games.
This improves upon the best known sample complexity of $\widetilde{\mathcal{O}}((X^2A+Y^2B)/\varepsilon^2)$ by a factor of $\widetilde{\mathcal{O}}(\max\{X, Y\})$.
arXiv Detail & Related papers (2022-02-03T18:18:28Z)
- Almost Optimal Algorithms for Two-player Markov Games with Linear Function Approximation [92.99933928528797]
We study reinforcement learning for two-player zero-sum Markov games with simultaneous moves.
We propose an algorithm, Nash-UCRL-VTR, based on the principle of "Optimism-in-Face-of-Uncertainty".
We show that Nash-UCRL-VTR can provably achieve an $\tilde{O}(dH\sqrt{T})$ regret, where $d$ is the linear function dimension.
arXiv Detail & Related papers (2021-02-15T09:09:16Z)
- Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium [116.56359444619441]
We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games.
In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.
In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret.
arXiv Detail & Related papers (2020-02-17T17:04:16Z)