Doubly Optimal No-Regret Learning in Monotone Games
- URL: http://arxiv.org/abs/2301.13120v2
- Date: Mon, 4 Sep 2023 18:07:17 GMT
- Title: Doubly Optimal No-Regret Learning in Monotone Games
- Authors: Yang Cai, Weiqiang Zheng
- Abstract summary: We propose the first doubly optimal no-regret learning algorithm for smooth monotone games.
Our algorithm achieves both (i) the optimal $O(\sqrt{T})$ regret in the adversarial setting under smooth and convex loss functions and (ii) the optimal $O(\frac{1}{T})$ last-iterate convergence rate to a Nash equilibrium.
- Score: 10.760195409078591
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider online learning in multi-player smooth monotone games. Existing
algorithms have limitations such as (1) being only applicable to strongly
monotone games; (2) lacking the no-regret guarantee; (3) having only asymptotic
or slow $O(\frac{1}{\sqrt{T}})$ last-iterate convergence rate to a Nash
equilibrium. While the $O(\frac{1}{\sqrt{T}})$ rate is tight for a large class
of algorithms including the well-studied extragradient algorithm and optimistic
gradient algorithm, it is not optimal for all gradient-based algorithms.
We propose the accelerated optimistic gradient (AOG) algorithm, the first
doubly optimal no-regret learning algorithm for smooth monotone games. Namely,
our algorithm achieves both (i) the optimal $O(\sqrt{T})$ regret in the
adversarial setting under smooth and convex loss functions and (ii) the optimal
$O(\frac{1}{T})$ last-iterate convergence rate to a Nash equilibrium in
multi-player smooth monotone games. As a byproduct of the accelerated
last-iterate convergence rate, we further show that each player suffers only an
$O(\log T)$ individual worst-case dynamic regret, providing an exponential
improvement over the previous state-of-the-art $O(\sqrt{T})$ bound.
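For intuition, here is a minimal sketch of the classic optimistic gradient (OG) update that AOG accelerates; it is not the AOG algorithm itself, whose acceleration mechanism the abstract does not spell out. The operator `F`, the step size `eta`, and the toy bilinear game below are illustrative assumptions, not details from the paper.
```python
import numpy as np

def optimistic_gradient(F, x0, eta=0.1, T=1000):
    """Classic OG: x_{t+1} = x_t - eta * (2*F(x_t) - F(x_{t-1}))."""
    x = np.asarray(x0, dtype=float)
    F_prev = F(x)
    for _ in range(T):
        F_curr = F(x)
        # Plain gradient step plus an "optimistic" correction that uses the
        # previous operator value as a prediction of the next one.
        x = x - eta * (2.0 * F_curr - F_prev)
        F_prev = F_curr
    return x

# Toy monotone operator: the bilinear game min_x max_y x^T A y has
# F(x, y) = (A y, -A^T x), whose Jacobian is skew-symmetric, hence monotone.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
F = lambda z: np.concatenate([A @ z[2:], -A.T @ z[:2]])
print(optimistic_gradient(F, np.array([1.0, -1.0, 0.5, -0.5])))
```
In a multi-player smooth monotone game, `F` stacks every player's gradient; the correction term `2*F_curr - F_prev` is what distinguishes OG from plain gradient descent and underlies its last-iterate guarantees.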
Related papers
- Fast Last-Iterate Convergence of Learning in Games Requires Forgetful Algorithms [71.73971094342349]
Self-play via online learning is one of the premier ways to solve large-scale two-player zero-sum games.
We show that OMWU offers several advantages including logarithmic dependence on the size of the payoff matrix.
We prove that a broad class of algorithms that do not forget the past quickly all suffer the same issue.
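For reference, a minimal sketch of one step of optimistic multiplicative weights (OMWU) on the probability simplex, under an assumed step size `eta` and loss vectors supplied by the game; nothing of the paper's analysis is reproduced here.
```python
import numpy as np

def omwu_step(x, loss_curr, loss_prev, eta=0.1):
    """One OMWU step: x_{t+1}(a) is proportional to
    x_t(a) * exp(-eta * (2*loss_t(a) - loss_{t-1}(a)))."""
    logits = np.log(x) - eta * (2.0 * loss_curr - loss_prev)
    logits -= logits.max()          # subtract max for numerical stability
    x_next = np.exp(logits)
    return x_next / x_next.sum()    # renormalize onto the simplex
```
The subtracted `loss_prev` term is the quick discarding of the stale prediction; per the paper's thesis, updates that do not forget the past quickly in this sense fail to converge fast in the last iterate.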
arXiv Detail & Related papers (2024-06-15T13:26:17Z) - Last-Iterate Convergence Properties of Regret-Matching Algorithms in Games [72.43065138581589]
We study the last-iterate convergence properties of various popular variants of RM$+$.
We show numerically that several practical variants such as simultaneous RM$+$, alternating RM$+$, and predictive RM$+$, all lack last-iterate convergence guarantees.
We then prove that recent variants of these algorithms based on a smoothing technique do enjoy last-iterate convergence.
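A minimal sketch of one step of vanilla RM$+$ for a single player observing a payoff vector; the smoothed and predictive variants discussed above modify this update and are not reproduced here. The names `Q` and `payoffs` are illustrative.
```python
import numpy as np

def rm_plus_step(Q, payoffs):
    """One vanilla RM+ step: play proportionally to the accumulated
    positive regrets Q, then update Q with thresholding at zero."""
    n = len(Q)
    strategy = Q / Q.sum() if Q.sum() > 0 else np.full(n, 1.0 / n)
    instant_regret = payoffs - strategy @ payoffs  # regret of each action
    Q_next = np.maximum(Q + instant_regret, 0.0)   # thresholding: the '+' in RM+
    return Q_next, strategy
```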
arXiv Detail & Related papers (2023-11-01T17:34:58Z) - Adaptive, Doubly Optimal No-Regret Learning in Strongly Monotone and Exp-Concave Games with Gradient Feedback [75.29048190099523]
Online gradient descent (OGD) is well known to be doubly optimal under strong convexity or monotonicity assumptions.
In this paper, we design a fully adaptive OGD algorithm, \textsf{AdaOGD}, that does not require a priori knowledge of these parameters.
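To see what the adaptivity buys, here is the textbook non-adaptive OGD baseline under strong monotonicity, which needs the modulus `lam` up front; this sketch is the standard schedule, not \textsf{AdaOGD} itself.
```python
import numpy as np

def ogd(grad, x0, lam, T=1000):
    """Plain online gradient descent with the classic step size
    eta_t = 1 / (lam * t), which presumes the strong-monotonicity
    modulus lam is known a priori."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, T + 1):
        x = x - (1.0 / (lam * t)) * grad(x)
    return x
```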
arXiv Detail & Related papers (2023-10-21T18:38:13Z) - Policy Optimization for Markov Games: Unified Framework and Faster Convergence [81.3266426402464]
We show that the state-wise average policy of this algorithm converges to an approximate Nash equilibrium (NE) of the game.
We extend this algorithm to multi-player general-sum Markov games and show a $\widetilde{\mathcal{O}}(T^{-1/2})$ convergence rate to coarse correlated equilibria (CCE).
arXiv Detail & Related papers (2022-06-06T14:23:13Z) - Doubly Optimal No-Regret Online Learning in Strongly Monotone Games with Bandit Feedback [29.553652241608997]
We study the class of \textit{smooth and strongly monotone} games and optimal no-regret learning therein.
We first construct a new bandit learning algorithm and show that it achieves the single-agent optimal regret of $\tilde{\Theta}(n\sqrt{T})$.
Our results thus settle this open problem and contribute to the broad landscape of bandit game-theoretical learning.
arXiv Detail & Related papers (2021-12-06T08:27:54Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to find an appropriate stationary point in a polynomial number of iterations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Almost Optimal Algorithms for Two-player Markov Games with Linear Function Approximation [92.99933928528797]
We study reinforcement learning for two-player zero-sum Markov games with simultaneous moves.
We propose an algorithm Nash-UCRL-VTR based on the principle "Optimism-in-Face-of-Uncertainty".
We show that Nash-UCRL-VTR can provably achieve an $\tilde{O}(dH\sqrt{T})$ regret, where $d$ is the linear function dimension and $H$ is the horizon.
arXiv Detail & Related papers (2021-02-15T09:09:16Z) - Tight last-iterate convergence rates for no-regret learning in multi-player games [31.604950465209335]
We show that the optimistic gradient (OG) algorithm with a constant step-size, which is no-regret, achieves a last-iterate rate of $O(1/\sqrt{T})$ in smooth monotone games.
We also show that the $O(1/\sqrt{T})$ rate is tight for all $p$-SCLI algorithms, which include OG as a special case.
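In standard notation (ours, not necessarily the paper's), the constant step-size OG update referenced above reads, in its unconstrained form:
```latex
% Optimistic gradient (OG) with constant step size \eta and monotone
% operator F, written in its unconstrained form for simplicity:
\[
  x_{t+1} \;=\; x_t - \eta\bigl(2F(x_t) - F(x_{t-1})\bigr).
\]
```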
arXiv Detail & Related papers (2020-10-26T17:06:19Z)