From Average-Iterate to Last-Iterate Convergence in Games: A Reduction and Its Applications
- URL: http://arxiv.org/abs/2506.03464v1
- Date: Wed, 04 Jun 2025 00:24:14 GMT
- Title: From Average-Iterate to Last-Iterate Convergence in Games: A Reduction and Its Applications
- Authors: Yang Cai, Haipeng Luo, Chen-Yu Wei, Weiqiang Zheng
- Abstract summary: We show that for a large family of games, there exists a simple black-box reduction that transforms the average iterates of an uncoupled learning dynamics into the last iterates of a new uncoupled learning dynamics. We obtain new state-of-the-art last-iterate convergence rates for uncoupled learning dynamics in two-player zero-sum normal-form games.
- Score: 44.95137108337898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The convergence of online learning algorithms in games under self-play is a fundamental question in game theory and machine learning. Among various notions of convergence, last-iterate convergence is particularly desirable, as it reflects the actual decisions made by the learners and captures the day-to-day behavior of the learning dynamics. While many algorithms are known to converge in the average-iterate, achieving last-iterate convergence typically requires considerably more effort in both the design and the analysis of the algorithm. Somewhat surprisingly, we show in this paper that for a large family of games, there exists a simple black-box reduction that transforms the average iterates of an uncoupled learning dynamics into the last iterates of a new uncoupled learning dynamics, thus also providing a reduction from last-iterate convergence to average-iterate convergence. Our reduction applies to games where each player's utility is linear in both their own strategy and the joint strategy of all opponents. This family includes two-player bimatrix games and generalizations such as multi-player polymatrix games. By applying our reduction to the Optimistic Multiplicative Weights Update algorithm, we obtain new state-of-the-art last-iterate convergence rates for uncoupled learning dynamics in two-player zero-sum normal-form games: (1) an $O(\frac{\log d}{T})$ last-iterate convergence rate under gradient feedback, representing an exponential improvement in the dependence on the dimension $d$ (i.e., the maximum number of actions available to either player); and (2) an $\widetilde{O}(d^{\frac{1}{5}} T^{-\frac{1}{5}})$ last-iterate convergence rate under bandit feedback, improving upon the previous best rates of $\widetilde{O}(\sqrt{d} T^{-\frac{1}{8}})$ and $\widetilde{O}(\sqrt{d} T^{-\frac{1}{6}})$.
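The paper's actual reduction is more refined than simple averaging, but the basic connection it exploits can be illustrated with a toy sketch: a dynamic whose last iterate is the running average of a base no-regret dynamic's iterates converges in the last iterate whenever the base dynamic converges in the average iterate. The sketch below (function names, the matching-pennies payoff, and all parameters are my own illustrative choices, not the paper's) runs plain Multiplicative Weights Update in self-play under gradient feedback and tracks the running average:

```python
import math

def mwu_step(x, loss, eta):
    """Multiplicative Weights Update: x_i proportional to x_i * exp(-eta * loss_i)."""
    w = [xi * math.exp(-eta * li) for xi, li in zip(x, loss)]
    s = sum(w)
    return [wi / s for wi in w]

# Matching pennies: player 1 minimizes x^T A y, player 2 maximizes it.
# The unique Nash equilibrium is x = y = (1/2, 1/2).
A = [[1, -1], [-1, 1]]

def averaged_selfplay(T=20000, eta=0.02):
    x, y = [0.9, 0.1], [0.2, 0.8]      # base MWU iterates (deliberately off-equilibrium)
    xbar, ybar = list(x), list(y)      # running averages = the "new" dynamic's last iterate
    for t in range(2, T + 1):
        # gradient feedback: each player's loss is linear in the opponent's strategy
        lx = [A[i][0] * y[0] + A[i][1] * y[1] for i in range(2)]
        ly = [-(x[0] * A[0][j] + x[1] * A[1][j]) for j in range(2)]
        x = mwu_step(x, lx, eta)
        y = mwu_step(y, ly, eta)
        # the new dynamic simply plays the running average of the base iterates, so its
        # LAST iterate inherits the base dynamic's AVERAGE-iterate guarantee
        xbar = [(1 - 1 / t) * xb + (1 / t) * xi for xb, xi in zip(xbar, x)]
        ybar = [(1 - 1 / t) * yb + (1 / t) * yi for yb, yi in zip(ybar, y)]
    return xbar, ybar

xbar, ybar = averaged_selfplay()  # both averages end up close to (0.5, 0.5)
```

MWU's individual iterates cycle around the equilibrium of matching pennies, but by the standard regret argument the averaged strategies have a duality gap of roughly $O(\eta + \frac{\log d}{\eta T})$, so the averaging dynamic's last iterate converges even though the base dynamic's does not.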
Related papers
- On Separation Between Best-Iterate, Random-Iterate, and Last-Iterate Convergence of Learning in Games
Non-ergodic convergence of learning dynamics in games is widely studied because of its importance in both theory and practice. Recent work showed that a broad class of learning dynamics, including Optimistic Multiplicative Weights Update (OMWU), can exhibit arbitrarily slow last-iterate convergence. We show that OMWU achieves an $O(T^{-1/6})$ best-iterate convergence rate, in stark contrast to its slow last-iterate convergence in the same class of games.
arXiv Detail & Related papers (2025-03-04T17:49:24Z)
- Fast Last-Iterate Convergence of Learning in Games Requires Forgetful Algorithms
Self-play via online learning is one of the premier ways to solve large-scale two-player zero-sum games. We show that OMWU offers several advantages, including logarithmic dependence on the size of the payoff matrix. We prove that a broad class of algorithms that do not forget the past quickly all suffer the same issue.
arXiv Detail & Related papers (2024-06-15T13:26:17Z)
- Last-Iterate Convergence Properties of Regret-Matching Algorithms in Games
We study last-iterate convergence properties of algorithms for solving two-player zero-sum games based on Regret Matching$^+$ (RM$^+$). Our analysis builds on a new characterization of the geometric structure of the limit points of our algorithms, marking a significant departure from most of the literature on last-iterate convergence.
arXiv Detail & Related papers (2023-11-01T17:34:58Z)
- Doubly Optimal No-Regret Learning in Monotone Games
We propose the first doubly optimal no-regret learning algorithm for smooth monotone games. Our algorithm achieves both (i) the optimal $O(\sqrt{T})$ regret in the adversarial setting under smooth and convex loss functions and (ii) the optimal $O(\frac{1}{T})$ last-iterate convergence rate to a Nash equilibrium.
arXiv Detail & Related papers (2023-01-30T17:55:53Z)
- Global Nash Equilibrium in Non-convex Multi-player Game: Theory and Algorithms
We study the global Nash equilibrium (NE) of non-convex multi-player games: a strategy profile that is acceptable to all players because no one can benefit by deviating from it unilaterally. We develop theory and algorithms for computing such an equilibrium step by step.
arXiv Detail & Related papers (2023-01-19T11:36:50Z) - Policy Optimization for Markov Games: Unified Framework and Faster
Convergence [81.3266426402464]
We show that the state-wise average policy of this algorithm converges to an approximate Nash equilibrium (NE) of the game.
We extend this algorithm to multi-player general Markov Games and show a $mathcalwidetildeO(T-1/2)$ convergence rate to Correlated Equilibria (CCE)
arXiv Detail & Related papers (2022-06-06T14:23:13Z)
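Several of the papers above, like the main paper, analyze the Optimistic Multiplicative Weights Update (OMWU) algorithm. For concreteness, here is a minimal self-play sketch of the standard OMWU update from the literature, $x_{t+1}(i) \propto x_t(i)\exp(-\eta(2\ell_t(i) - \ell_{t-1}(i)))$, on the same matching-pennies game; the function names, game, and parameters are my own illustrative choices, not taken from any of the papers listed:

```python
import math

def omwu_step(x, loss, prev_loss, eta):
    """Optimistic MWU: update against 2*l_t - l_{t-1}, i.e. the latest loss plus a
    one-step prediction that the next loss will resemble it."""
    w = [xi * math.exp(-eta * (2 * l - lp)) for xi, l, lp in zip(x, loss, prev_loss)]
    s = sum(w)
    return [wi / s for wi in w]

# Matching pennies: player 1 minimizes x^T A y, player 2 maximizes it.
A = [[1, -1], [-1, 1]]

def omwu_selfplay(T=20000, eta=0.1):
    x, y = [0.9, 0.1], [0.2, 0.8]
    lx_prev, ly_prev = [0.0, 0.0], [0.0, 0.0]
    for _ in range(T):
        # both losses are computed from the current (pre-update) strategies
        lx = [A[i][0] * y[0] + A[i][1] * y[1] for i in range(2)]
        ly = [-(x[0] * A[0][j] + x[1] * A[1][j]) for j in range(2)]
        x = omwu_step(x, lx, lx_prev, eta)
        y = omwu_step(y, ly, ly_prev, eta)
        lx_prev, ly_prev = lx, ly
    return x, y  # the last iterates themselves, no averaging

x_last, y_last = omwu_selfplay()  # both drift toward the equilibrium (0.5, 0.5)
```

Unlike plain MWU, whose iterates cycle on this game, the optimistic correction damps the rotation, so the last iterates themselves approach the equilibrium; this is the last-iterate behavior whose rates the papers above quantify.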