Optimism Without Regularization: Constant Regret in Zero-Sum Games
- URL: http://arxiv.org/abs/2506.16736v1
- Date: Fri, 20 Jun 2025 04:10:51 GMT
- Title: Optimism Without Regularization: Constant Regret in Zero-Sum Games
- Authors: John Lazarsfeld, Georgios Piliouras, Ryann Sim, Stratis Skoulakis,
- Abstract summary: We show for the first time that similar, optimal rates are achievable without regularization. We prove for two-strategy games that Optimistic Fictitious Play (using any tiebreaking rule) obtains only constant regret.
- Score: 35.34620729951821
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies the optimistic variant of Fictitious Play for learning in two-player zero-sum games. While it is known that Optimistic FTRL -- a regularized algorithm with a bounded stepsize parameter -- obtains constant regret in this setting, we show for the first time that similar, optimal rates are also achievable without regularization: we prove for two-strategy games that Optimistic Fictitious Play (using any tiebreaking rule) obtains only constant regret, providing surprising new evidence of the ability of non-no-regret algorithms to achieve fast learning in games. Our proof technique leverages a geometric view of Optimistic Fictitious Play in the dual space of payoff vectors, where we show a certain energy function of the iterates remains bounded over time. Additionally, we prove a regret lower bound of $\Omega(\sqrt{T})$ for Alternating Fictitious Play. In the unregularized regime, this separates the ability of optimism and alternation in achieving $o(\sqrt{T})$ regret.
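The update rule behind this result is simple enough to simulate directly: at every round, each player best-responds to the opponent's cumulative payoff vector plus one extra copy of the most recent payoff (the optimistic prediction). Below is a minimal illustrative sketch, not code from the paper; the game matrix, horizon, and lowest-index tiebreaking are assumptions chosen for the example, and dropping the optimistic term recovers plain Fictitious Play.

```python
import numpy as np

# Illustrative sketch of Optimistic Fictitious Play in a two-strategy zero-sum
# game (Matching Pennies). The row player maximizes x^T A y, the column player
# minimizes it. Matrix, horizon, and tiebreaking rule are assumptions.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
T = 10_000

x = np.eye(2)[0]          # arbitrary initial pure strategies
y = np.eye(2)[0]
Ux = np.zeros(2)          # cumulative payoff vector of the row player:  sum_s A y_s
Uy = np.zeros(2)          # cumulative loss vector of the column player: sum_s A^T x_s
cum_payoff = 0.0          # realized cumulative payoff of the row player

for t in range(T):
    gx, gy = A @ y, A.T @ x          # this round's payoff/loss vectors
    cum_payoff += x @ gx
    Ux += gx
    Uy += gy
    # Optimistic FP: best-respond to the cumulative vector plus one extra copy
    # of the latest payoff (set that extra term to 0 to recover plain FP).
    x = np.eye(2)[int(np.argmax(Ux + gx))]   # argmax breaks ties toward the lower index
    y = np.eye(2)[int(np.argmin(Uy + gy))]

regret_row = np.max(Ux) - cum_payoff         # external regret of the row player
print(f"row-player regret after {T} rounds: {regret_row:.3f}")
```

Per the paper's theorem, this regret should remain bounded by a constant as $T$ grows, whereas the related work below bounds plain Fictitious Play at $O(\sqrt{T})$.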
Related papers
- Fast and Furious Symmetric Learning in Zero-Sum Games: Gradient Descent as Fictitious Play [39.1873150563222]
This paper investigates the sublinear regret guarantees of two non-no-regret algorithms in zero-sum games. Fictitious Play and Online Gradient Descent can exhibit instability and linear regret due to the lack of regularization. We prove that Fictitious Play with any tiebreaking rule has $O(\sqrt{T})$ regret, establishing a new class of games for which Karlin's Fictitious Play conjecture holds.
arXiv Detail & Related papers (2025-06-16T04:07:15Z) - Cautious Optimism: A Meta-Algorithm for Near-Constant Regret in General Games [46.462843198107144]
We show that no-regret learning acceleration through adaptive pacing of the learners is not an isolated phenomenon. We introduce Cautious Optimism, a framework for substantially faster regularized learning in general games.
arXiv Detail & Related papers (2025-06-05T13:13:48Z) - Achieving Better Regret against Strategic Adversaries [15.51709428653595]
We study online learning problems in which the learner has extra knowledge about the adversary's behaviour.
We propose two new online learning algorithms, Accurate Follow the Regularized Leader (AFTRL) and Prod-Best Response (Prod-BR).
AFTRL achieves $O(1)$ external regret or $O(1)$ forward regret against a no-external-regret adversary.
arXiv Detail & Related papers (2023-02-13T19:34:36Z) - Doubly Optimal No-Regret Learning in Monotone Games [10.760195409078591]
We propose the first doubly optimal no-regret learning algorithm for smooth monotone games.
Our algorithm achieves both (i) the optimal $O(\sqrt{T})$ regret in the adversarial setting under smooth and convex loss functions and (ii) the optimal $O(\frac{1}{T})$ last-iterate convergence rate to a Nash equilibrium.
arXiv Detail & Related papers (2023-01-30T17:55:53Z) - Provably Efficient Fictitious Play Policy Optimization for Zero-Sum
Markov Games with Structured Transitions [145.54544979467872]
We propose and analyze new fictitious play policy optimization algorithms for zero-sum Markov games with structured but unknown transitions.
We prove tight $\widetilde{\mathcal{O}}(\sqrt{K})$ regret bounds after $K$ episodes in a two-agent competitive game scenario.
Our algorithms feature a combination of Upper Confidence Bound (UCB)-type optimism and fictitious play under the scope of simultaneous policy optimization.
arXiv Detail & Related papers (2022-07-25T18:29:16Z) - Near-Optimal No-Regret Learning for General Convex Games [121.50979258049135]
We show that near-optimal regret guarantees can be obtained for general convex and compact strategy sets.
Our dynamics are based on an instantiation of optimistic follow-the-regularized-leader over an appropriately lifted space.
Even in those special cases where prior results apply, our algorithm improves over the state-of-the-art regret.
arXiv Detail & Related papers (2022-06-17T12:58:58Z) - Near-Optimal No-Regret Learning for Correlated Equilibria in
Multi-Player General-Sum Games [104.74734408204749]
We show that if all agents in a multi-player general-sum normal-form game employ Optimistic Multiplicative Weights Update (OMWU), the external regret of every player is $O(\textrm{polylog}(T))$ after $T$ repetitions of the game (a sketch of the OMWU update appears after this list).
We extend their result from external regret to internal regret and swap regret, thereby establishing uncoupled learning dynamics that converge to an approximate correlated equilibrium.
arXiv Detail & Related papers (2021-11-11T01:19:53Z) - Near-Optimal No-Regret Learning in General Games [31.293412884145066]
We show that Optimistic Hedge attains $\mathrm{poly}(\log T)$ regret in multi-player general-sum games.
A corollary of our bound is that Optimistic Hedge converges to coarse correlated equilibrium in general games at a rate of $\tilde{O}\left(\frac{1}{T}\right)$.
arXiv Detail & Related papers (2021-08-16T06:53:02Z) - Almost Optimal Algorithms for Two-player Markov Games with Linear
Function Approximation [92.99933928528797]
We study reinforcement learning for two-player zero-sum Markov games with simultaneous moves.
We propose an algorithm, Nash-UCRL-VTR, based on the principle of "Optimism-in-Face-of-Uncertainty".
We show that Nash-UCRL-VTR can provably achieve an $\tilde{O}(dH\sqrt{T})$ regret, where $d$ is the linear function dimension and $H$ is the horizon of the game.
arXiv Detail & Related papers (2021-02-15T09:09:16Z)
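For reference, the Optimistic Multiplicative Weights Update (OMWU) rule named in the correlated-equilibria entry above (and studied as Optimistic Hedge in the general-games entry) admits a one-line multiplicative form: the latest reward vector is counted twice and the previous one subtracted. The sketch below is illustrative only; the step size, action count, and synthetic reward stream are assumptions, not the papers' setups.

```python
import numpy as np

def omwu_step(x, u_t, u_prev, eta=0.1):
    """One OMWU / Optimistic Hedge update for a single player with reward
    vectors: x_{t+1}(i) is proportional to x_t(i) * exp(eta * (2*u_t(i) - u_prev(i)))."""
    logits = np.log(x) + eta * (2.0 * u_t - u_prev)
    w = np.exp(logits - logits.max())       # shift by the max for numerical stability
    return w / w.sum()

# Toy usage against a synthetic reward stream (a stand-in for the rewards
# induced by the other players' strategies in a repeated game).
rng = np.random.default_rng(0)
x = np.full(3, 1.0 / 3.0)                   # uniform start over 3 actions
u_prev = np.zeros(3)
for _ in range(100):
    u_t = rng.uniform(size=3)
    x = omwu_step(x, u_t, u_prev)
    u_prev = u_t
print(x)
```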