A unified stochastic approximation framework for learning in games
- URL: http://arxiv.org/abs/2206.03922v2
- Date: Mon, 3 Jul 2023 14:51:31 GMT
- Title: A unified stochastic approximation framework for learning in games
- Authors: Panayotis Mertikopoulos and Ya-Ping Hsieh and Volkan Cevher
- Abstract summary: We develop a flexible stochastic approximation framework for analyzing the long-run behavior of learning in games (both continuous and finite).
The proposed analysis template incorporates a wide array of popular learning algorithms, including gradient-based methods, exponential/multiplicative weights for learning in finite games, optimistic and bandit variants of the above, etc.
- Score: 82.74514886461257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a flexible stochastic approximation framework for analyzing the
long-run behavior of learning in games (both continuous and finite). The
proposed analysis template incorporates a wide array of popular learning
algorithms, including gradient-based methods, the exponential/multiplicative
weights algorithm for learning in finite games, optimistic and bandit variants
of the above, etc. In addition to providing an integrated view of these
algorithms, our framework further allows us to obtain several new convergence
results, both asymptotic and in finite time, in both continuous and finite
games. Specifically, we provide a range of criteria for identifying classes of
Nash equilibria and sets of action profiles that are attracting with high
probability, and we also introduce the notion of coherence, a game-theoretic
property that includes strict and sharp equilibria, and which leads to
convergence in finite time. Importantly, our analysis applies to both
oracle-based and bandit, payoff-based methods, that is, to settings where
players only observe their realized payoffs.
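To make the algorithm class concrete, here is a minimal sketch of the exponential weights update with bandit, payoff-based feedback for a single player in a finite game. It illustrates the kind of dynamics the framework covers; it is not the paper's own pseudocode, and the function names, step size, and exploration rate are assumptions made for the example.

```python
import numpy as np

def exp_weights_bandit(payoff_fn, n_actions, n_rounds, step=0.1, explore=0.05):
    """Exponential weights with bandit (payoff-based) feedback.

    Illustrative sketch only: the player repeatedly samples an action,
    observes just the realized payoff, forms an importance-weighted payoff
    estimate, and takes a stochastic approximation step on the score vector.
    """
    rng = np.random.default_rng(0)
    scores = np.zeros(n_actions)                  # cumulative payoff estimates
    for _ in range(n_rounds):
        logits = scores - scores.max()            # stabilize the softmax
        probs = np.exp(logits) / np.exp(logits).sum()
        probs = (1 - explore) * probs + explore / n_actions  # forced exploration
        action = rng.choice(n_actions, p=probs)
        payoff = payoff_fn(action)                # only the realized payoff is seen
        estimate = np.zeros(n_actions)
        estimate[action] = payoff / probs[action]  # importance-weighted estimate
        scores += step * estimate                 # stochastic approximation step
    return probs
```

With oracle-based feedback, the importance-weighted estimate would simply be replaced by the full payoff (or gradient) vector, leaving the update otherwise unchanged.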
Related papers
- Graphon Mean Field Games with a Representative Player: Analysis and Learning Algorithm [14.647775453098513]
We prove the existence and uniqueness of the graphon equilibrium under mild assumptions, and show that this equilibrium can be used to construct an approximate solution for finite-player games on networks.
An online oracle-free learning algorithm is developed to solve the equilibrium numerically, and a sample complexity analysis is provided for its convergence.
arXiv Detail & Related papers (2024-05-08T04:44:16Z)
- Independent Learning in Constrained Markov Potential Games [19.083595175045073]
Constrained Markov games offer a formal framework for modeling multi-agent reinforcement learning problems.
We propose an independent policy gradient algorithm for learning approximate constrained Nash equilibria.
arXiv Detail & Related papers (2024-02-27T20:57:35Z)
- The equivalence of dynamic and strategic stability under regularized learning in games [33.74394172275373]
We examine the long-run behavior of regularized, no-regret learning in finite games.
We obtain an equivalence between strategic and dynamic stability.
We show that methods based on entropic regularization converge at a geometric rate.
arXiv Detail & Related papers (2023-11-04T14:07:33Z)
- On the Convergence of No-Regret Learning Dynamics in Time-Varying Games [89.96815099996132]
We characterize the convergence of optimistic gradient descent (OGD) in time-varying games.
Our framework yields sharp convergence bounds for the equilibrium gap of OGD in zero-sum games.
We also provide new insights on dynamic regret guarantees in static games.
arXiv Detail & Related papers (2023-01-26T17:25:45Z)
- Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of a continuous-action game without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z)
- Learning Two-Player Mixture Markov Games: Kernel Function Approximation and Correlated Equilibrium [157.0902680672422]
We consider learning Nash equilibria in two-player zero-sum Markov Games with nonlinear function approximation.
We propose a novel online learning algorithm to find a Nash equilibrium by minimizing the duality gap.
arXiv Detail & Related papers (2022-08-10T14:21:54Z)
- Fictitious Play for Mean Field Games: Continuous Time Analysis and Applications [36.76207130435722]
We first present a theoretical convergence analysis of the continuous-time Fictitious Play process and prove that the induced exploitability decreases at a rate $O(1/t)$.
We thereby provide, for the first time, convergent learning dynamics for Mean Field Games in the presence of common noise.
arXiv Detail & Related papers (2020-07-05T23:31:47Z)
- Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives [97.16266088683061]
The article rigorously establishes why symplectic discretization schemes are important for momentum-based optimization algorithms.
It provides a characterization of algorithms that exhibit accelerated convergence.
arXiv Detail & Related papers (2020-02-28T00:32:47Z)
- Finite-Time Last-Iterate Convergence for Multi-Agent Learning in Games [116.0771177871705]
We characterize the finite-time last-iterate convergence rate for joint OGD learning on $\lambda$-cocoercive games.
We show, via a novel double-stopping time technique, that this adaptive algorithm achieves the same finite-time last-iterate convergence rate as its non-adaptive counterpart (a minimal OGD sketch follows this list).
arXiv Detail & Related papers (2020-02-23T01:46:34Z)
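As referenced above, here is a minimal sketch of the optimistic gradient step analyzed in the OGD entries, run on an unconstrained bilinear zero-sum game $\min_x \max_y x^\top A y$. It is a toy illustration under assumed names and step size, not the setting of the cited papers (which cover time-varying and $\lambda$-cocoercive games).

```python
import numpy as np

def ogda_bilinear(A, n_rounds=500, eta=0.05, seed=0):
    """Optimistic gradient descent/ascent on min_x max_y x^T A y.

    Each player extrapolates with the previous gradient (the 2*g_t - g_{t-1}
    "optimistic" correction). Toy sketch for the unconstrained bilinear case,
    where the unique equilibrium is the origin for invertible A.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    y = rng.standard_normal(A.shape[1])
    gx_prev, gy_prev = A @ y, A.T @ x             # initial gradients
    for _ in range(n_rounds):
        gx, gy = A @ y, A.T @ x                   # simultaneous gradient queries
        x = x - eta * (2 * gx - gx_prev)          # optimistic descent step
        y = y + eta * (2 * gy - gy_prev)          # optimistic ascent step
        gx_prev, gy_prev = gx, gy
    return x, y
```

For small enough `eta` (relative to the spectral norm of `A`), the iterates spiral into the saddle point rather than cycling, which is the last-iterate behavior these papers quantify.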