Empirical Analysis of Fictitious Play for Nash Equilibrium Computation in Multiplayer Games
- URL: http://arxiv.org/abs/2001.11165v9
- Date: Wed, 26 Jun 2024 22:27:36 GMT
- Title: Empirical Analysis of Fictitious Play for Nash Equilibrium Computation in Multiplayer Games
- Authors: Sam Ganzfried
- Abstract summary: While fictitious play is guaranteed to converge to Nash equilibrium in certain game classes, such as two-player zero-sum games, it is not guaranteed to converge in non-zero-sum and multiplayer games.
We show that fictitious play in fact leads to better Nash equilibrium approximation than counterfactual regret minimization across a variety of game classes and sizes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While fictitious play is guaranteed to converge to Nash equilibrium in certain game classes, such as two-player zero-sum games, it is not guaranteed to converge in non-zero-sum and multiplayer games. We show that fictitious play in fact leads to better Nash equilibrium approximation than (counterfactual) regret minimization, which has recently produced superhuman play for multiplayer poker, across a variety of game classes and sizes. We also show that when fictitious play is run several times with random initializations, it is able to solve several known challenge problems on which the standard version is known not to converge, including Shapley's classic counterexample. These provide some of the first positive results for fictitious play in these settings, despite the fact that the worst-case theoretical results are negative.
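To make the procedure in the abstract concrete, the following sketch runs fictitious play with random initializations on a two-player bimatrix game and measures how close the resulting average strategy profile is to a Nash equilibrium. The payoff matrices (one standard presentation of Shapley's counterexample), the `fictitious_play` and `nash_gap` helpers, the restart rule of keeping the lowest-gap run, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of fictitious play with random restarts on a two-player
# bimatrix game; payoffs and helper names are illustrative assumptions.
import numpy as np

# One standard presentation of Shapley's classic 3x3 counterexample:
# player 1 maximizes A, player 2 maximizes B.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
B = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

def fictitious_play(A, B, iters=10000, rng=None):
    """Run fictitious play; return the empirical average strategies."""
    rng = np.random.default_rng(rng)
    # Random initialization: start each player's action counts at a random
    # point on the simplex rather than at a pure action.
    counts1 = rng.dirichlet(np.ones(A.shape[0]))
    counts2 = rng.dirichlet(np.ones(A.shape[1]))
    for _ in range(iters):
        x = counts1 / counts1.sum()   # player 1's empirical strategy
        y = counts2 / counts2.sum()   # player 2's empirical strategy
        # Each player best-responds to the opponent's empirical strategy.
        counts1[np.argmax(A @ y)] += 1
        counts2[np.argmax(B.T @ x)] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

def nash_gap(A, B, x, y):
    """Max gain any player gets by deviating from (x, y); 0 at an exact NE."""
    gain1 = np.max(A @ y) - x @ A @ y
    gain2 = np.max(B.T @ x) - x @ B @ y
    return max(gain1, gain2)

# Several random restarts; keep the profile with the smallest NE gap.
best = min((fictitious_play(A, B, rng=seed) for seed in range(10)),
           key=lambda xy: nash_gap(A, B, *xy))
print("approximate NE gap:", nash_gap(A, B, *best))
```

For Shapley's game the unique Nash equilibrium is uniform play by both players, so a small reported gap indicates that the random-restart runs are approximating it well even though classical fictitious play cycles on this game.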
Related papers
- Securing Equal Share: A Principled Approach for Learning Multiplayer Symmetric Games [21.168085154982712]
Equilibria in multiplayer games are neither unique nor non-exploitable.
This paper takes an initial step towards addressing these challenges by focusing on the natural objective of equal share.
We design a series of efficient algorithms, inspired by no-regret learning, that provably attain approximate equal share across various settings.
arXiv Detail & Related papers (2024-06-06T15:59:17Z) - Optimistic Policy Gradient in Multi-Player Markov Games with a Single Controller: Convergence Beyond the Minty Property [89.96815099996132]
We develop a new framework to characterize optimistic policy gradient methods in multi-player games with a single controller.
Our approach relies on a natural generalization of the classical Minty property that we introduce, which we anticipate to have further applications beyond Markov games.
arXiv Detail & Related papers (2023-12-19T11:34:10Z) - On the Convergence of No-Regret Learning Dynamics in Time-Varying Games [89.96815099996132]
We characterize the convergence of optimistic gradient descent (OGD) in time-varying games.
Our framework yields sharp convergence bounds for the equilibrium gap of OGD in zero-sum games.
We also provide new insights on dynamic regret guarantees in static games.
arXiv Detail & Related papers (2023-01-26T17:25:45Z) - Global Nash Equilibrium in Non-convex Multi-player Game: Theory and Algorithms [66.8634598612777]
We show that a Nash equilibrium (NE) is acceptable to all players in a multi-player game, since no player can benefit from deviating unilaterally, and develop the corresponding general theory step by step.
arXiv Detail & Related papers (2023-01-19T11:36:50Z) - Learning Correlated Equilibria in Mean-Field Games [62.14589406821103]
We develop the concepts of Mean-Field correlated and coarse-correlated equilibria.
We show that they can be efficiently learnt in all games, without requiring any additional assumption on the structure of the game.
arXiv Detail & Related papers (2022-08-22T08:31:46Z) - No-Regret Learning in Time-Varying Zero-Sum Games [99.86860277006318]
Learning from repeated play in a fixed zero-sum game is a classic problem in game theory and online learning.
We develop a single parameter-free algorithm that simultaneously enjoys favorable guarantees under three performance measures.
Our algorithm is based on a two-layer structure with a meta-algorithm learning over a group of black-box base-learners satisfying a certain property.
arXiv Detail & Related papers (2022-01-30T06:10:04Z) - Towards convergence to Nash equilibria in two-team zero-sum games [17.4461045395989]
Two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents.
We focus on the solution concept of Nash equilibria (NE)
We show that computing NE for this class of games is hard for the complexity class CLS.
arXiv Detail & Related papers (2021-11-07T21:15:35Z) - Mixed Nash Equilibria in the Adversarial Examples Game [18.181826693937776]
This paper tackles the problem of adversarial examples from a game theoretic point of view.
We study the open question of the existence of mixed Nash equilibria in the zero-sum game formed by the attacker and the classifier.
arXiv Detail & Related papers (2021-02-13T11:47:20Z) - No-regret learning and mixed Nash equilibria: They do not mix [64.37511607254115]
We study the dynamics of "follow-the-regularized-leader" (FTRL).
We show that any Nash equilibrium which is not strict cannot be stable and attracting under FTRL.
This result has significant implications for predicting the outcome of a learning process.
arXiv Detail & Related papers (2020-10-19T13:49:06Z) - Exponential Convergence of Gradient Methods in Concave Network Zero-sum Games [6.129776019898013]
We study the computation of Nash equilibrium in concave network zero-sum games (NZSGs).
We show that various game-theoretic properties of convex-concave two-player zero-sum games are preserved in this generalization.
arXiv Detail & Related papers (2020-07-10T16:56:56Z)