Computing Nash Equilibria in Multiplayer DAG-Structured Stochastic Games
with Persistent Imperfect Information
- URL: http://arxiv.org/abs/2010.13860v2
- Date: Thu, 18 Feb 2021 02:38:24 GMT
- Title: Computing Nash Equilibria in Multiplayer DAG-Structured Stochastic Games
with Persistent Imperfect Information
- Authors: Sam Ganzfried
- Abstract summary: We present an algorithm for approximating Nash equilibrium in multiplayer general-sum games with persistent imperfect information.
Using a new procedure, we are able to demonstrate that our algorithm computes a strategy that closely approximates Nash equilibrium in this game.
- Score: 1.7132914341329848
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many important real-world settings contain multiple players interacting over
an unknown duration with probabilistic state transitions, and are naturally
modeled as stochastic games. Prior research on algorithms for stochastic games
has focused on two-player zero-sum games, games with perfect information, and
games with imperfect information that is local and does not extend between game
states. We present an algorithm for approximating Nash equilibrium in
multiplayer general-sum stochastic games with persistent imperfect information
that extends throughout game play. We experiment on a 4-player
imperfect-information naval strategic planning scenario. Using a new procedure,
we are able to demonstrate that our algorithm computes a strategy that closely
approximates Nash equilibrium in this game.
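The paper's algorithm is not reproduced in this listing. As a minimal, generic illustration of equilibrium approximation (not the authors' method), the sketch below runs regret matching on rock-paper-scissors, a single-state stand-in for one stage of a stochastic game; each player's time-averaged strategy converges toward the game's Nash equilibrium (uniform play). The payoff matrix and all names are assumptions for this example.

```python
import numpy as np

# Row player's payoff in rock-paper-scissors; the column player's payoff
# is the negation (zero-sum stage game used purely for illustration).
PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]])

def regret_matching(iters=20000, seed=0):
    """Return each player's average strategy after regret matching."""
    rng = np.random.default_rng(seed)
    regrets = [np.zeros(3), np.zeros(3)]
    strat_sums = [np.zeros(3), np.zeros(3)]
    for _ in range(iters):
        # Play proportionally to positive regret (uniform if none).
        strats = []
        for r in regrets:
            pos = np.maximum(r, 0)
            strats.append(pos / pos.sum() if pos.sum() > 0 else np.ones(3) / 3)
        actions = [rng.choice(3, p=s) for s in strats]
        for i, s in enumerate(strats):
            strat_sums[i] += s
        # Counterfactual payoffs for every action each player could have taken.
        u0 = PAYOFF[:, actions[1]]
        u1 = -PAYOFF[actions[0], :]
        regrets[0] += u0 - u0[actions[0]]
        regrets[1] += u1 - u1[actions[1]]
    return [s / iters for s in strat_sums]

avg = regret_matching()
# The average strategies approach the uniform Nash equilibrium (1/3, 1/3, 1/3).
```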
Related papers
- Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z) - Hardness of Independent Learning and Sparse Equilibrium Computation in
Markov Games [70.19141208203227]
We consider the problem of decentralized multi-agent reinforcement learning in Markov games.
We show that no algorithm attains no-regret in general-sum games when executed independently by all players.
We show that our lower bounds hold even for the seemingly easier setting in which all agents are controlled by a centralized algorithm.
arXiv Detail & Related papers (2023-03-22T03:28:12Z) - Abstracting Imperfect Information Away from Two-Player Zero-Sum Games [85.27865680662973]
Nayyar et al. (2013) showed that imperfect information can be abstracted away from common-payoff games by having players publicly announce their policies as they play.
This work shows that certain regularized equilibria do not possess the aforementioned non-correspondence problem.
Because these regularized equilibria can be made arbitrarily close to Nash equilibria, our result opens the door to a new perspective to solving two-player zero-sum games.
arXiv Detail & Related papers (2023-01-22T16:54:06Z) - Finding mixed-strategy equilibria of continuous-action games without
gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of a continuous-action game without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
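The entry above concerns solving continuous-action games without gradient information. As a toy, hedged illustration of the gradient-free idea (not the paper's policy-network method), the sketch below runs iterated best-response dynamics in an assumed two-player continuous game with payoffs u_i(x_i, x_j) = -(x_i - 0.5*x_j)^2, whose unique Nash equilibrium is (0, 0); each best response is found by brute-force grid search rather than by differentiation.

```python
import numpy as np

# Discretized action space standing in for the continuous interval [-1, 1].
grid = np.linspace(-1, 1, 201)

def best_response(opponent_action):
    """Gradient-free best response: exhaustively search the action grid."""
    utilities = -(grid - 0.5 * opponent_action) ** 2
    return grid[np.argmax(utilities)]

x = np.array([1.0, -1.0])  # arbitrary starting actions
for _ in range(50):
    x = np.array([best_response(x[1]), best_response(x[0])])
# Iterated gradient-free best responses contract toward the Nash point (0, 0).
```

This contraction works here because each best-response map halves the opponent's action; general continuous games need the richer machinery (randomized policy networks) that the paper develops.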
arXiv Detail & Related papers (2022-11-29T05:16:41Z) - Learning Correlated Equilibria in Mean-Field Games [62.14589406821103]
We develop the concepts of Mean-Field correlated and coarse-correlated equilibria.
We show that they can be efficiently learnt in all games, without requiring any additional assumption on the structure of the game.
arXiv Detail & Related papers (2022-08-22T08:31:46Z) - Multiplayer Performative Prediction: Learning in Decision-Dependent
Games [18.386569111954213]
This paper formulates a new game theoretic framework for multi-player performative prediction.
We focus on two distinct solution concepts, namely (i) performatively stable equilibria and (ii) Nash equilibria of the game.
We show that under mild assumptions, the performatively stable equilibria can be found efficiently by a variety of algorithms.
arXiv Detail & Related papers (2022-01-10T15:31:10Z) - Towards convergence to Nash equilibria in two-team zero-sum games [17.4461045395989]
Two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents.
We focus on the solution concept of Nash equilibria (NE).
We show that computing NE for this class of games is computationally hard.
arXiv Detail & Related papers (2021-11-07T21:15:35Z) - Algorithm for Computing Approximate Nash Equilibrium in Continuous Games
with Application to Continuous Blotto [1.7132914341329848]
We present a new algorithm for approximating Nash equilibrium strategies in continuous games.
In addition to two-player zero-sum games, our algorithm also applies to multiplayer games and games with imperfect information.
arXiv Detail & Related papers (2020-06-12T19:53:18Z) - From Poincaré Recurrence to Convergence in Imperfect Information
Games: Finding Equilibrium via Regularization [49.368421783733815]
We show how adapting the reward can give strong convergence guarantees in monotone games.
We also show how this reward adaptation technique can be leveraged to build algorithms that converge exactly to the Nash equilibrium.
arXiv Detail & Related papers (2020-02-19T21:36:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.