Observable Perfect Equilibrium
- URL: http://arxiv.org/abs/2210.16506v9
- Date: Sun, 28 Apr 2024 00:35:34 GMT
- Title: Observable Perfect Equilibrium
- Authors: Sam Ganzfried
- Abstract summary: We define a new equilibrium refinement concept for extensive-form games called observable perfect equilibrium.
We prove that observable perfect equilibrium is always guaranteed to exist.
We expect observable perfect equilibrium to be a useful equilibrium refinement concept for modeling many important imperfect-information games of interest in artificial intelligence.
- Score: 0.4895118383237099
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Nash equilibrium has emerged as the central game-theoretic solution concept, many important games contain several Nash equilibria, and we must determine how to select among them in order to create real strategic agents. Several Nash equilibrium refinement concepts have been proposed and studied for sequential imperfect-information games, the most prominent being trembling-hand perfect equilibrium, quasi-perfect equilibrium, and recently one-sided quasi-perfect equilibrium. These concepts are robust to certain arbitrarily small mistakes and are guaranteed to always exist; however, we argue that none of these is the correct concept for developing strong agents in sequential games of imperfect information. We define a new equilibrium refinement concept for extensive-form games called observable perfect equilibrium, in which the solution is robust over trembles in publicly observable action probabilities (but not necessarily over trembles in action probabilities that are unobservable to opposing players). Observable perfect equilibrium correctly captures the assumption that the opponent is playing as rationally as possible given the mistakes that have been observed (while previous solution concepts do not). We prove that observable perfect equilibrium is always guaranteed to exist, and demonstrate that it leads to a different solution than the prior extensive-form refinements in no-limit poker. We expect observable perfect equilibrium to be a useful equilibrium refinement concept for modeling many important imperfect-information games of interest in artificial intelligence.
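One rough way to formalize this description (a sketch based only on the abstract above; the notation is ours and the paper's formal definition may differ in its details) is as a limit of equilibria of games perturbed only along the publicly observable actions:

```latex
% Sketch only: our notation, not necessarily the paper's.
% Let G be an extensive-form game and let A_obs denote the publicly observable
% actions. For epsilon > 0, let G(epsilon) be the perturbed game in which every
% action in A_obs must be played with probability at least epsilon, while
% unobservable action probabilities are left unconstrained. A strategy profile
% sigma* would then be an observable perfect equilibrium if it is a limit point
% of equilibria of such perturbed games:
\[
  \sigma^{*} \;=\; \lim_{k \to \infty} \sigma^{(k)},
  \qquad
  \sigma^{(k)} \in \mathrm{NE}\!\bigl(G(\epsilon_k)\bigr),
  \qquad
  \epsilon_k \downarrow 0 .
\]
```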
Related papers
- Bayes correlated equilibria and no-regret dynamics [9.89901717499058]
This paper explores equilibrium concepts for Bayesian games, which are fundamental models of games with incomplete information.
We focus on communication equilibria, which can be realized by a mediator who gathers each player's private information and then sends correlated recommendations to the players.
We present an efficient algorithm for minimizing untruthful swap regret with a sublinear upper bound, which we prove to be tight up to a multiplicative constant.
arXiv Detail & Related papers (2023-04-11T06:22:51Z)
- Differentiable Arbitrating in Zero-sum Markov Games [59.62061049680365]
We study how to perturb the reward in a two-player zero-sum Markov game to induce a desirable Nash equilibrium, namely arbitrating.
This is a bilevel problem: the lower level requires solving for the Nash equilibrium under a given reward function, which makes the overall problem challenging to optimize in an end-to-end way.
We propose a backpropagation scheme that differentiates through the Nash equilibrium, which provides the gradient feedback for the upper level.
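As a minimal sketch of the general idea of differentiating through a lower-level equilibrium (our own toy construction, not the paper's algorithm for Markov games: an entropy-regularized two-player zero-sum matrix game solved by an unrolled fixed-point iteration in JAX, with hypothetical names throughout):

```python
# Toy sketch: perturb a reward matrix (upper level) so that the induced
# regularized equilibrium (lower level) pushes the row player toward a
# desired strategy. Gradients flow through the unrolled equilibrium solver.
import jax
import jax.numpy as jnp

def regularized_equilibrium(payoff, tau=0.1, steps=200):
    """Approximate entropy-regularized equilibrium of a zero-sum matrix game
    (row player maximizes x^T payoff y, column player minimizes it)."""
    n, m = payoff.shape
    x = jnp.ones(n) / n
    y = jnp.ones(m) / m
    for _ in range(steps):
        # Smoothed ("softmax") best responses; the averaging damps oscillations.
        x_new = jax.nn.softmax(payoff @ y / tau)
        y_new = jax.nn.softmax(-payoff.T @ x / tau)
        x = 0.5 * (x + x_new)
        y = 0.5 * (y + y_new)
    return x, y

def upper_objective(theta, base_payoff, target_x):
    """Upper level: add a reward perturbation theta, solve the lower level,
    and measure how far the induced row strategy is from a target."""
    x, _ = regularized_equilibrium(base_payoff + theta)
    return jnp.sum((x - target_x) ** 2)

base_payoff = jnp.array([[1.0, -1.0], [-1.0, 1.0]])  # matching pennies
target_x = jnp.array([0.8, 0.2])                     # desired row strategy
theta = jnp.zeros_like(base_payoff)

grad_fn = jax.jit(jax.grad(upper_objective))         # gradient w.r.t. theta
for _ in range(100):
    theta = theta - 0.5 * grad_fn(theta, base_payoff, target_x)

print(regularized_equilibrium(base_payoff + theta)[0])
```

Unrolling the inner solver is the simplest way to obtain gradients here; implicit differentiation of the equilibrium conditions is a common alternative when many solver iterations are needed.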
arXiv Detail & Related papers (2023-02-20T16:05:04Z)
- Are Equivariant Equilibrium Approximators Beneficial? [3.947165805405381]
We theoretically characterize the benefits and limitations of equivariant equilibrium approximators.
For the benefits, we show that they enjoy better generalizability than general ones and can achieve better approximations when the payoff distribution is permutation-invariant.
For the limitations, we discuss their drawbacks in terms of equilibrium selection and social welfare.
arXiv Detail & Related papers (2023-01-27T01:11:41Z)
- Abstracting Imperfect Information Away from Two-Player Zero-Sum Games [85.27865680662973]
Nayyar et al. (2013) showed that imperfect information can be abstracted away from common-payoff games by having players publicly announce their policies as they play.
This work shows that certain regularized equilibria do not suffer from the non-correspondence problem that arises when this transformation is applied to two-player zero-sum games.
Because these regularized equilibria can be made arbitrarily close to Nash equilibria, our result opens the door to a new perspective to solving two-player zero-sum games.
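For intuition on how regularized equilibria can approach Nash equilibria, one standard family (used here purely as an illustration; the paper's specific regularization may differ) is the entropy-regularized, or logit quantal response, equilibrium:

```latex
% Illustrative only: one common family of regularized equilibria.
% With temperature tau > 0, each player mixes according to a softmax over
% expected utilities against the other players' strategies,
\[
  \sigma_i(a) \;=\;
  \frac{\exp\!\bigl(u_i(a, \sigma_{-i}) / \tau\bigr)}
       {\sum_{a'} \exp\!\bigl(u_i(a', \sigma_{-i}) / \tau\bigr)},
\]
% and as tau -> 0, limit points of these fixed points are Nash equilibria of
% the unregularized game.
```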
arXiv Detail & Related papers (2023-01-22T16:54:06Z)
- Game-Theoretical Perspectives on Active Equilibria: A Preferred Solution Concept over Nash Equilibria [61.093297204685264]
An effective approach in multiagent reinforcement learning is to consider the learning process of agents and influence their future policies.
This new solution concept, the active equilibrium, is general in the sense that standard solution concepts, such as the Nash equilibrium, are special cases of active equilibria.
We analyze active equilibria from a game-theoretic perspective by closely studying examples where Nash equilibria are known.
arXiv Detail & Related papers (2022-10-28T14:45:39Z)
- How Bad is Selfish Driving? Bounding the Inefficiency of Equilibria in Urban Driving Games [64.71476526716668]
We study the (in)efficiency of any equilibrium players might agree to play.
We obtain guarantees that refine existing bounds on the Price of Anarchy.
Although the obtained guarantees concern open-loop trajectories, we observe efficient equilibria even when agents employ closed-loop policies.
arXiv Detail & Related papers (2022-10-24T09:32:40Z)
- Nash Equilibria and Pitfalls of Adversarial Training in Adversarial Robustness Games [51.90475640044073]
We study adversarial training as an alternating best-response strategy in a 2-player zero-sum game.
While the alternating best-response dynamics of this game may fail to converge, a unique pure Nash equilibrium of the game exists and is provably robust.
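As a toy illustration of the non-convergence phenomenon (our own matching-pennies example, which abstracts away the classifier/attacker structure studied in the paper), alternating pure best responses can cycle indefinitely:

```python
# Toy sketch, not the paper's setup: in matching pennies, alternating pure
# best responses cycle forever, even though the game has a unique (mixed)
# Nash equilibrium at (0.5, 0.5) for both players.
import numpy as np

payoff = np.array([[1.0, -1.0],
                   [-1.0, 1.0]])  # row player's payoff; column player gets the negative

row, col = 0, 0
history = []
for _ in range(8):
    row = int(np.argmax(payoff[:, col]))   # row player best-responds to col
    col = int(np.argmin(payoff[row, :]))   # col player best-responds to the new row
    history.append((row, col))

print(history)  # the joint action keeps cycling instead of settling down
```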
arXiv Detail & Related papers (2022-10-23T03:21:01Z)
- Turbocharging Solution Concepts: Solving NEs, CEs and CCEs with Neural Equilibrium Solvers [22.85979978964773]
Solution concepts such as Nash Equilibria, Correlated Equilibria, and Coarse Correlated Equilibria are useful components for many multiagent machine learning algorithms.
We introduce the Neural Equilibrium Solver, which utilizes a special equivariant neural network architecture to approximately solve the space of all games of fixed shape, buying speed and determinism.
arXiv Detail & Related papers (2022-10-17T17:00:31Z)
- Safe Equilibrium [1.7132914341329848]
The standard game-theoretic solution concept, Nash equilibrium, assumes that all players behave rationally.
We propose a new solution concept called safe equilibrium that models opponents as behaving rationally with a specified probability.
We prove that a safe equilibrium exists in all strategic-form games and that computing one is PPAD-hard.
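A rough sketch of the underlying condition in a two-player game (our notation, paraphrasing the abstract; the paper's formal definition is more general and more careful) is:

```latex
% Sketch only: our paraphrase for two players. Let p be the probability with
% which player i's opponent is assumed to behave rationally, i.e., to play the
% equilibrium strategy sigma_{-i}; with the remaining probability 1 - p the
% opponent may play arbitrarily, so player i guards against the worst case:
\[
  \sigma_i \;\in\; \arg\max_{\sigma_i'}
  \Bigl[\, p \, u_i\bigl(\sigma_i', \sigma_{-i}\bigr)
        \;+\; (1-p) \, \min_{\tilde{\sigma}_{-i}} u_i\bigl(\sigma_i', \tilde{\sigma}_{-i}\bigr) \Bigr]
  \quad \text{for each player } i .
\]
% Note that p = 1 recovers the Nash equilibrium condition and p = 0 recovers a
% maximin strategy.
```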
arXiv Detail & Related papers (2022-01-12T01:45:51Z)
- Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games [78.65798135008419]
It remains largely open how to learn the Stackelberg equilibrium in general-sum games efficiently from samples.
This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium in two-player turn-based general-sum games.
arXiv Detail & Related papers (2021-02-23T05:11:07Z)
- Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information [32.384868685390906]
We examine the Nash equilibrium convergence properties of no-regret learning in general N-player games.
We establish a comprehensive equivalence between the stability of a Nash equilibrium and its support.
This equivalence provides a clear refinement criterion for predicting the day-to-day behavior of no-regret learning in games.
arXiv Detail & Related papers (2021-01-12T18:55:11Z)