On the Impossibility of Convergence of Mixed Strategies with No Regret Learning
- URL: http://arxiv.org/abs/2012.02125v1
- Date: Thu, 3 Dec 2020 18:02:40 GMT
- Title: On the Impossibility of Convergence of Mixed Strategies with No Regret Learning
- Authors: Vidya Muthukumar, Soham Phade, Anant Sahai
- Abstract summary: We study convergence properties of the mixed strategies that result from a general class of optimal no-regret learning strategies.
We consider the class of strategies whose information set at each step is the empirical average of the opponent's realized play.
- Score: 10.515544361834241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study convergence properties of the mixed strategies that result from a
general class of optimal no-regret learning strategies in a repeated game
setting where the stage game is any 2 by 2 competitive game (i.e., a game for
which all Nash equilibria (NE) are completely mixed). We consider the class of
strategies whose information set at each step is the empirical average of the
opponent's realized play (and the step number), which we call mean-based
strategies. We first show that there does not exist any optimal no-regret,
mean-based strategy for player 1 whose mixed strategies converge (in
probability) against an opponent who plays his Nash equilibrium mixed strategy
at each step. Next, we show that this last-iterate divergence necessarily
occurs if player 2 uses any adaptive strategy with a minimal randomness
property. This property is satisfied, for example, by any fixed sequence of
mixed strategies for player 2 that converges to the NE. We conjecture that this
property holds when both players use optimal no-regret learning strategies
against each other, leading to divergence of the mixed strategies with positive
probability. Finally, we show that variants of mean-based strategies using
recency bias, which have yielded last-iterate convergence in deterministic
min-max optimization, continue to exhibit this last-iterate divergence. This
demonstrates a crucial difference in outcomes between strategy updates based on
the opponent's mixed strategies and updates based on the opponent's realized
actions.
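To make the notion of a mean-based, optimal no-regret strategy concrete, below is a minimal sketch (not code from the paper) of one member of that class: Hedge / multiplicative weights, whose update depends only on the empirical average of the opponent's realized play and the step number, run on matching pennies against an opponent who plays her completely mixed Nash equilibrium at every step. The payoff matrix, the step-size schedule, and the optimistic (recency-bias) variant included here are illustrative assumptions.

```python
# A minimal, illustrative sketch (not code from the paper): Hedge /
# multiplicative weights as one example of an optimal no-regret, mean-based
# strategy for player 1 in matching pennies, with player 2 drawing actions
# i.i.d. from her completely mixed Nash equilibrium. The payoff matrix, the
# step-size schedule eta_t = sqrt(log(2)/t), and the "optimistic" recency-bias
# variant (counting the most recent realized payoff twice) are assumptions
# made here for illustration.
import numpy as np

def run(T=10_000, optimistic=False, seed=0):
    rng = np.random.default_rng(seed)
    A = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])        # payoff to player 1 in matching pennies
    opp_counts = np.zeros(2)           # counts of player 2's realized actions
    last_payoff = np.zeros(2)          # payoff vector of the last realized action
    trajectory = []                    # player 1's mixed strategy at each step

    for t in range(1, T + 1):
        # Mean-based information set: the empirical average of the opponent's
        # realized play (equivalently, the action counts) and the step number t.
        cumulative = A @ opp_counts    # cumulative payoff of each pure action
        scores = cumulative + last_payoff if optimistic else cumulative
        eta = np.sqrt(np.log(2) / t)   # standard anytime Hedge step size
        logits = eta * scores
        x = np.exp(logits - logits.max())
        x /= x.sum()                   # player 1's mixed strategy at step t
        trajectory.append(x)

        # Player 2 plays her Nash equilibrium mix (1/2, 1/2) at every step.
        a2 = rng.choice(2, p=[0.5, 0.5])
        opp_counts[a2] += 1
        last_payoff = A[:, a2]

    return np.array(trajectory)

for opt in (False, True):
    traj = run(optimistic=opt)
    # Per the paper's impossibility results, these mixed strategies cannot
    # converge (in probability) to the NE (1/2, 1/2), with or without the
    # recency-bias (optimistic) modification.
    far = np.mean(np.abs(traj[:, 0] - 0.5) > 0.25)
    print(f"optimistic={opt}: last x = {traj[-1]}, "
          f"fraction of steps with |x1 - 1/2| > 1/4: {far:.3f}")
```

In line with the abstract's results, the recorded trajectory should be expected to keep swinging away from (1/2, 1/2) rather than settling there, because the updates are driven by the opponent's realized actions rather than her mixed strategy.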
Related papers
- Paths to Equilibrium in Games [6.812247730094933]
We study sequences of strategies satisfying a pairwise constraint inspired by policy updating in reinforcement learning.
Our analysis reveals the counterintuitive insight that reward-deteriorating strategic updates are key to driving play to equilibrium along a satisficing path.
arXiv Detail & Related papers (2024-03-26T19:58:39Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents must learn decisions that advance their own goals while simultaneously hindering their adversaries' goals.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, how to map the strategies of specific opponents, and how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- ApproxED: Approximate exploitability descent via learned best responses [61.17702187957206]
We study the problem of finding an approximate Nash equilibrium of games with continuous action sets.
We propose two new methods that minimize an approximation of exploitability with respect to the strategy profile (a toy matrix-game sketch of the exploitability-descent idea appears after this list).
arXiv Detail & Related papers (2023-01-20T23:55:30Z)
- Opponent Modeling in Multiplayer Imperfect-Information Games [1.024113475677323]
We present an approach for opponent modeling in multiplayer imperfect-information games.
We run experiments against a variety of real opponents and exact Nash equilibrium strategies in three-player Kuhn poker.
Our algorithm significantly outperforms all of the agents, including the exact Nash equilibrium strategies.
arXiv Detail & Related papers (2022-12-12T16:48:53Z)
- Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of a continuous-action game without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z)
- Provably Efficient Fictitious Play Policy Optimization for Zero-Sum Markov Games with Structured Transitions [145.54544979467872]
We propose and analyze new fictitious play policy optimization algorithms for zero-sum Markov games with structured but unknown transitions.
We prove tight $\widetilde{\mathcal{O}}(\sqrt{K})$ regret bounds after $K$ episodes in a two-agent competitive game scenario.
Our algorithms combine Upper Confidence Bound (UCB)-type optimism with fictitious play within a simultaneous policy optimization framework.
arXiv Detail & Related papers (2022-07-25T18:29:16Z)
- Mixed Strategies for Security Games with General Defending Requirements [37.02840909260615]
The Stackelberg security game is played between a defender and an attacker, where the defender needs to allocate a limited amount of resources to multiple targets.
We propose an efficient, close-to-optimal Patching algorithm that computes mixed strategies using only a few pure strategies.
arXiv Detail & Related papers (2022-04-26T08:56:39Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
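As referenced in the ApproxED entry above, here is a much-simplified, hypothetical analogue of the exploitability-descent idea in a tiny zero-sum matrix game. It is not the paper's method: ApproxED targets continuous-action games and learns approximate best responses, whereas this toy uses exact best responses, a fixed 2 by 2 payoff matrix, and finite-difference gradients, all chosen here purely for illustration.

```python
# A toy, hypothetical analogue of exploitability descent (not the ApproxED
# algorithm itself): in a zero-sum matrix game, parameterize both players'
# mixed strategies with softmax logits and take descent steps on a
# finite-difference gradient of the profile's exploitability.
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])            # payoff to player 1 (matching pennies)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def exploitability(x, y):
    # Sum of both players' best-response gains; zero exactly at a Nash equilibrium.
    return (A @ y).max() - (x @ A).min()

def finite_diff_grad(f, theta, h=1e-5):
    g = np.zeros_like(theta)
    for k in range(theta.size):
        up, dn = theta.copy(), theta.copy()
        up[k] += h
        dn[k] -= h
        g[k] = (f(up) - f(dn)) / (2 * h)
    return g

rng = np.random.default_rng(1)
theta1, theta2 = rng.normal(size=2), rng.normal(size=2)   # random initial logits

for t in range(1, 2001):
    x, y = softmax(theta1), softmax(theta2)
    lr = 0.5 / np.sqrt(t)              # decaying step size for the nonsmooth objective
    theta1 -= lr * finite_diff_grad(lambda th: exploitability(softmax(th), y), theta1)
    theta2 -= lr * finite_diff_grad(lambda th: exploitability(x, softmax(th)), theta2)

x, y = softmax(theta1), softmax(theta2)
print("x =", x, "y =", y, "exploitability =", exploitability(x, y))
```

Because exploitability is nonnegative and vanishes only at a Nash equilibrium, driving it toward zero yields an approximate equilibrium profile; the learned-best-response machinery of ApproxED is what makes this idea workable beyond small matrix games.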
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.