Evolutionary Game-Theoretical Analysis for General Multiplayer
Asymmetric Games
- URL: http://arxiv.org/abs/2206.11114v1
- Date: Wed, 22 Jun 2022 14:06:23 GMT
- Title: Evolutionary Game-Theoretical Analysis for General Multiplayer
Asymmetric Games
- Authors: Xinyu Zhang, Peng Peng, Yushan Zhou, Haifeng Wang, Wenxin Li
- Abstract summary: We fill the gap between the heuristic payoff table and dynamic analysis without any inaccuracy.
We compare our method with the state-of-the-art in some classic games.
- Score: 22.753799819424785
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Evolutionary game theory has been a successful tool to combine classical game
theory with learning-dynamical descriptions in multiagent systems. Provided
some symmetric structure among the interacting players, many studies have
focused on using a simplified heuristic payoff table as input to analyse the
dynamics of interactions. Nevertheless, even for the state-of-the-art method,
there are two limitations. First, there is inaccuracy when analysing the simplified
payoff table. Second, no existing work is able to deal with 2-population
multiplayer asymmetric games. In this paper, we fill the gap between the heuristic
payoff table and dynamic analysis without any inaccuracy. In addition, we
propose a general framework for $m$ versus $n$ 2-population multiplayer
asymmetric games. Then, we compare our method with the state-of-the-art in some
classic games. Finally, to illustrate our method, we perform empirical
game-theoretical analysis on Wolfpack as well as StarCraft II, both of which
involve complex multiagent interactions.
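For context, the dynamic analysis referred to in the abstract is commonly carried out with replicator dynamics. The sketch below shows the standard two-population replicator dynamics on an asymmetric bimatrix (1 vs 1) game; it is a minimal illustration, not the paper's framework, and the payoff matrices `A` and `B` are hypothetical stand-ins for payoffs that would, in the empirical-game setting, be estimated from a heuristic payoff table before the paper's generalisation to $m$ versus $n$ multiplayer interactions.

```python
import numpy as np

# Minimal sketch, not the paper's method: standard two-population replicator
# dynamics on an asymmetric bimatrix game. The payoff matrices A and B are
# hypothetical values chosen for illustration.

A = np.array([[3.0, 0.0],
              [5.0, 1.0]])   # payoffs to population 1 (rows)
B = np.array([[3.0, 5.0],
              [0.0, 1.0]])   # payoffs to population 2 (columns)

def replicator_step(x, y, dt=0.01):
    """One Euler step of the two-population replicator dynamics."""
    fx = A @ y                       # fitness of each strategy in population 1
    fy = B.T @ x                     # fitness of each strategy in population 2
    x = x + dt * x * (fx - x @ fx)   # grow strategies with above-average fitness
    y = y + dt * y * (fy - y @ fy)
    return x / x.sum(), y / y.sum()  # renormalise to stay on the simplex

x = np.array([0.5, 0.5])             # initial mixture of population 1
y = np.array([0.5, 0.5])             # initial mixture of population 2
for _ in range(10_000):
    x, y = replicator_step(x, y)
print("population 1:", x)            # converges to the dominant strategy here
print("population 2:", y)
```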
Related papers
- Exploiting Approximate Symmetry for Efficient Multi-Agent Reinforcement Learning [19.543995541149897]
We provide a methodology to extend any finite-player, possibly asymmetric, game to an "induced MFG".
First, we prove that $N$-player dynamic games can be symmetrized and smoothly extended to the infinite-player continuum via explicit Kirszbraun extensions.
For certain games satisfying monotonicity, we prove a sample complexity of $\widetilde{\mathcal{O}}(\varepsilon^{-6})$ for the $N$-agent game to learn an $\varepsilon$-Nash up to symmetrization bias.
arXiv Detail & Related papers (2024-08-27T16:11:20Z) - Imperfect-Recall Games: Equilibrium Concepts and Their Complexity [74.01381499760288]
We investigate optimal decision making under imperfect recall, that is, when an agent forgets information it once held before.
In the framework of extensive-form games with imperfect recall, we analyze the computational complexities of finding equilibria in multiplayer settings.
arXiv Detail & Related papers (2024-06-23T00:27:28Z) - Securing Equal Share: A Principled Approach for Learning Multiplayer Symmetric Games [21.168085154982712]
Equilibria in multiplayer games are neither unique nor non-exploitable.
This paper takes an initial step towards addressing these challenges by focusing on the natural objective of equal share.
We design a series of efficient algorithms, inspired by no-regret learning, that provably attain approximate equal share across various settings.
arXiv Detail & Related papers (2024-06-06T15:59:17Z) - Optimistic Policy Gradient in Multi-Player Markov Games with a Single
Controller: Convergence Beyond the Minty Property [89.96815099996132]
We develop a new framework to characterize optimistic policy gradient methods in multi-player games with a single controller.
Our approach relies on a natural generalization of the classical Minty property that we introduce, which we anticipate to have further applications beyond Markov games.
arXiv Detail & Related papers (2023-12-19T11:34:10Z) - Finding mixed-strategy equilibria of continuous-action games without
gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of a continuous-action game without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z) - Learning Correlated Equilibria in Mean-Field Games [62.14589406821103]
We develop the concepts of Mean-Field correlated and coarse-correlated equilibria.
We show that they can be efficiently learnt in all games, without requiring any additional assumption on the structure of the game.
arXiv Detail & Related papers (2022-08-22T08:31:46Z) - Towards convergence to Nash equilibria in two-team zero-sum games [17.4461045395989]
Two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents.
We focus on the solution concept of Nash equilibria (NE)
We show that computing NE for this class of games is $\textit{hard}$ for the complexity class $\mathrm{CLS}$.
arXiv Detail & Related papers (2021-11-07T21:15:35Z) - Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games [78.65798135008419]
It remains largely open how to learn the Stackelberg equilibrium in general-sum games efficiently from samples.
This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium in two-player turn-based general-sum games.
arXiv Detail & Related papers (2021-02-23T05:11:07Z) - Evolutionary Game Theory Squared: Evolving Agents in Endogenously
Evolving Zero-Sum Games [27.510231246176033]
This paper introduces and analyzes a class of competitive settings where both the agents and the games they play evolve strategically over time.
Populations of agents compete against each other in a zero-sum competition that itself evolves adversarially to the current population mixture.
Remarkably, despite the chaotic coevolution of agents and games, we prove that the system exhibits a number of regularities.
arXiv Detail & Related papers (2020-12-15T15:54:46Z) - Learning to Play No-Press Diplomacy with Best Response Policy Iteration [31.367850729299665]
We apply deep reinforcement learning methods to Diplomacy, a 7-player board game.
We show that our agents convincingly outperform the previous state-of-the-art, and game theoretic equilibrium analysis shows that the new process yields consistent improvements.
arXiv Detail & Related papers (2020-06-08T14:33:31Z) - Chaos, Extremism and Optimism: Volume Analysis of Learning in Games [55.24050445142637]
We present volume analyses of Multiplicative Weights Updates (MWU) and Optimistic Multiplicative Weights Updates (OMWU) in zero-sum as well as coordination games.
We show that OMWU contracts volume, providing an alternative understanding for its known convergent behavior.
We also prove a no-free-lunch type of theorem, in the sense that when examining coordination games the roles are reversed: OMWU expands volume exponentially fast, whereas MWU contracts.
arXiv Detail & Related papers (2020-05-28T13:47:09Z)
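As a companion to the last entry, the sketch below shows the two update rules it compares, MWU and optimistic MWU (OMWU), on a small zero-sum matrix game. The game (rock-paper-scissors), step size, and horizon are illustrative assumptions rather than the paper's experimental setup, and the optimistic update is written in the common "2*u_t - u_{t-1}" single-sequence form.

```python
import numpy as np

# Minimal sketch of MWU vs. optimistic MWU (OMWU) on a zero-sum matrix game.
# Game, step size, and horizon are illustrative choices only.

A = np.array([[ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0]])   # rock-paper-scissors, row player's payoffs
eta = 0.1                            # step size

def mwu_step(x, u):
    """Multiplicative Weights Update: reweight by exp of the current payoff."""
    w = x * np.exp(eta * u)
    return w / w.sum()

def omwu_step(x, u, u_prev):
    """Optimistic MWU: add a look-ahead term built from the previous payoff."""
    w = x * np.exp(eta * (2.0 * u - u_prev))
    return w / w.sum()

x = np.ones(3) / 3                   # row player's mixed strategy
y = np.ones(3) / 3                   # column player's mixed strategy
ux_prev, uy_prev = A @ y, -A.T @ x
for _ in range(2000):
    ux, uy = A @ y, -A.T @ x         # current payoff vectors for both players
    x, y = omwu_step(x, ux, ux_prev), omwu_step(y, uy, uy_prev)
    ux_prev, uy_prev = ux, uy
# Plain MWU (mwu_step) cycles away from the interior equilibrium in this game,
# whereas the optimistic iterates settle near the uniform Nash equilibrium.
print("OMWU row strategy:", x)
print("OMWU column strategy:", y)
```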