Evolutionary Game-Theoretical Analysis for General Multiplayer
Asymmetric Games
- URL: http://arxiv.org/abs/2206.11114v1
- Date: Wed, 22 Jun 2022 14:06:23 GMT
- Title: Evolutionary Game-Theoretical Analysis for General Multiplayer
Asymmetric Games
- Authors: Xinyu Zhang, Peng Peng, Yushan Zhou, Haifeng Wang, Wenxin Li
- Abstract summary: We fill the gap between the heuristic payoff table and dynamic analysis without any inaccuracy.
We compare our method with the state-of-the-art in some classic games.
- Score: 22.753799819424785
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Evolutionary game theory has been a successful tool to combine classical game
theory with learning-dynamical descriptions in multiagent systems. Given a
symmetric structure among interacting players, many studies have focused on
using a simplified heuristic payoff table as input to analyse the dynamics of
interactions. Nevertheless, even the state-of-the-art method has two
limitations. First, analysing the simplified payoff table introduces
inaccuracy. Second, no existing work can handle 2-population multiplayer
asymmetric games. In this paper, we fill the gap between the heuristic
payoff table and dynamic analysis without any inaccuracy. In addition, we
propose a general framework for $m$ versus $n$ 2-population multiplayer
asymmetric games. Then, we compare our method with the state-of-the-art in some
classic games. Finally, to illustrate our method, we perform empirical
game-theoretical analysis on Wolfpack as well as StarCraft II, both of which
involve complex multiagent interactions.
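The kind of payoff-table-to-dynamics analysis the abstract refers to can be illustrated with standard single-population replicator dynamics. The sketch below is a generic textbook illustration, not the paper's method: the 2x2 payoff matrix (a Hawk-Dove-style anti-coordination game) and all function names are illustrative choices.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of single-population replicator dynamics:
    dx_i/dt = x_i * ((A x)_i - x^T A x)."""
    fitness = A @ x            # expected payoff of each pure strategy
    avg = x @ fitness          # population-average payoff
    return x + dt * x * (fitness - avg)

# Illustrative anti-coordination payoff matrix (not from the paper).
A = np.array([[0.0, 3.0],
              [1.0, 2.0]])

x = np.array([0.9, 0.1])       # initial population mixture
for _ in range(5000):
    x = replicator_step(x, A)
print(x)  # approaches the interior mixed equilibrium [0.5, 0.5]
```

Because the fitness difference vanishes at the interior fixed point, the dynamic settles where both strategies earn equal payoff; for asymmetric multi-population games of the kind the paper studies, one such dynamic per population would be coupled together.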
Related papers
- Imperfect-Recall Games: Equilibrium Concepts and Their Complexity [74.01381499760288]
We investigate optimal decision making under imperfect recall, that is, when an agent forgets information it once held.
In the framework of extensive-form games with imperfect recall, we analyze the computational complexities of finding equilibria in multiplayer settings.
arXiv Detail & Related papers (2024-06-23T00:27:28Z) - A Deep Learning Method for Optimal Investment Under Relative Performance Criteria Among Heterogeneous Agents [2.330509865741341]
Graphon games have been introduced to study games with many players who interact through a weighted interaction graph.
We focus on a graphon game for optimal investment under relative performance criteria, and we propose a deep learning method.
arXiv Detail & Related papers (2024-02-12T01:40:31Z) - Optimistic Policy Gradient in Multi-Player Markov Games with a Single
Controller: Convergence Beyond the Minty Property [89.96815099996132]
We develop a new framework to characterize optimistic policy gradient methods in multi-player games with a single controller.
Our approach relies on a natural generalization of the classical Minty property that we introduce, which we anticipate to have further applications beyond Markov games.
arXiv Detail & Related papers (2023-12-19T11:34:10Z) - Finding mixed-strategy equilibria of continuous-action games without
gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of continuous-action games without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z) - Learning Correlated Equilibria in Mean-Field Games [62.14589406821103]
We develop the concepts of Mean-Field correlated and coarse-correlated equilibria.
We show that they can be efficiently learnt in all games, without requiring any additional assumption on the structure of the game.
arXiv Detail & Related papers (2022-08-22T08:31:46Z) - Independent Learning in Stochastic Games [16.505046191280634]
We present the model of stochastic games for multi-agent learning in dynamic environments.
We focus on the development of simple and independent learning dynamics for games.
We present our recently proposed simple and independent learning dynamics that guarantee convergence in zero-sum games.
arXiv Detail & Related papers (2021-11-23T09:27:20Z) - Towards convergence to Nash equilibria in two-team zero-sum games [17.4461045395989]
Two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents.
We focus on the solution concept of Nash equilibria (NE)
We show that computing NE for this class of games is computationally hard.
arXiv Detail & Related papers (2021-11-07T21:15:35Z) - Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games [78.65798135008419]
It remains vastly open how to learn the Stackelberg equilibrium in general-sum games efficiently from samples.
This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium in two-player turn-based general-sum games.
arXiv Detail & Related papers (2021-02-23T05:11:07Z) - Evolutionary Game Theory Squared: Evolving Agents in Endogenously
Evolving Zero-Sum Games [27.510231246176033]
This paper introduces and analyzes a class of competitive settings in which both the agents and the games they play evolve strategically over time.
Populations of agents compete against each other in a zero-sum competition that itself evolves adversarially to the current population mixture.
Remarkably, despite the chaotic coevolution of agents and games, we prove that the system exhibits a number of regularities.
arXiv Detail & Related papers (2020-12-15T15:54:46Z) - Learning to Play No-Press Diplomacy with Best Response Policy Iteration [31.367850729299665]
We apply deep reinforcement learning methods to Diplomacy, a 7-player board game.
We show that our agents convincingly outperform the previous state-of-the-art, and game theoretic equilibrium analysis shows that the new process yields consistent improvements.
arXiv Detail & Related papers (2020-06-08T14:33:31Z) - Chaos, Extremism and Optimism: Volume Analysis of Learning in Games [55.24050445142637]
We present volume analyses of Multiplicative Weights Updates (MWU) and Optimistic Multiplicative Weights Updates (OMWU) in zero-sum as well as coordination games.
We show that OMWU contracts volume, providing an alternative understanding for its known convergent behavior.
We also prove a no-free-lunch type of theorem, in the sense that when examining coordination games the roles are reversed: OMWU expands volume exponentially fast, whereas MWU contracts.
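The MWU dynamic analysed in that entry can be sketched in a few lines. The matching-pennies payoff matrix, step size, and initial strategies below are illustrative choices, not taken from the paper; the sketch only shows the update rule whose volume behavior the paper studies.

```python
import numpy as np

def mwu_step(x, payoff, eta=0.1):
    """Multiplicative Weights Update: scale each strategy's weight by
    exp(eta * its expected payoff), then renormalize to a distribution."""
    w = x * np.exp(eta * payoff)
    return w / w.sum()

# Matching pennies, a zero-sum game (illustrative choice).
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

x = np.array([0.6, 0.4])  # row player's mixed strategy
y = np.array([0.5, 0.5])  # column player's mixed strategy
for _ in range(1000):
    # Simultaneous update: both right-hand sides use the old (x, y).
    x, y = mwu_step(x, A @ y), mwu_step(y, -A.T @ x)
print(x, y)  # MWU cycles/spirals rather than settling at (0.5, 0.5)
```

In this zero-sum setting the iterates stay on the probability simplex but do not converge to the equilibrium, which is consistent with the volume-expansion result summarized above.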
arXiv Detail & Related papers (2020-05-28T13:47:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.