A Survey of Decision Making in Adversarial Games
- URL: http://arxiv.org/abs/2207.07971v1
- Date: Sat, 16 Jul 2022 16:04:01 GMT
- Title: A Survey of Decision Making in Adversarial Games
- Authors: Xiuxian Li, Min Meng, Yiguang Hong, and Jie Chen
- Abstract summary: In many practical applications, such as poker, chess, pursuit-evasion, drug interdiction, coast guard operations, cyber-security, and national defense, players often have apparently adversarial stances.
This paper provides a systematic survey on three main game models widely employed in adversarial games.
- Score: 8.489977267389934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Game theory has by now found numerous applications in various fields,
including economics, industry, jurisprudence, and artificial intelligence,
where each player cares only about its own interest, whether in a noncooperative or
cooperative manner, but without obvious malice toward other players. However, in
many practical applications, such as poker, chess, pursuit-evasion, drug
interdiction, coast guard operations, cyber-security, and national defense, players often
take apparently adversarial stances; that is, the selfish actions of each player
inevitably or intentionally inflict loss or wreak havoc on other players. Along
this line, this paper provides a systematic survey on three main game models
widely employed in adversarial games, namely, zero-sum normal-form and
extensive-form games, Stackelberg (security) games, and zero-sum differential
games, from an array of perspectives, including basic knowledge of game models,
(approximate) equilibrium concepts, problem classifications, research
frontiers, (approximate) optimal strategy seeking techniques, prevailing
algorithms, and practical applications. Finally, promising future research
directions are also discussed for relevant adversarial games.
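To make the zero-sum normal-form setting above concrete, the sketch below computes a maximin mixed strategy for the row player of a small payoff matrix by linear programming, one of the classical strategy-seeking techniques the survey covers. This is a standard textbook construction written here for illustration, not code from the survey; the payoff matrix and all names are chosen as examples.

```python
# Maximin strategy for the row player of a zero-sum normal-form game,
# solved as a linear program (standard construction, not from the survey).
import numpy as np
from scipy.optimize import linprog

# Illustrative payoff matrix A: entry A[i, j] is the row player's payoff
# when row plays action i and column plays action j (column receives -A[i, j]).
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])   # rock-paper-scissors

m, n = A.shape
# Variables: x_1..x_m (row mixed strategy) and v (game value).
# Maximize v  subject to  A^T x >= v * 1,  sum(x) = 1,  x >= 0.
c = np.zeros(m + 1)
c[-1] = -1.0                          # linprog minimizes, so minimize -v

# -A^T x + v <= 0  for every column action j
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # sum(x) = 1
b_eq = np.array([1.0])

bounds = [(0, None)] * m + [(None, None)]                # x >= 0, v free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

x, v = res.x[:m], res.x[-1]
print("maximin strategy:", np.round(x, 3), "game value:", round(v, 3))
```

For rock-paper-scissors the solver returns the uniform strategy (1/3, 1/3, 1/3) with game value 0; by the minimax theorem, the column player's optimal strategy solves the symmetric dual program.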
Related papers
- Imperfect-Recall Games: Equilibrium Concepts and Their Complexity [74.01381499760288]
We investigate optimal decision making under imperfect recall, that is, when an agent forgets information it once held.
In the framework of extensive-form games with imperfect recall, we analyze the computational complexities of finding equilibria in multiplayer settings.
arXiv Detail & Related papers (2024-06-23T00:27:28Z)
- Securing Equal Share: A Principled Approach for Learning Multiplayer Symmetric Games [21.168085154982712]
Equilibria in multiplayer games are neither unique nor non-exploitable.
This paper takes an initial step towards addressing these challenges by focusing on the natural objective of equal share.
We design a series of efficient algorithms, inspired by no-regret learning, that provably attain approximate equal share across various settings (a brief no-regret learning sketch follows this list).
arXiv Detail & Related papers (2024-06-06T15:59:17Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents must learn decisions that maximize their own goals and minimize their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, how to map the strategy of specific opponents, and how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Opponent Modeling in Multiplayer Imperfect-Information Games [1.024113475677323]
We present an approach for opponent modeling in multiplayer imperfect-information games.
We run experiments against a variety of real opponents and exact Nash equilibrium strategies in three-player Kuhn poker.
Our algorithm significantly outperforms all of these opponents, including the exact Nash equilibrium strategies.
arXiv Detail & Related papers (2022-12-12T16:48:53Z)
- Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of continuous-action games without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z)
- Learning Correlated Equilibria in Mean-Field Games [62.14589406821103]
We develop the concepts of Mean-Field correlated and coarse-correlated equilibria.
We show that they can be efficiently learnt in all games, without requiring any additional assumption on the structure of the game.
arXiv Detail & Related papers (2022-08-22T08:31:46Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so that a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Playing Against the Board: Rolling Horizon Evolutionary Algorithms Against Pandemic [3.223284371460913]
This paper contends that collaborative board games pose a different challenge to artificial intelligence, since the agent must balance short-term risk mitigation with long-term winning strategies.
This paper focuses on the exemplary collaborative board game Pandemic and presents a rolling horizon evolutionary algorithm for this game.
arXiv Detail & Related papers (2021-03-28T09:22:10Z)
- Collaborative Agent Gameplay in the Pandemic Board Game [3.223284371460913]
Pandemic is an exemplar collaborative board game where all players coordinate to overcome challenges posed by events occurring during the game's progression.
This paper proposes an artificial agent which controls all players' actions and balances chances of winning versus risk of losing in this highly stochastic environment.
Results show that the proposed algorithm can find winning strategies more consistently in different games of varying difficulty.
arXiv Detail & Related papers (2021-03-21T13:18:20Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Learning to Resolve Alliance Dilemmas in Many-Player Zero-Sum Games [22.38765498549914]
We argue that a systematic study of many-player zero-sum games is a crucial element of artificial intelligence research.
Using symmetric zero-sum matrix games, we demonstrate formally that alliance formation may be seen as a social dilemma.
We show how reinforcement learning may be augmented with a peer-to-peer contract mechanism to discover and enforce alliances.
arXiv Detail & Related papers (2020-02-27T10:32:31Z)
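Several of the entries above lean on no-regret learning (the equal-share algorithms and the learning dynamics for correlated equilibria). As referenced in the Securing Equal Share item, here is a minimal, illustrative sketch of regret-matching self-play on a two-player zero-sum matrix game; the example game and all identifiers are assumptions made for the sketch, not taken from any listed paper.

```python
# Regret-matching self-play on a two-player zero-sum matrix game
# (illustrative sketch of no-regret dynamics, not from any listed paper).
import numpy as np

A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])   # rock-paper-scissors, row player's payoffs
n_actions = A.shape[0]

def strategy_from_regrets(regrets):
    """Play proportionally to positive cumulative regret; uniform if none."""
    positive = np.maximum(regrets, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(n_actions, 1.0 / n_actions)

regret_row = np.zeros(n_actions)
regret_col = np.zeros(n_actions)
avg_row = np.zeros(n_actions)
avg_col = np.zeros(n_actions)

T = 20000
for _ in range(T):
    p = strategy_from_regrets(regret_row)
    q = strategy_from_regrets(regret_col)
    avg_row += p
    avg_col += q

    # Expected payoff of each pure action against the opponent's current mix.
    u_row = A @ q            # row player's payoff per action
    u_col = -(A.T @ p)       # column player's payoff per action (zero-sum)

    # Accumulate regret: how much better each pure action would have done.
    regret_row += u_row - p @ u_row
    regret_col += u_col - q @ u_col

print("row average strategy:", np.round(avg_row / T, 3))
print("col average strategy:", np.round(avg_col / T, 3))
```

In two-player zero-sum games the time-averaged strategies of such no-regret dynamics approach minimax play; in general multiplayer games they only guarantee convergence to the set of coarse correlated equilibria, which is part of the motivation for objectives such as equal share.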
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.