Playing Against the Board: Rolling Horizon Evolutionary Algorithms
Against Pandemic
- URL: http://arxiv.org/abs/2103.15090v1
- Date: Sun, 28 Mar 2021 09:22:10 GMT
- Title: Playing Against the Board: Rolling Horizon Evolutionary Algorithms
Against Pandemic
- Authors: Konstantinos Sfikas and Antonios Liapis
- Abstract summary: This paper contends that collaborative board games pose a different challenge to artificial intelligence as it must balance short-term risk mitigation with long-term winning strategies.
This paper focuses on the exemplary collaborative board game Pandemic and presents a rolling horizon evolutionary algorithm for this game.
- Score: 3.223284371460913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Competitive board games have provided a rich and diverse testbed for
artificial intelligence. This paper contends that collaborative board games
pose a different challenge to artificial intelligence as it must balance
short-term risk mitigation with long-term winning strategies. Collaborative
board games task all players to coordinate their different powers or pool their
resources to overcome an escalating challenge posed by the board and a
stochastic ruleset. This paper focuses on the exemplary collaborative board
game Pandemic and presents a rolling horizon evolutionary algorithm designed
specifically for this game. The complex way in which the Pandemic game state
changes in a stochastic but predictable way required a number of specially
designed forward models, macro-action representations for decision-making, and
repair functions for the genetic operations of the evolutionary algorithm.
Variants of the algorithm which explore optimistic versus pessimistic game
state evaluations, different mutation rates and event horizons are compared
against a baseline hierarchical policy agent. Results show that an evolutionary
approach via short-horizon rollouts can better account for the future dangers
that the board may introduce, and guard against them. Results highlight the
types of challenges that collaborative board games pose to artificial
intelligence, especially for handling multi-player collaboration.
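The core loop of a rolling horizon evolutionary algorithm can be sketched as follows. This is a minimal illustration, not the paper's implementation: the forward model, evaluation function, and macro-action set below are toy stand-ins for the game-specific components (forward models, macro-action representations, repair functions) that the abstract describes.

```python
import random

HORIZON = 5        # macro-actions per plan (the "event horizon")
POP_SIZE = 10
GENERATIONS = 20
MUTATION_RATE = 0.2
ACTIONS = [0, 1, 2, 3]  # abstract macro-action ids (toy stand-ins)

def forward_model(state, action):
    """Toy deterministic stand-in for a game forward model."""
    return state + (action - 1.5)

def evaluate(state):
    """Toy heuristic: prefer states close to zero."""
    return -abs(state)

def rollout(state, plan):
    """Simulate a plan of macro-actions and score the final state."""
    for action in plan:
        state = forward_model(state, action)
    return evaluate(state)

def mutate(plan):
    """Resample each gene (macro-action) with probability MUTATION_RATE."""
    return [random.choice(ACTIONS) if random.random() < MUTATION_RATE else a
            for a in plan]

def rhea_step(state):
    """Evolve short-horizon plans, return the first action of the best plan."""
    population = [[random.choice(ACTIONS) for _ in range(HORIZON)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=lambda p: rollout(state, p),
                        reverse=True)
        elite = ranked[:POP_SIZE // 2]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(POP_SIZE - len(elite))]
    best = max(population, key=lambda p: rollout(state, p))
    return best[0]  # execute only the first macro-action, then re-plan

action = rhea_step(state=3.0)
```

Only the first macro-action of the winning plan is executed before the horizon "rolls" forward and the search repeats from the new state, which is what lets short-horizon rollouts react to stochastic board events.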
Related papers
- Evolutionary Tabletop Game Design: A Case Study in the Risk Game [0.1474723404975345]
This work proposes an extension of the approach for tabletop games, evaluating the process by generating variants of Risk.
We achieved this using a genetic algorithm to evolve the chosen parameters, as well as a rules-based agent to test the games.
Results show the creation of new variations of the original game with smaller maps, resulting in shorter matches.
arXiv Detail & Related papers (2023-10-30T20:53:26Z)
- Opponent Modeling in Multiplayer Imperfect-Information Games [1.024113475677323]
We present an approach for opponent modeling in multiplayer imperfect-information games.
We run experiments against a variety of real opponents and exact Nash equilibrium strategies in three-player Kuhn poker.
Our algorithm significantly outperforms all of the agents, including the exact Nash equilibrium strategies.
arXiv Detail & Related papers (2022-12-12T16:48:53Z)
- Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of continuous-action games without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z)
- A Survey of Decision Making in Adversarial Games [8.489977267389934]
In many practical applications, such as poker, chess, pursuit-evasion, drug interdiction, coast guard, cyber-security, and national defense, players often take apparently adversarial stances.
This paper provides a systematic survey on three main game models widely employed in adversarial games.
arXiv Detail & Related papers (2022-07-16T16:04:01Z)
- Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity [49.68758494467258]
We study how to construct diverse populations of agents by carefully structuring how individuals within a population interact.
Our approach is based on interaction graphs, which control the flow of information between agents during training.
We provide evidence for the importance of diversity in multi-agent training and analyse the effect of applying different interaction graphs on the training trajectories, diversity and performance of populations in a range of games.
arXiv Detail & Related papers (2021-10-08T11:29:52Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes)
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
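The quality-diversity search mentioned above can be illustrated with a minimal MAP-Elites loop. This is a toy sketch, not the paper's Tribes setup: the fitness function, behaviour descriptor, and genome encoding are hypothetical stand-ins chosen only to show the archive mechanics.

```python
import random

GRID = 10      # behaviour space discretised into GRID cells
archive = {}   # behaviour cell index -> (fitness, genome)

def fitness(genome):
    """Toy fitness: reward genomes whose genes sum close to 5."""
    return -abs(sum(genome) - 5.0)

def descriptor(genome):
    """Toy behaviour descriptor: mean gene value, binned into GRID cells."""
    mean = sum(genome) / len(genome)
    return min(GRID - 1, int(mean * GRID))

def mutate(genome):
    """Gaussian perturbation of each gene, clamped to [0, 1]."""
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in genome]

def try_insert(genome):
    """Keep the genome if it is the best seen in its behaviour cell."""
    cell = descriptor(genome)
    if cell not in archive or fitness(genome) > archive[cell][0]:
        archive[cell] = (fitness(genome), genome)

random.seed(0)
# Seed the archive with random genomes ...
for _ in range(200):
    try_insert([random.random() for _ in range(8)])
# ... then repeatedly pick an elite, mutate it, and re-insert.
for _ in range(2000):
    parent = random.choice(list(archive.values()))[1]
    try_insert(mutate(parent))
```

After the loop, the archive holds one elite per behaviour cell: a set of solutions that is diverse along the descriptor axis while each entry is as fit as found for its niche, which is the mechanism the paper exploits to obtain distinct but competitive play-styles.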
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Collaborative Agent Gameplay in the Pandemic Board Game [3.223284371460913]
Pandemic is an exemplar collaborative board game where all players coordinate to overcome challenges posed by events occurring during the game's progression.
This paper proposes an artificial agent which controls all players' actions and balances chances of winning versus risk of losing in this highly stochastic environment.
Results show that the proposed algorithm can find winning strategies more consistently in different games of varying difficulty.
arXiv Detail & Related papers (2021-03-21T13:18:20Z)
- Learning to Play Imperfect-Information Games by Imitating an Oracle Planner [77.67437357688316]
We consider learning to play multiplayer imperfect-information games with simultaneous moves and large state-action spaces.
Our approach is based on model-based planning.
We show that the planner is able to discover efficient playing strategies in the games of Clash Royale and Pommerman.
arXiv Detail & Related papers (2020-12-22T17:29:57Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.