Evolutionary Approach to Collectible Card Game Arena Deckbuilding using
Active Genes
- URL: http://arxiv.org/abs/2001.01326v2
- Date: Wed, 13 May 2020 12:27:51 GMT
- Title: Evolutionary Approach to Collectible Card Game Arena Deckbuilding using
Active Genes
- Authors: Jakub Kowalski, Radosław Miernik
- Abstract summary: In the arena game mode, before each match, a player has to construct their deck by choosing cards one by one from previously unknown options.
We propose a variant of the evolutionary algorithm that uses a concept of an active gene to reduce the range of the operators only to generation-specific subsequences of the genotype.
- Score: 1.027974860479791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we evolve a card-choice strategy for the arena mode of Legends
of Code and Magic, a programming game inspired by popular collectible card
games like Hearthstone or TES: Legends. In the arena game mode, before each
match, a player has to construct their deck by choosing cards one by one from
previously unknown options. Such a scenario is difficult from the optimization
point of view: not only is the fitness function non-deterministic, but its
value, even for a given problem instance, cannot be calculated directly and
can only be estimated with simulation-based approaches. We propose
a variant of the evolutionary algorithm that uses a concept of an active gene
to reduce the range of the operators only to generation-specific subsequences
of the genotype. Thus, we batched the learning process and constrained
evolutionary updates to the cards relevant for the particular draft, without
forgetting the knowledge gained from previous tests. We developed and tested various
implementations of this idea, investigating their performance by taking into
account the computational cost of each variant. The experiments show that
some of the introduced active-genes algorithms tend to learn faster and produce
statistically better draft policies than the compared methods.
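The active-gene idea can be illustrated with a minimal sketch (this is an assumption-laden toy, not the authors' implementation): the genotype is a vector of per-card weights driving a greedy draft policy, and in each generation only the genes for cards that actually appeared in that generation's draft options are "active", so mutation and crossover touch only those positions while the rest of the genotype keeps knowledge from earlier generations.

```python
import random

def draft(genotype, options_per_pick):
    """Greedy draft policy: at each pick, take the offered card with the
    highest evolved weight (hypothetical encoding, card id -> gene index)."""
    return [max(opts, key=lambda c: genotype[c]) for opts in options_per_pick]

def crossover_active(a, b, active):
    """Uniform crossover restricted to active genes; inactive genes are
    inherited unchanged from parent `a`, preserving earlier knowledge."""
    child = a[:]
    for g in active:
        if random.random() < 0.5:
            child[g] = b[g]
    return child

def mutate_active(genotype, active, rate=0.3, sigma=0.5):
    """Gaussian mutation applied only to the generation-specific active set."""
    child = genotype[:]
    for g in active:
        if random.random() < rate:
            child[g] += random.gauss(0.0, sigma)
    return child

# Toy run: 20 cards, 5 picks, 3 options per pick.
random.seed(0)
n_cards = 20
options = [random.sample(range(n_cards), 3) for _ in range(5)]
active = {c for opts in options for c in opts}  # cards seen this generation
parent_a = [random.random() for _ in range(n_cards)]
parent_b = [random.random() for _ in range(n_cards)]
child = mutate_active(crossover_active(parent_a, parent_b, active), active)

# Genes outside the active set are untouched by both operators.
assert all(child[g] == parent_a[g] for g in range(n_cards) if g not in active)
```

The point of the restriction is that fitness feedback from a draft says nothing about cards that were never offered, so leaving those genes alone avoids both wasted evaluations and forgetting.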
Related papers
- Optimizing Hearthstone Agents using an Evolutionary Algorithm [0.0]
This paper proposes the use of evolutionary algorithms (EAs) to develop agents who play a card game, Hearthstone.
Agents feature self-learning by means of a competitive coevolutionary training approach.
One of the agents developed through the proposed approach was runner-up (best 6%) in an international Hearthstone Artificial Intelligence (AI) competition.
arXiv Detail & Related papers (2024-10-25T16:49:11Z)
- Evolutionary Tabletop Game Design: A Case Study in the Risk Game [0.1474723404975345]
This work proposes an extension of the approach for tabletop games, evaluating the process by generating variants of Risk.
We achieved this using a genetic algorithm to evolve the chosen parameters, as well as a rules-based agent to test the games.
Results show the creation of new variations of the original game with smaller maps, resulting in shorter matches.
arXiv Detail & Related papers (2023-10-30T20:53:26Z)
- No-Regret Learning in Time-Varying Zero-Sum Games [99.86860277006318]
Learning from repeated play in a fixed zero-sum game is a classic problem in game theory and online learning.
We develop a single parameter-free algorithm that simultaneously enjoys favorable guarantees under three performance measures.
Our algorithm is based on a two-layer structure with a meta-algorithm learning over a group of black-box base-learners satisfying a certain property.
arXiv Detail & Related papers (2022-01-30T06:10:04Z)
- Spatial State-Action Features for General Games [5.849736173068868]
We formulate a design and efficient implementation of spatial state-action features for general games.
These are patterns that can be trained to incentivise or disincentivise actions based on whether or not they match variables of the state in a local area.
We propose an efficient approach for evaluating active features for any given set of features.
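The feature mechanism described in this summary can be sketched as follows (a minimal illustration under assumed encodings, not the paper's implementation): a spatial feature is a pattern of relative offsets with required values, it is "active" for an action when the local board area around that action matches, and active features add their weights to the action's score.

```python
def feature_matches(board, action, pattern):
    """board: dict mapping (x, y) -> piece; action: an (x, y) placement.
    pattern: list of ((dx, dy), required_piece) pairs (hypothetical encoding).
    The feature is active iff every relative cell holds the required piece."""
    ax, ay = action
    return all(board.get((ax + dx, ay + dy)) == piece
               for (dx, dy), piece in pattern)

def score_action(board, action, features):
    """Sum the weights of all features active for this state-action pair."""
    return sum(w for pattern, w in features
               if feature_matches(board, action, pattern))

board = {(1, 0): "X", (0, 1): "O"}
features = [([((1, 0), "X")], 2.0),    # friendly piece to the right: encourage
            ([((0, 1), "O")], -1.0)]   # opponent piece below: discourage
print(score_action(board, (0, 0), features))  # 2.0 + (-1.0) = 1.0
```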
arXiv Detail & Related papers (2022-01-17T13:34:04Z)
- Evolving Evaluation Functions for Collectible Card Game AI [1.370633147306388]
We presented a study regarding two important aspects of evolving feature-based game evaluation functions.
The choice of genome representation and the choice of opponent used to test the model were studied.
We encoded our experiments in a programming game, Legends of Code and Magic, used in Strategy Card Game AI Competition.
arXiv Detail & Related papers (2021-05-03T18:39:06Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
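The quality-diversity idea behind this summary can be sketched with a toy MAP-Elites loop (all names and measures here are illustrative assumptions, not the paper's setup): an archive keeps one elite per behavior bin, where the bin stands in for a play-style descriptor, so the search pressures both diversity across bins and quality within each bin.

```python
import random

def mutate(sol):
    return [g + random.gauss(0.0, 0.2) for g in sol]

def behavior(sol):
    """Hypothetical descriptor: bucket the first gene into one of 5 bins
    (standing in for a play-style measure such as aggressiveness)."""
    return min(4, max(0, int((sol[0] + 1) / 0.4)))

def fitness(sol):
    return -sum(g * g for g in sol)  # toy objective: stay near the origin

random.seed(1)
archive = {}  # behavior bin -> (fitness, solution): one elite per bin
for _ in range(500):
    if archive:
        parent = random.choice(list(archive.values()))[1]
    else:
        parent = [random.uniform(-1, 1) for _ in range(3)]
    child = mutate(parent)
    b = behavior(child)
    if b not in archive or fitness(child) > archive[b][0]:
        archive[b] = (fitness(child), child)

# The archive now holds competitive solutions across distinct behavior bins.
print(sorted(archive))
```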
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Efficient Pure Exploration for Combinatorial Bandits with Semi-Bandit Feedback [51.21673420940346]
Combinatorial bandits generalize multi-armed bandits, where the agent chooses sets of arms and observes a noisy reward for each arm contained in the chosen set.
We focus on the pure-exploration problem of identifying the best arm with fixed confidence, as well as a more general setting, where the structure of the answer set differs from the one of the action set.
Based on a projection-free online learning algorithm for finite polytopes, it is the first computationally efficient algorithm which is asymptotically optimal and has competitive empirical performance.
arXiv Detail & Related papers (2021-01-21T10:35:09Z)
- Faster Algorithms for Optimal Ex-Ante Coordinated Collusive Strategies in Extensive-Form Zero-Sum Games [123.76716667704625]
We focus on the problem of finding an optimal strategy for a team of two players that faces an opponent in an imperfect-information zero-sum extensive-form game.
In that setting, it is known that the best the team can do is sample a profile of potentially randomized strategies (one per player) from a joint (a.k.a. correlated) probability distribution at the beginning of the game.
We provide an algorithm that computes such an optimal distribution by only using profiles where only one of the team members gets to randomize in each profile.
arXiv Detail & Related papers (2020-09-21T17:51:57Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include algorithm's regret guarantees that depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.