Optimizing Hearthstone Agents using an Evolutionary Algorithm
- URL: http://arxiv.org/abs/2410.19681v1
- Date: Fri, 25 Oct 2024 16:49:11 GMT
- Title: Optimizing Hearthstone Agents using an Evolutionary Algorithm
- Authors: Pablo García-Sánchez, Alberto Tonda, Antonio J. Fernández-Leiva, Carlos Cotta
- Abstract summary: This paper proposes the use of evolutionary algorithms (EAs) to develop agents who play a card game, Hearthstone.
Agents feature self-learning by means of a competitive coevolutionary training approach.
One of the agents developed through the proposed approach was a runner-up (best 6%) in an international Hearthstone Artificial Intelligence (AI) competition.
- Abstract: Digital collectible card games are not only a growing part of the video game industry, but also an interesting research area for the field of computational intelligence. This game genre allows researchers to deal with hidden information, uncertainty and planning, among other aspects. This paper proposes the use of evolutionary algorithms (EAs) to develop agents who play a card game, Hearthstone, by optimizing a data-driven decision-making mechanism that takes into account all the elements currently in play. Agents feature self-learning by means of a competitive coevolutionary training approach, whereby no external sparring element defined by the user is required for the optimization process. One of the agents developed through the proposed approach was a runner-up (best 6%) in an international Hearthstone Artificial Intelligence (AI) competition. Our proposal performed remarkably well, even when it faced state-of-the-art techniques that attempted to take into account future game states, such as Monte Carlo Tree Search. This outcome shows how evolutionary computation could represent a considerable advantage in developing AIs for collectible card games such as Hearthstone.
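To make the setup the abstract describes more concrete, below is a minimal Python sketch of a competitive coevolutionary EA wrapped around a data-driven move evaluator: each individual is a set of feature weights, and fitness comes from games played within the population itself, so no external sparring agent is needed. The feature names, the weighted-sum scoring, and the simulate_game stub are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical game-state features an agent might weigh when evaluating a move;
# the paper's actual feature set (all elements currently in play) is richer.
FEATURES = ["own_hero_health", "enemy_hero_health", "own_board_attack",
            "enemy_board_attack", "cards_in_hand", "own_minion_count"]

def score_state(state, weights):
    """Data-driven evaluation: a weighted sum over the current game state."""
    return sum(weights[f] * state[f] for f in FEATURES)

def random_individual():
    """An individual is simply one weight per feature."""
    return {f: random.uniform(-1.0, 1.0) for f in FEATURES}

def mutate(weights, sigma=0.1):
    """Gaussian perturbation of one randomly chosen weight."""
    child = dict(weights)
    child[random.choice(FEATURES)] += random.gauss(0.0, sigma)
    return child

def simulate_game(weights_a, weights_b):
    """Stand-in for a real Hearthstone simulator: both agents score the same
    random state and the higher evaluation 'wins', purely so the sketch runs."""
    state = {f: random.uniform(0.0, 30.0) for f in FEATURES}
    return 0 if score_state(state, weights_a) >= score_state(state, weights_b) else 1

def coevolve(pop_size=20, generations=50, games_per_pair=2):
    """Competitive coevolution: fitness is the number of wins against the rest
    of the population, so no externally defined opponent is required."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        wins = [0] * pop_size
        for i in range(pop_size):
            for j in range(i + 1, pop_size):
                for _ in range(games_per_pair):
                    winner = i if simulate_game(population[i], population[j]) == 0 else j
                    wins[winner] += 1
        ranked = sorted(range(pop_size), key=lambda k: wins[k], reverse=True)
        elite = [population[k] for k in ranked[: pop_size // 2]]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return population[0]  # best-ranked individual from the final evaluation
```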
Related papers
- Instruction-Driven Game Engine: A Poker Case Study [53.689520884467065]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game descriptions and generate game-play processes.
We train the IDGE in a curriculum manner that progressively increases its exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, which not only supports a wide range of poker variants but also allows for highly individualized new poker games through natural language inputs.
arXiv Detail & Related papers (2024-10-17T11:16:27Z)
- Mastering the Game of Guandan with Deep Reinforcement Learning and Behavior Regulating [16.718186690675164]
We propose a framework named GuanZero for AI agents to master the game of Guandan.
The main contribution of this paper is about regulating agents' behavior through a carefully designed neural network encoding scheme.
arXiv Detail & Related papers (2024-02-21T07:26:06Z)
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
- Games for Artificial Intelligence Research: A Review and Perspectives [4.44336371847479]
This paper reviews the games and game-based platforms for artificial intelligence research.
It provides guidance on matching particular types of artificial intelligence with suitable games for testing and matching particular needs in games with suitable artificial intelligence techniques.
arXiv Detail & Related papers (2023-04-26T03:42:31Z)
- The Update-Equivalence Framework for Decision-Time Planning [78.44953498421854]
We introduce an alternative framework for decision-time planning that is not based on solving subgames, but rather on update equivalence.
We derive a provably sound search algorithm for fully cooperative games based on mirror descent and a search algorithm for adversarial games based on magnetic mirror descent.
arXiv Detail & Related papers (2023-04-25T20:28:55Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so that a quality-diversity algorithm (MAP-Elites) can be used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
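For context, a quality-diversity algorithm such as MAP-Elites keeps the best-performing parameter set per behavioural niche. The sketch below is a toy illustration of that mechanism only, with a made-up "aggressiveness" descriptor and evaluation function; it is not the cited paper's method.

```python
import random

# MAP-Elites sketch: keep the best parameter set for each behavioural niche
# (here a 1-D "aggressiveness" descriptor binned into N_BINS cells).
N_BINS = 10

def random_params():
    return {"attack_bias": random.random(), "defence_bias": random.random()}

def evaluate(params):
    """Hypothetical evaluation returning (fitness, behaviour descriptor in [0, 1]).
    In the cited work this would come from playing the game with the
    parameterised agent; here it is a toy function so the sketch runs."""
    fitness = 1.0 - abs(params["attack_bias"] - params["defence_bias"])
    aggressiveness = params["attack_bias"]
    return fitness, aggressiveness

def mutate(params):
    child = dict(params)
    key = random.choice(list(child))
    child[key] = min(1.0, max(0.0, child[key] + random.gauss(0.0, 0.1)))
    return child

def map_elites(iterations=1000):
    archive = {}  # cell index -> (fitness, params)
    for _ in range(iterations):
        parent = random_params() if not archive else mutate(random.choice(list(archive.values()))[1])
        fitness, descriptor = evaluate(parent)
        cell = min(N_BINS - 1, int(descriptor * N_BINS))
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, parent)
    return archive  # one competitive parameter set per play-style niche
```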
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
- From Chess and Atari to StarCraft and Beyond: How Game AI is Driving the World of AI [10.80914659291096]
Game AI has established itself as a research area for developing and testing the most advanced forms of AI algorithms.
Advances in Game AI are starting to be extended to areas outside of games, such as robotics or the synthesis of chemicals.
arXiv Detail & Related papers (2020-02-24T18:28:54Z)
- Scalable Psychological Momentum Forecasting in Esports [0.0]
We present ongoing work on an intelligent agent recommendation engine for competitive gaming.
We show that a learned representation of player psychological momentum, and of tilt, can be used to achieve state-of-the-art performance in pre- and post-draft win prediction.
arXiv Detail & Related papers (2020-01-30T11:57:40Z)
- Evolutionary Approach to Collectible Card Game Arena Deckbuilding using Active Genes [1.027974860479791]
In the arena game mode, before each match, a player has to construct their deck by choosing cards one by one from previously unknown options.
We propose a variant of the evolutionary algorithm that uses the concept of an active gene to restrict the genetic operators to generation-specific subsequences of the genotype (a rough sketch of this idea follows the entry below).
arXiv Detail & Related papers (2020-01-05T22:46:08Z)
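One possible reading of the active-gene idea in the last entry, sketched under assumptions (a fixed-length genotype of draft picks, where only picks not yet resolved count as active); this is not the cited paper's actual implementation.

```python
import random

DECK_SIZE = 30          # an arena deck is drafted one pick at a time
OPTIONS_PER_PICK = 3    # each pick offers a small set of candidate cards

def random_genotype():
    """One gene per pick: the index of the chosen card among the offered options."""
    return [random.randrange(OPTIONS_PER_PICK) for _ in range(DECK_SIZE)]

def active_mask(picks_made):
    """Only the genes for picks not yet resolved are 'active' this generation,
    so variation operators leave already-fixed choices untouched."""
    return [i >= picks_made for i in range(DECK_SIZE)]

def mutate(genotype, mask, rate=0.1):
    """Mutation restricted to the active subsequence of the genotype."""
    child = list(genotype)
    for i, active in enumerate(mask):
        if active and random.random() < rate:
            child[i] = random.randrange(OPTIONS_PER_PICK)
    return child

def crossover(parent_a, parent_b, mask):
    """Uniform crossover applied only to the active genes; inactive genes
    are copied unchanged from the first parent."""
    return [
        (random.choice((a, b)) if active else a)
        for a, b, active in zip(parent_a, parent_b, mask)
    ]
```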