Mastering Strategy Card Game (Hearthstone) with Improved Techniques
- URL: http://arxiv.org/abs/2303.05197v2
- Date: Sun, 28 May 2023 14:19:07 GMT
- Title: Mastering Strategy Card Game (Hearthstone) with Improved Techniques
- Authors: Changnan Xiao, Yongxin Zhang, Xuefeng Huang, Qinhan Huang, Jie Chen,
Peng Sun
- Abstract summary: Strategy card games are demanding in intelligent game-play and can be an ideal test-bench for AI.
Previous work combines an end-to-end policy function with optimistic smooth fictitious play.
In this work, we apply such algorithms to Hearthstone, a famous commercial game with more complicated rules and mechanisms.
- Score: 8.399453146308502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Strategy card games are a well-known genre that demands intelligent
game-play and can serve as an ideal test-bench for AI. Previous work combines an
end-to-end policy function with optimistic smooth fictitious play, which
shows promising performance on the strategy card game Legends of Code and
Magic. In this work, we apply such algorithms to Hearthstone, a famous
commercial game with more complicated rules and mechanisms. We
further propose several improved techniques and consequently achieve
significant progress. For a machine-vs-human test, we invite a Hearthstone
streamer whose best rank was top 10 in the official league of the China region,
which is estimated to have millions of players. Our models defeat the human player
in all Best-of-5 tournaments of full games (including both deck building and
battle), showing a strong capability of decision making.
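The "optimistic smooth fictitious play" named in the abstract builds on classic smooth fictitious play, in which each player repeatedly plays a softmax (logit) best response to the opponent's empirical average strategy. The following is a minimal illustrative sketch on rock-paper-scissors, not the paper's implementation; the payoff matrix, temperature, and iteration count are assumptions chosen for illustration:

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (zero-sum).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def smooth_best_response(payoffs, temperature=0.2):
    """Softmax (logit) smoothed best response to a vector of expected payoffs."""
    z = payoffs / temperature
    z -= z.max()                  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def smooth_fictitious_play(A, iters=5000, temperature=0.2):
    n = A.shape[0]
    avg_x = np.ones(n) / n        # empirical average strategy of the row player
    avg_y = np.ones(n) / n        # empirical average strategy of the column player
    for t in range(1, iters + 1):
        # Each player smooth-best-responds to the opponent's empirical average.
        x = smooth_best_response(A @ avg_y, temperature)
        y = smooth_best_response(-A.T @ avg_x, temperature)
        # Incrementally update the running averages.
        avg_x += (x - avg_x) / (t + 1)
        avg_y += (y - avg_y) / (t + 1)
    return avg_x, avg_y

x, y = smooth_fictitious_play(A)
```

By symmetry, the average strategies should drift toward the uniform mixed equilibrium of rock-paper-scissors. The "optimistic" variant used in the line of work above additionally weights the opponent's most recent strategy when forming the response, which tends to speed convergence.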
Related papers
- Learning to Beat ByteRL: Exploitability of Collectible Card Game Agents [3.5036467860577307]
We present preliminary analysis results of ByteRL, the state-of-the-art agent in Legends of Code and Magic and Hearthstone.
Although ByteRL beat a top-10 Hearthstone player from China, we show that its play in Legends of Code and Magic is highly exploitable.
arXiv Detail & Related papers (2024-04-25T15:48:40Z)
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
To further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
- AlphaZero Gomoku [9.434566356382529]
We broaden the use of AlphaZero to Gomoku, an age-old tactical board game also referred to as "Five in a Row".
Our tests demonstrate AlphaZero's versatility in adapting to games other than Go.
arXiv Detail & Related papers (2023-09-04T00:20:06Z)
- Mastering Strategy Card Game (Legends of Code and Magic) via End-to-End Policy and Optimistic Smooth Fictitious Play [11.480308614644041]
We study a two-stage strategy card game Legends of Code and Magic.
We propose an end-to-end policy to address the difficulties that arise in multi-stage games.
Our approach won double championships in the COG2022 competition.
arXiv Detail & Related papers (2023-03-07T17:55:28Z)
- DanZero: Mastering GuanDan Game with Reinforcement Learning [121.93690719186412]
Card game AI has always been a hot topic in the research of artificial intelligence.
In this paper, we develop an AI program for a more complex card game, GuanDan.
We propose DanZero, the first AI program for GuanDan, using reinforcement learning techniques.
arXiv Detail & Related papers (2022-10-31T06:29:08Z)
- Mastering the Game of Stratego with Model-Free Multiagent Reinforcement Learning [86.37438204416435]
Stratego is one of the few iconic board games that Artificial Intelligence (AI) has not yet mastered.
Decisions in Stratego are made over a large number of discrete actions with no obvious link between action and outcome.
DeepNash beats existing state-of-the-art AI methods in Stratego and achieves a yearly (2022) and all-time top-3 rank on the Gravon games platform.
arXiv Detail & Related papers (2022-06-30T15:53:19Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include the algorithm's regret guarantees, which depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Suphx: Mastering Mahjong with Deep Reinforcement Learning [114.68233321904623]
We design an AI for Mahjong, named Suphx, based on deep reinforcement learning with some newly introduced techniques.
Suphx has demonstrated stronger performance than most top human players in terms of stable rank.
This is the first time that a computer program outperforms most top human players in Mahjong.
arXiv Detail & Related papers (2020-03-30T16:18:16Z)
- Evolutionary Approach to Collectible Card Game Arena Deckbuilding using Active Genes [1.027974860479791]
In the arena game mode, before each match, a player has to construct a deck by choosing cards one by one from previously unknown options.
We propose a variant of the evolutionary algorithm that uses a concept of an active gene to reduce the range of the operators only to generation-specific subsequences of the genotype.
arXiv Detail & Related papers (2020-01-05T22:46:08Z)