A New Challenge: Approaching Tetris Link with AI
- URL: http://arxiv.org/abs/2004.00377v1
- Date: Wed, 1 Apr 2020 12:25:36 GMT
- Title: A New Challenge: Approaching Tetris Link with AI
- Authors: Matthias Müller-Brockhausen, Mike Preuss, Aske Plaat
- Abstract summary: This paper focuses on a new game, Tetris Link, a board game that is still lacking any scientific analysis.
We explore heuristic planning and two other approaches: Reinforcement Learning and Monte Carlo tree search.
We report on their relative performance in a tournament.
- Score: 1.2031796234206134
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Decades of research have been invested in making computer programs for
playing games such as Chess and Go. This paper focuses on a new game, Tetris
Link, a board game that is still lacking any scientific analysis. Tetris Link
has a large branching factor, hampering a traditional heuristic planning
approach. We explore heuristic planning and two other approaches: Reinforcement
Learning and Monte Carlo tree search. We document our approaches and report on their
relative performance in a tournament. Curiously, the heuristic approach is
stronger than the planning/learning approaches. However, experienced human
players easily win the majority of the matches against the heuristic planning
AIs. We, therefore, surmise that Tetris Link is more difficult than expected.
We offer our findings to the community as a challenge to improve upon.
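The abstract names Monte Carlo tree search as one of the approaches explored. As a rough illustration only (not the paper's implementation, and not Tetris Link itself), here is a minimal UCT sketch on a toy take-away game; the `NimState` class and all names are hypothetical stand-ins for a real game state:

```python
import math
import random

class NimState:
    """Toy two-player game: take 1-3 sticks, the player taking the last stick wins.
    A stand-in for a real Tetris Link state, which would track the board and pieces."""
    def __init__(self, sticks=10, player=1):
        self.sticks = sticks
        self.player = player  # player to move: 1 or -1

    def moves(self):
        return [n for n in (1, 2, 3) if n <= self.sticks]

    def play(self, n):
        return NimState(self.sticks - n, -self.player)

    def terminal(self):
        return self.sticks == 0

    def winner(self):
        # The player who just moved took the last stick and wins.
        return -self.player

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.untried = {}, state.moves()
        self.visits, self.wins = 0, 0.0

def uct_search(root_state, iterations=2000, c=1.4):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = max(node.children.values(),
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one untried child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.state.play(m), parent=node)
            node.children[m] = child
            node = child
        # 3. Simulation: random playout to a terminal state.
        state = node.state
        while not state.terminal():
            state = state.play(random.choice(state.moves()))
        # 4. Backpropagation: credit wins from the parent's perspective,
        #    i.e. for the player who chose the move leading to this node.
        winner = state.winner()
        while node:
            node.visits += 1
            if node.parent and winner == node.parent.state.player:
                node.wins += 1
            node = node.parent
    # Return the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

For this toy game, positions with a multiple of four sticks are losing, so from 10 sticks the winning move is to take 2; with a few thousand iterations the search settles on it. In Tetris Link the large branching factor the abstract mentions would make the expansion and playout steps far more expensive.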
Related papers
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than using either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
- JiangJun: Mastering Xiangqi by Tackling Non-Transitivity in Two-Player Zero-Sum Games
This paper focuses on Xiangqi, a traditional Chinese board game comparable in game-tree complexity to chess and shogi.
We introduce the JiangJun algorithm, an innovative combination of Monte-Carlo Tree Search (MCTS) and Policy Space Response Oracles (PSRO) designed to approximate a Nash equilibrium.
We evaluate the algorithm empirically using a WeChat mini program and achieve a Master level with a 99.41% win rate against human players.
arXiv Detail & Related papers (2023-08-09T05:48:58Z)
- Know your Enemy: Investigating Monte-Carlo Tree Search with Opponent Models in Pommerman
In combination with Reinforcement Learning, Monte-Carlo Tree Search has been shown to outperform human grandmasters in games such as Chess, Shogi and Go.
We investigate techniques that transform general-sum multiplayer games into single-player and two-player games.
arXiv Detail & Related papers (2023-05-22T16:39:20Z)
- Introducing Tales of Tribute AI Competition
This paper presents a new AI challenge, the Tales of Tribute AI Competition (TOTAIC).
It is based on a two-player deck-building card game released with the High Isle chapter of The Elder Scrolls Online.
This paper introduces the competition framework, describes the rules of the game, and presents the results of a tournament between sample AI agents.
arXiv Detail & Related papers (2023-05-14T19:55:56Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Learning to Play Imperfect-Information Games by Imitating an Oracle Planner
We consider learning to play multiplayer imperfect-information games with simultaneous moves and large state-action spaces.
Our approach is based on model-based planning.
We show that the planner is able to discover efficient playing strategies in the games of Clash Royale and Pommerman.
arXiv Detail & Related papers (2020-12-22T17:29:57Z)
- Learning to Play Sequential Games versus Unknown Opponents
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Enhanced Rolling Horizon Evolution Algorithm with Opponent Model Learning: Results for the Fighting Game AI Competition
We propose a novel algorithm that combines Rolling Horizon Evolution Algorithm (RHEA) with opponent model learning.
Our proposed bot with the policy-gradient-based opponent model is the only one among the top five bots in the 2019 competition that does not use Monte-Carlo Tree Search (MCTS).
arXiv Detail & Related papers (2020-03-31T04:44:33Z)
- Suphx: Mastering Mahjong with Deep Reinforcement Learning
We design an AI for Mahjong, named Suphx, based on deep reinforcement learning with some newly introduced techniques.
Suphx has demonstrated stronger performance than most top human players in terms of stable rank.
This is the first time that a computer program outperforms most top human players in Mahjong.
arXiv Detail & Related papers (2020-03-30T16:18:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.