Introducing Tales of Tribute AI Competition
- URL: http://arxiv.org/abs/2305.08234v4
- Date: Sun, 19 May 2024 12:14:30 GMT
- Title: Introducing Tales of Tribute AI Competition
- Authors: Jakub Kowalski, Radosław Miernik, Katarzyna Polak, Dominik Budzki, Damian Kowalik
- Abstract summary: This paper presents a new AI challenge, the Tales of Tribute AI Competition (TOTAIC).
It is based on a two-player deck-building card game released with the High Isle chapter of The Elder Scrolls Online.
This paper introduces the competition framework, describes the rules of the game, and presents the results of a tournament between sample AI agents.
- Score: 0.7639610349097472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a new AI challenge, the Tales of Tribute AI Competition (TOTAIC), based on a two-player deck-building card game released with the High Isle chapter of The Elder Scrolls Online. Currently, no other AI competition covers the Collectible Card Game (CCG) genre, and there has never been one targeting a deck-building game. Thus, apart from the usual CCG-related obstacles, such as randomness, hidden information, and a large branching factor, a successful approach additionally requires long-term planning and versatility. The game can be tackled with multiple approaches, including classic adversarial search, single-player planning, and Neural Network-based algorithms. This paper introduces the competition framework, describes the rules of the game, and presents the results of a tournament between sample AI agents.
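As a concrete starting point for the kind of approaches the abstract mentions, below is a minimal flat Monte Carlo sketch over a toy stand-in for a two-player deck-building game. It is purely illustrative: ToyGame, legal_moves, apply, and winner are hypothetical names, not the TOTAIC framework API.

```python
# A minimal sketch, assuming a toy game where players race to 40 prestige.
# None of these names come from the TOTAIC framework; they are stand-ins.
import random

class ToyGame:
    """Toy stand-in: players race to 40 prestige by buying point cards."""
    def __init__(self, prestige=(0, 0), to_move=0):
        self.prestige = list(prestige)
        self.to_move = to_move

    def legal_moves(self):
        return ["buy_cheap", "buy_expensive", "pass"]

    def apply(self, move):
        gain = {"buy_cheap": random.randint(1, 3),
                "buy_expensive": random.randint(0, 6),
                "pass": 0}[move]
        nxt = ToyGame(self.prestige, 1 - self.to_move)
        nxt.prestige[self.to_move] += gain   # the mover gains prestige
        return nxt

    def winner(self):
        return next((p for p in (0, 1) if self.prestige[p] >= 40), None)

def flat_monte_carlo(state, rollouts=100):
    """Choose the move whose random playouts win most often for the mover."""
    me = state.to_move

    def playout(s):
        while s.winner() is None:
            s = s.apply(random.choice(s.legal_moves()))
        return 1 if s.winner() == me else 0

    return max(state.legal_moves(),
               key=lambda m: sum(playout(state.apply(m)) for _ in range(rollouts)))

if __name__ == "__main__":
    print(flat_monte_carlo(ToyGame()))
```

A real agent would replace the random playouts with an informed rollout policy and handle hidden information, but the rollout-and-average skeleton carries over.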
Related papers
- Securing Equal Share: A Principled Approach for Learning Multiplayer Symmetric Games [21.168085154982712]
Equilibria in multiplayer games are neither unique nor non-exploitable.
This paper takes an initial step towards addressing these challenges by focusing on the natural objective of equal share.
We design a series of efficient algorithms, inspired by no-regret learning, that provably attain approximate equal share across various settings (a toy no-regret sketch follows this entry).
arXiv Detail & Related papers (2024-06-06T15:59:17Z)
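To make the no-regret angle concrete, here is regret matching (a standard no-regret learner) in self-play on symmetric rock-paper-scissors. This is not the paper's algorithm, only the family it draws on; in this toy game the uniform average strategy it approaches is exactly the equal-share outcome.

```python
# A minimal regret-matching sketch in self-play; illustrative only.
import numpy as np

PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]], dtype=float)  # row player's payoff (RPS)

def regret_matching(payoff, iters=10000, seed=0):
    rng = np.random.default_rng(seed)
    n = payoff.shape[0]
    regrets = np.zeros(n)
    strategy_sum = np.zeros(n)
    for _ in range(iters):
        pos = np.maximum(regrets, 0.0)
        strat = pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)
        strategy_sum += strat
        a = rng.choice(n, p=strat)   # our action
        b = rng.choice(n, p=strat)   # identical copy of ourselves (symmetric)
        regrets += payoff[:, b] - payoff[a, b]  # counterfactual minus realized
    return strategy_sum / strategy_sum.sum()

print(regret_matching(PAYOFF))  # approaches uniform: each side's fair share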
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for GuanDan, an exceptionally complex and popular card game.
We first put forward an AI program named DanZero for this game.
To further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan (a toy policy-gradient sketch follows this entry).
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
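As a toy illustration of policy-based reinforcement learning (not DanZero+'s actual training pipeline, which targets the full game of GuanDan), the sketch below runs a REINFORCE-style update with a softmax policy on a three-armed bandit; the reward means are invented for the example.

```python
# A minimal REINFORCE sketch on a 3-armed bandit; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.8])   # hidden reward of each action
theta = np.zeros(3)                       # softmax policy parameters
alpha, baseline = 0.1, 0.0

for step in range(2000):
    probs = np.exp(theta - theta.max())   # numerically stable softmax
    probs /= probs.sum()
    a = rng.choice(3, p=probs)
    r = rng.normal(true_means[a], 0.1)
    baseline += 0.01 * (r - baseline)     # running-average baseline
    grad_log = -probs
    grad_log[a] += 1.0                    # gradient of log pi(a) wrt theta
    theta += alpha * (r - baseline) * grad_log

print(probs.round(3))  # mass should concentrate on the best arm (index 2)
```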
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning Optimization [57.615269148301515]
In a competitive game scenario, a set of agents has to learn decisions that maximize their own goals while minimizing their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, map the strategies of specific opponents, and disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competition-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Summarizing Strategy Card Game AI Competition [1.027974860479791]
This paper concludes five years of AI competitions based on Legends of Code and Magic (LOCM), a small Collectible Card Game (CCG).
LOCM has been used in a number of publications related to areas such as game tree search algorithms, neural networks, evaluation functions, and CCG deckbuilding.
Although the COG 2022 edition was announced to be the last one, the game remains available and can be played using an online leaderboard arena.
arXiv Detail & Related papers (2023-05-19T16:49:36Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing the turn-based strategy game Tribes.
We show how it can be parameterized so that a quality-diversity algorithm (MAP-Elites) achieves different play-styles while keeping a competitive level of play (a toy MAP-Elites sketch follows this entry).
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
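The quality-diversity idea can be shown in miniature: the sketch below runs a generic MAP-Elites loop that keeps the best solution found per behaviour cell. The genome, fitness, and behaviour descriptor are toy assumptions; the paper's version evolves Portfolio MCTS parameterizations instead.

```python
# A minimal MAP-Elites sketch with an invented fitness/behaviour; illustrative only.
import random

CELLS = 10                         # 1-D behaviour space split into 10 bins
archive = {}                       # cell index -> (fitness, genome)

def fitness(g):                    # toy objective: prefer vectors near (1, 1)
    return -((g[0] - 1) ** 2 + (g[1] - 1) ** 2)

def behaviour(g):                  # toy descriptor, e.g. "aggressiveness"
    return min(CELLS - 1, max(0, int(g[0] * CELLS / 2)))

for _ in range(5000):
    if archive and random.random() < 0.9:      # usually mutate a random elite
        _, parent = random.choice(list(archive.values()))
        g = [x + random.gauss(0, 0.1) for x in parent]
    else:                                      # otherwise sample a fresh genome
        g = [random.uniform(0, 2), random.uniform(0, 2)]
    cell = behaviour(g)
    if cell not in archive or fitness(g) > archive[cell][0]:
        archive[cell] = (fitness(g), g)        # elite replaced only if beaten

for cell in sorted(archive):                   # one diverse elite per cell
    print(cell, round(archive[cell][0], 3))
```

The output is a whole archive of behaviourally distinct solutions rather than a single optimum, which is exactly what "diverse play-styles at a competitive level" asks for.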
- Playing Against the Board: Rolling Horizon Evolutionary Algorithms Against Pandemic [3.223284371460913]
This paper contends that collaborative board games pose a different challenge to artificial intelligence, as the AI must balance short-term risk mitigation with long-term winning strategies.
This paper focuses on the exemplary collaborative board game Pandemic and presents a rolling horizon evolutionary algorithm for it (a toy RHEA sketch follows this entry).
arXiv Detail & Related papers (2021-03-28T09:22:10Z)
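For readers unfamiliar with rolling horizon evolution, here is a generic sketch on a deliberately trivial forward model: evolve a short action sequence, execute only its first action, then re-plan. Everything here (ACTIONS, TARGET, the evaluation) is a stand-in, not the Pandemic agent.

```python
# A minimal rolling horizon evolution sketch; illustrative only.
import random

ACTIONS = [-1, 0, 1]                      # toy moves on a 1-D state
HORIZON, POP, GENS, TARGET = 8, 20, 10, 5

def evaluate(state, plan):                # toy forward model + heuristic
    for a in plan:
        state += a
    return -abs(TARGET - state)           # closer to target = better

def rhea_step(state):
    pop = [[random.choice(ACTIONS) for _ in range(HORIZON)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=lambda p: evaluate(state, p), reverse=True)
        elite = pop[: POP // 2]           # keep the best half
        pop = elite + [[a if random.random() > 0.2 else random.choice(ACTIONS)
                        for a in random.choice(elite)]
                       for _ in range(POP - len(elite))]
    return max(pop, key=lambda p: evaluate(state, p))[0]  # first action only

state = 0
for turn in range(10):
    state += rhea_step(state)
print(state)  # should sit at or near TARGET
```

The re-planning every turn is what lets the method trade short-term safety against the long-horizon plan, the balance the Pandemic paper highlights.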
- Learning to Play Imperfect-Information Games by Imitating an Oracle Planner [77.67437357688316]
We consider learning to play multiplayer imperfect-information games with simultaneous moves and large state-action spaces.
Our approach is based on model-based planning, with the planner's decisions imitated by a learned policy (a toy imitation sketch follows this entry).
We show that the planner is able to discover efficient playing strategies in the games of Clash Royale and Pommerman.
arXiv Detail & Related papers (2020-12-22T17:29:57Z)
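The supervised core of imitating a planner can be sketched in a few lines: collect (state, planner-action) pairs and fit a policy by cross-entropy. The oracle below is a trivial threshold rule standing in for the paper's much stronger search-based planner; all names are hypothetical.

```python
# A minimal imitation-learning sketch with an invented oracle; illustrative only.
import numpy as np

rng = np.random.default_rng(2)

def oracle(state):                          # stand-in "planner": threshold rule
    return int(state[0] + state[1] > 1.0)

X = rng.uniform(0, 1, size=(500, 2))        # toy "game states"
y = np.array([oracle(s) for s in X])        # actions the planner chose
Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias feature

W = np.zeros((3, 2))                        # linear softmax policy, 2 actions
for _ in range(2000):                       # gradient descent on cross-entropy
    logits = Xb @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    W -= 0.5 * Xb.T @ (probs - np.eye(2)[y]) / len(Xb)

agreement = (probs.argmax(axis=1) == y).mean()
print(f"imitator matches the oracle on {agreement:.0%} of training states")
```

The payoff of the distillation is speed: the learned policy answers in one forward pass where the planner would need a full search.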
- TotalBotWar: A New Pseudo Real-time Multi-action Game Challenge and Competition for AI [62.997667081978825]
TotalBotWar is a new pseudo real-time multi-action challenge for game AI.
The game is based on the popular Total War game series, in which players manage an army to defeat the opponent's.
arXiv Detail & Related papers (2020-09-18T09:13:56Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Enhanced Rolling Horizon Evolution Algorithm with Opponent Model Learning: Results for the Fighting Game AI Competition [9.75720700239984]
We propose a novel algorithm that combines Rolling Horizon Evolution Algorithm (RHEA) with opponent model learning.
Our proposed bot, which uses a policy-gradient-based opponent model, is the only one among the top five bots in the 2019 competition that does not rely on Monte Carlo Tree Search (MCTS).
arXiv Detail & Related papers (2020-03-31T04:44:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.