On the Power of Refined Skat Selection
- URL: http://arxiv.org/abs/2104.02997v1
- Date: Wed, 7 Apr 2021 08:54:58 GMT
- Title: On the Power of Refined Skat Selection
- Authors: Stefan Edelkamp
- Abstract summary: Skat is a fascinating card game, showcasing many of the intrinsic challenges for modern AI systems.
We propose hard expert rules and a scoring function based on refined skat evaluation features.
Experiments emphasize the impact of the refined skat putting algorithm on the playing performance of the bots.
- Score: 1.3706331473063877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Skat is a fascinating combinatorial card game, showcasing many of the
intrinsic challenges for modern AI systems, such as cooperative and adversarial
behaviors (among the players), randomness (in the deal), and partial knowledge
(due to hidden cards). Given the large number of tricks and high degree of
uncertainty, reinforcement learning is less effective than in classical
board games like Chess and Go. As in Bridge, Skat has a
bidding stage and a trick-taking stage. Prior to trick-taking, and as part of the
bidding process, one phase of the game is to select two skat cards, whose
quality may drastically influence subsequent playing performance. This paper
looks into different skat selection strategies. Besides predicting the
probability of winning and other hand-strength functions, we propose hard
expert rules and a scoring function based on refined skat evaluation features.
Experiments emphasize the impact of the refined skat putting algorithm on the
playing performance of the bots, especially for AI bidding and AI game
selection.
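The selection phase the abstract describes can be framed as a search over all two-card discards from the twelve cards held after picking up the skat. The sketch below illustrates that framing; the `hand_score` function is a hypothetical stand-in (counting jacks and trump cards), not the refined evaluation features from the paper.

```python
from itertools import combinations

def hand_score(hand, trump_suit):
    """Hypothetical hand-strength score for a 10-card hand after the discard.
    The paper's refined skat evaluation features are not reproduced here;
    this stand-in rewards jacks and trump cards in a suit game."""
    jacks = sum(1 for rank, suit in hand if rank == "J")
    # In a suit game, all four jacks count as trumps alongside the trump suit.
    trumps = sum(1 for rank, suit in hand if rank == "J" or suit == trump_suit)
    return 2 * jacks + trumps

def best_skat_discard(cards, trump_suit):
    """Enumerate all C(12, 2) = 66 two-card discards from the 12 cards held
    after picking up the skat, and return the discard whose remaining
    10-card hand maximizes the scoring function."""
    assert len(cards) == 12
    return max(
        combinations(cards, 2),
        key=lambda discard: hand_score(
            [c for c in cards if c not in discard], trump_suit))
```

With a stronger, feature-based scoring function (or the hard expert rules the paper proposes), the same exhaustive enumeration applies unchanged, since the 66-candidate search space is trivially small.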
Related papers
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than either method alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
- Mastering Strategy Card Game (Legends of Code and Magic) via End-to-End Policy and Optimistic Smooth Fictitious Play [11.480308614644041]
We study a two-stage strategy card game Legends of Code and Magic.
We propose an end-to-end policy to address the difficulties that arise in multi-stage games.
Our approach won two championships at the COG 2022 competition.
arXiv Detail & Related papers (2023-03-07T17:55:28Z)
- Evolving Evaluation Functions for Collectible Card Game AI [1.370633147306388]
We present a study of two important aspects of evolving feature-based game evaluation functions.
The choice of genome representation and the choice of opponent used to test the model were studied.
We conducted our experiments in the programming game Legends of Code and Magic, used in the Strategy Card Game AI Competition.
arXiv Detail & Related papers (2021-05-03T18:39:06Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Knowledge-Based Paranoia Search in Trick-Taking [1.3706331473063877]
This paper proposes knowledge-based paranoia search (KBPS) to find forced wins during trick-taking in the card game Skat.
It combines efficient partial information game-tree search with knowledge representation and reasoning.
arXiv Detail & Related papers (2021-04-07T09:12:45Z)
- TotalBotWar: A New Pseudo Real-time Multi-action Game Challenge and Competition for AI [62.997667081978825]
TotalBotWar is a new pseudo real-time multi-action challenge for game AI.
The game is based on the popular TotalWar game series, in which players manage an army to defeat the opponent's army.
arXiv Detail & Related papers (2020-09-18T09:13:56Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.