Mastering the Game of Guandan with Deep Reinforcement Learning and
Behavior Regulating
- URL: http://arxiv.org/abs/2402.13582v1
- Date: Wed, 21 Feb 2024 07:26:06 GMT
- Authors: Yifan Yanggong, Hao Pan, Lei Wang
- Abstract summary: We propose GuanZero, a framework for AI agents to master the game of Guandan.
The paper's main contribution is a carefully designed neural network encoding scheme that regulates agents' behavior.
- Score: 16.718186690675164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Games are a simplified model of reality and often serve as a favored platform
for Artificial Intelligence (AI) research. Much of the research is concerned
with game-playing agents and their decision making processes. The game of
Guandan (literally, "throwing eggs") is a challenging game where even
professional human players struggle to make the right decision at times. In
this paper we propose a framework named GuanZero for AI agents to master this
game using Monte-Carlo methods and deep neural networks. The main contribution
of this paper is a carefully designed neural network encoding scheme that
regulates agents' behavior. We then demonstrate the effectiveness of the
proposed framework by comparing it with state-of-the-art approaches.
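The abstract does not specify the encoding scheme itself. As a rough illustration of how card-game actions are commonly presented to neural networks in this line of work (DouZero-style count matrices), the sketch below encodes a card combination as a flattened binary matrix and appends a one-hot behavior flag. The matrix layout, the `max_count` of 8 (two decks), and the behavior-flag vector are illustrative assumptions, not the actual GuanZero design.

```python
def encode_action(cards, num_ranks=15, max_count=8):
    """Encode a card combination as a flattened max_count x num_ranks
    binary matrix (row-major list of floats). `cards` is a list of rank
    indices in 0..num_ranks-1; Guandan is played with two decks, so a
    rank can occur up to 8 times."""
    mat = [[0.0] * num_ranks for _ in range(max_count)]
    for rank in set(cards):
        # Thermometer-style encoding: set one bit per copy of the rank.
        for row in range(cards.count(rank)):
            mat[row][rank] = 1.0
    return [v for row in mat for v in row]

def encode_with_behavior(cards, behavior_id, num_behaviors=4):
    """Append a one-hot flag marking the behavior category of the action,
    so the network input itself carries the behavior signal that the
    training process can regulate."""
    flag = [0.0] * num_behaviors
    flag[behavior_id] = 1.0
    return encode_action(cards) + flag

# Example: a pair of rank-6 cards, tagged with behavior category 2.
vec = encode_with_behavior([6, 6], behavior_id=2)
```

The point of the extra flag is that behavior regulation can then be expressed purely through the input encoding, without changing the network architecture.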
Related papers
- Optimizing Hearthstone Agents using an Evolutionary Algorithm [0.0]
This paper proposes the use of evolutionary algorithms (EAs) to develop agents who play a card game, Hearthstone.
Agents feature self-learning by means of a competitive coevolutionary training approach.
One of the agents developed through the proposed approach was runner-up (best 6%) in an international Hearthstone Artificial Intelligence (AI) competition.
arXiv Detail & Related papers (2024-10-25T16:49:11Z)
- You Have Thirteen Hours in Which to Solve the Labyrinth: Enhancing AI Game Masters with Function Calling [35.721053667746716]
This paper presents a novel approach to enhancing AI game masters by leveraging function calling in the context of the table-top role-playing game "Jim Henson's Labyrinth: The Adventure Game".
Our methodology involves integrating game-specific controls through functions, which we show improves the narrative quality and state update consistency of the AI game master.
arXiv Detail & Related papers (2024-09-11T02:03:51Z)
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
To further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
- DanZero: Mastering GuanDan Game with Reinforcement Learning [121.93690719186412]
Card game AI has always been a hot topic in the research of artificial intelligence.
In this paper, we are devoted to developing an AI program for a more complex card game, GuanDan.
We propose DanZero, the first AI program for GuanDan, using reinforcement learning techniques.
arXiv Detail & Related papers (2022-10-31T06:29:08Z)
- Towards Controllable Agent in MOBA Games with Generative Modeling [0.45687771576879593]
We propose novel methods to develop an action-controllable agent that behaves like a human.
We devise a deep latent alignment neural network model for training the agent, and a corresponding sampling algorithm for controlling the agent's actions.
Both simulated and online experiments in the game Honor of Kings demonstrate the efficacy of the proposed methods.
arXiv Detail & Related papers (2021-12-15T13:09:22Z)
- DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning [65.00325925262948]
We propose a conceptually simple yet effective DouDizhu AI system, namely DouZero.
DouZero enhances traditional Monte-Carlo methods with deep neural networks, action encoding, and parallel actors.
It ranked first on the Botzone leaderboard among 344 AI agents.
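The "traditional Monte-Carlo methods" that DouZero (and, per the abstract above, GuanZero) builds on amount to estimating action values by averaging sampled episode returns rather than bootstrapping; the deep network then replaces the tabular estimator. Below is a minimal tabular sketch of that estimator, for orientation only; the function name and episode format are my own, not from either paper.

```python
from collections import defaultdict

def mc_q_update(episodes, gamma=1.0):
    """Estimate Q(s, a) by averaging discounted returns over sampled
    episodes. Each episode is a list of (state, action, reward) tuples.
    This is the classic every-visit Monte-Carlo estimator; DouZero-style
    systems approximate the same quantity with a neural network instead
    of a table."""
    returns = defaultdict(list)
    for episode in episodes:
        g = 0.0
        # Walk backwards so each step's return accumulates future rewards.
        for state, action, reward in reversed(episode):
            g = reward + gamma * g
            returns[(state, action)].append(g)
    return {sa: sum(gs) / len(gs) for sa, gs in returns.items()}

# Example: two self-play episodes, one won (terminal reward 1) and one lost.
episodes = [
    [("s0", "play_pair", 0.0), ("s1", "pass", 1.0)],
    [("s0", "play_pair", 0.0), ("s1", "pass", 0.0)],
]
q_values = mc_q_update(episodes)
```

With undiscounted returns, each state-action pair's estimate is simply the win rate observed after taking that action, which is what the parallel actors in such systems generate training targets for.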
arXiv Detail & Related papers (2021-06-11T02:45:51Z)
- Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new machine learning model by imitation based on the linguistic description of complex phenomena.
The method can be a good alternative to design and implement the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games [137.86426963572214]
We introduce DeepCrawl, a fully-playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in videogames.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
- Physically Embedded Planning Problems: New Challenges for Reinforcement Learning [26.74526714574981]
Recent work in deep reinforcement learning (RL) has produced algorithms capable of mastering challenging games such as Go, chess, or shogi.
We introduce a set of physically embedded planning problems and make them publicly available.
We find that existing RL algorithms struggle to master even the simplest of their physically embedded counterparts.
arXiv Detail & Related papers (2020-09-11T16:56:33Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.