AlphaDou: High-Performance End-to-End Doudizhu AI Integrating Bidding
- URL: http://arxiv.org/abs/2407.10279v2
- Date: Fri, 13 Sep 2024 15:17:06 GMT
- Title: AlphaDou: High-Performance End-to-End Doudizhu AI Integrating Bidding
- Authors: Chang Lei, Huan Lei
- Abstract summary: This paper modifies the Deep Monte Carlo algorithm framework by using reinforcement learning to obtain a neural network that simultaneously estimates win rates and expectations.
The modified algorithm enables the AI to perform the full range of tasks in the Doudizhu game, including bidding and cardplay.
- Score: 6.177038245239759
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence for card games has long been a popular topic in AI research. In recent years, complex card games like Mahjong and Texas Hold'em have been solved, with corresponding AI programs reaching the level of human experts. However, the game of Doudizhu presents significant challenges due to its vast state/action space and unique characteristics involving reasoning about competition and cooperation, making the game extremely difficult to solve. The RL model DouZero, trained using the Deep Monte Carlo algorithm framework, has shown excellent performance in Doudizhu. However, there are differences between its simplified game environment and the actual Doudizhu environment, and its performance remains a considerable distance from that of human experts. This paper modifies the Deep Monte Carlo algorithm framework by using reinforcement learning to obtain a neural network that simultaneously estimates win rates and expectations. The action space is pruned using expectations, and strategies are generated based on win rates. The modified algorithm enables the AI to perform the full range of tasks in the Doudizhu game, including bidding and cardplay. The model was trained in an actual Doudizhu environment and achieved state-of-the-art performance among publicly available models. We hope that this new framework will provide valuable insights for AI development in other bidding-based games.
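To make the dual-estimate idea concrete, below is a minimal, hypothetical sketch of a value network with separate win-rate and expectation heads, together with expectation-based action pruning, written against PyTorch. The class and function names (AlphaDouNet, choose_action), the encoding sizes, and the top-k keep_ratio pruning rule are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch only: names, dimensions, and the pruning rule are
# assumptions for illustration, not the paper's actual implementation.
import torch
import torch.nn as nn

STATE_DIM = 512    # assumed size of the encoded game state
ACTION_DIM = 64    # assumed size of an encoded candidate action

class AlphaDouNet(nn.Module):
    """Shared trunk with two heads: a win-rate estimate and an
    expected-score estimate for each state-action pair."""
    def __init__(self, hidden: int = 512):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.win_rate_head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())
        self.expectation_head = nn.Linear(hidden, 1)  # unbounded expected score

    def forward(self, state, actions):
        # state: (1, STATE_DIM); actions: (num_legal, ACTION_DIM)
        x = torch.cat([state.expand(actions.size(0), -1), actions], dim=1)
        h = self.trunk(x)
        return (self.win_rate_head(h).squeeze(-1),
                self.expectation_head(h).squeeze(-1))

def choose_action(net, state, legal_actions, keep_ratio: float = 0.5):
    """Prune low-expectation actions, then pick the highest win rate
    among the survivors (one plausible reading of the abstract)."""
    win_rates, expectations = net(state, legal_actions)
    k = max(1, int(len(legal_actions) * keep_ratio))
    keep = torch.topk(expectations, k).indices      # prune by expectation
    best = keep[torch.argmax(win_rates[keep])]      # select by win rate
    return best.item()
```

In this reading, the expectation head filters out low-value candidate moves before the win-rate head drives the final choice; the paper's exact pruning criterion and network architecture may differ.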
Related papers
- Instruction-Driven Game Engine: A Poker Case Study [53.689520884467065]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game descriptions and generate game-play processes.
We train the IDGE in a curriculum manner that progressively increases its exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, which not only supports a wide range of poker variants but also allows for highly individualized new poker games through natural language inputs.
arXiv Detail & Related papers (2024-10-17T11:16:27Z) - Instruction-Driven Game Engines on Large Language Models [59.280666591243154]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game rules.
We train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, a universally cherished card game.
arXiv Detail & Related papers (2024-03-30T08:02:16Z) - DouRN: Improving DouZero by Residual Neural Networks [1.6013543712340956]
Doudizhu is a card game that combines elements of cooperation and confrontation, resulting in a large state and action space.
In 2021, a Doudizhu program called DouZero surpassed previous models without prior knowledge by utilizing traditional Monte Carlo methods and multilayer perceptrons.
Our findings demonstrate that this model significantly improves the winning rate within the same training time.
arXiv Detail & Related papers (2024-03-21T03:25:49Z) - Mastering the Game of Guandan with Deep Reinforcement Learning and Behavior Regulating [16.718186690675164]
We propose a framework named GuanZero for AI agents to master the game of Guandan.
The main contribution of this paper is a method for regulating agents' behavior through a carefully designed neural network encoding scheme.
arXiv Detail & Related papers (2024-02-21T07:26:06Z) - DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z) - DanZero: Mastering GuanDan Game with Reinforcement Learning [121.93690719186412]
Card game AI has long been a hot topic in artificial intelligence research.
In this paper, we are devoted to developing an AI program for a more complex card game, GuanDan.
We propose DanZero, the first AI program for GuanDan, developed using reinforcement learning techniques.
arXiv Detail & Related papers (2022-10-31T06:29:08Z) - DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning [65.00325925262948]
We propose a conceptually simple yet effective DouDizhu AI system, namely DouZero.
DouZero enhances traditional Monte-Carlo methods with deep neural networks, action encoding, and parallel actors.
It was ranked first on the Botzone leaderboard among 344 AI agents.
arXiv Detail & Related papers (2021-06-11T02:45:51Z) - Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so that a quality-diversity algorithm (MAP-Elites) can be used to achieve different play-styles while maintaining a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)