Towards Playing Full MOBA Games with Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2011.12692v4
- Date: Thu, 31 Dec 2020 13:25:17 GMT
- Title: Towards Playing Full MOBA Games with Deep Reinforcement Learning
- Authors: Deheng Ye, Guibin Chen, Wen Zhang, Sheng Chen, Bo Yuan, Bo Liu, Jia
Chen, Zhao Liu, Fuhao Qiu, Hongsheng Yu, Yinyuting Yin, Bei Shi, Liang Wang,
Tengfei Shi, Qiang Fu, Wei Yang, Lanxiao Huang, Wei Liu
- Abstract summary: MOBA games, e.g., Honor of Kings, League of Legends, and Dota 2, pose grand challenges to AI systems.
We propose a MOBA AI learning paradigm that methodologically enables playing full MOBA games with deep reinforcement learning.
- Score: 34.153341961273554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: MOBA games, e.g., Honor of Kings, League of Legends, and Dota 2, pose grand
challenges to AI systems, such as multi-agent coordination, enormous
state-action spaces, and complex action control. Developing AI for playing
MOBA games has accordingly attracted much attention. However, existing work
falls short in handling the raw game complexity caused by the explosion of
agent combinations, i.e., lineups, when expanding the hero pool; for this
reason, OpenAI's Dota AI limits play to a pool of only 17 heroes. As a
result, full MOBA games without
restrictions are far from being mastered by any existing AI system. In this
paper, we propose a MOBA AI learning paradigm that methodologically enables
playing full MOBA games with deep reinforcement learning. Specifically, we
develop a combination of novel and existing learning techniques, including
curriculum self-play learning, policy distillation, off-policy adaptation,
multi-head value estimation, and Monte Carlo tree search, to train and play
with a large pool of heroes while addressing the scalability issue. Tested on
Honor of Kings, a popular MOBA game, we show how to build
superhuman AI agents that can defeat top esports players. The superiority of
our AI is demonstrated by the first large-scale performance test of MOBA AI
agents in the literature.
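The abstract names policy distillation among the techniques used to scale training to a large hero pool. Below is a minimal, hypothetical sketch of distilling several teacher policies into one student with a KL-divergence loss; the network shapes, dimensions, and data handling are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of policy distillation: several hero-specific
# "teacher" policies are compressed into a single "student" policy by
# minimizing the KL divergence between their action distributions.
# All dimensions and shapes are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS = 128, 32  # assumed sizes


def make_policy():
    return nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(),
                         nn.Linear(256, N_ACTIONS))


teachers = [make_policy() for _ in range(3)]  # e.g., per-hero-group teachers
student = make_policy()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(100):
    # Observations would come from self-play rollouts in practice;
    # random tensors stand in here to keep the sketch self-contained.
    obs = torch.randn(64, OBS_DIM)
    teacher = teachers[step % len(teachers)]
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(obs), dim=-1)
    student_log_probs = F.log_softmax(student(obs), dim=-1)
    # KL(teacher || student): push the student to match each teacher.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```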
Related papers
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than either approach alone.
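As a rough illustration of the scripted-plus-learned combination described above, the sketch below lets a learned policy choose high-level actions while a hand-written rule overrides it in a clearly defined situation; all names, actions, and the threshold are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of combining a scripted controller with a learned
# policy: the RL policy chooses strategic actions, but a hand-written rule
# takes over when its condition fires. All names and thresholds are
# illustrative, not from the paper.
import random

SCRIPTED_ACTION = "reinforce_weakest_tower"
LEARNED_ACTIONS = ["build_tower", "upgrade_tower", "save_gold"]


def scripted_override(state):
    # Simple hand-authored rule: react immediately to a breach threat.
    return state["enemies_near_base"] > 5


def learned_policy(state):
    # Stand-in for a trained RL policy; here it just samples an action.
    return random.choice(LEARNED_ACTIONS)


def hybrid_controller(state):
    if scripted_override(state):
        return SCRIPTED_ACTION
    return learned_policy(state)


if __name__ == "__main__":
    print(hybrid_controller({"enemies_near_base": 7}))  # scripted action fires
    print(hybrid_controller({"enemies_near_base": 1}))  # learned policy acts
```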
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
- Sequential Item Recommendation in the MOBA Game Dota 2 [64.8963467704218]
We explore the applicability of Sequential Item Recommendation (SIR) models in the context of purchase recommendations in Dota 2.
Our results show that models that consider the order of purchases are the most effective.
In contrast to other domains, we find RNN-based models to outperform the more recent Transformer-based architectures on Dota-350k.
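As a rough illustration of an RNN-based sequential item recommender of the kind compared above, the sketch below encodes a player's purchase history with a GRU and scores candidate next items; the item vocabulary, sizes, and data handling are placeholder assumptions, not details of the paper or of Dota-350k.

```python
# Hypothetical sketch of a GRU-based next-item recommender: embed the
# purchase history, encode it with a recurrent layer, and score all items
# for the next purchase. Sizes are placeholders, not from Dota-350k.
import torch
import torch.nn as nn

N_ITEMS, EMB_DIM, HID_DIM = 300, 64, 128  # assumed sizes


class NextItemGRU(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_ITEMS, EMB_DIM)
        self.gru = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
        self.out = nn.Linear(HID_DIM, N_ITEMS)

    def forward(self, item_seq):          # item_seq: (batch, seq_len)
        emb = self.embed(item_seq)        # (batch, seq_len, EMB_DIM)
        _, h = self.gru(emb)              # h: (1, batch, HID_DIM)
        return self.out(h.squeeze(0))     # (batch, N_ITEMS) scores


model = NextItemGRU()
history = torch.randint(0, N_ITEMS, (4, 10))  # 4 players, 10 purchases each
scores = model(history)
next_item = scores.argmax(dim=-1)             # predicted next purchase
```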
arXiv Detail & Related papers (2022-01-17T14:19:17Z)
- Learning Diverse Policies in MOBA Games via Macro-Goals [16.91587630049241]
We propose a novel Macro-Goals Guided framework, called MGG, to learn diverse policies in MOBA games.
MGG abstracts strategies as macro-goals from human demonstrations and trains a Meta-Controller to predict these macro-goals.
We show that MGG can execute diverse policies in different matches and lineups, and also outperform the state-of-the-art methods over 102 heroes.
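A minimal, hypothetical sketch of the macro-goal idea described above: a meta-controller predicts a discrete macro-goal from the game state, and the action policy conditions on both the state and the predicted goal. The shapes, the number of goals, and the way the goal is fed to the policy are illustrative assumptions, not MGG's actual architecture.

```python
# Hypothetical sketch of macro-goal guidance: a meta-controller picks a
# high-level goal (e.g., push a lane, take an objective) and the low-level
# policy conditions its action on that goal. All sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_GOALS, N_ACTIONS = 128, 8, 32  # assumed sizes


class MetaController(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, N_GOALS))

    def forward(self, state):
        return self.net(state)  # logits over macro-goals


class GoalConditionedPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + N_GOALS, 256), nn.ReLU(),
                                 nn.Linear(256, N_ACTIONS))

    def forward(self, state, goal_onehot):
        return self.net(torch.cat([state, goal_onehot], dim=-1))


meta, policy = MetaController(), GoalConditionedPolicy()
state = torch.randn(1, STATE_DIM)
goal = F.one_hot(meta(state).argmax(dim=-1), N_GOALS).float()
action_logits = policy(state, goal)
```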
arXiv Detail & Related papers (2021-10-27T07:15:42Z)
- The Dota 2 Bot Competition [0.0]
This paper presents and describes in detail the Dota 2 Bot competition and the Dota 2 AI framework that supports it.
This challenge aims to bring together MOBAs and AI/CI game competitions, inviting participants to submit AI controllers for the successful MOBA Defense of the Ancients 2 (Dota 2) to play in 1v1 matches.
arXiv Detail & Related papers (2021-03-04T10:49:47Z)
- Reinforcement Learning Agents for Ubisoft's Roller Champions [0.26249027950824505]
We present our RL system for Ubisoft's Roller Champions, a 3v3 Competitive Multiplayer Sports Game played on an oval-shaped skating arena.
Our system is designed to keep up with agile, fast-paced development, taking 1 to 4 days to train a new model following gameplay changes.
We observe that the AIs develop sophisticated co-ordinated strategies, and can aid in balancing the game as an added bonus.
arXiv Detail & Related papers (2020-12-10T23:53:15Z)
- Supervised Learning Achieves Human-Level Performance in MOBA Games: A Case Study of Honor of Kings [37.534249771219926]
We present JueWu-SL, the first supervised-learning-based artificial intelligence (AI) program that achieves human-level performance in multiplayer online battle arena (MOBA) games.
We integrate the macro-strategy and the micromanagement of MOBA-game-playing into neural networks in a supervised and end-to-end manner.
Tested on Honor of Kings, the most popular MOBA at present, our AI performs competitively at the level of High King players in standard 5v5 games.
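As a rough illustration of coupling macro-strategy and micromanagement in a single supervised network, the sketch below shares an encoder between a macro head (a coarse map region to move toward) and a micro head (the immediate action), trained jointly with cross-entropy losses; the heads, label format, and sizes are assumptions for illustration, not JueWu-SL's design.

```python
# Hypothetical sketch of a shared encoder with two supervised heads:
# a macro head (coarse map region to move toward) and a micro head
# (immediate action). Labels would come from human game records; random
# tensors stand in here. Sizes and heads are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_REGIONS, N_ACTIONS = 256, 24, 64  # assumed sizes


class MacroMicroNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(OBS_DIM, 512), nn.ReLU())
        self.macro_head = nn.Linear(512, N_REGIONS)   # where to go
        self.micro_head = nn.Linear(512, N_ACTIONS)   # what to do now

    def forward(self, obs):
        h = self.encoder(obs)
        return self.macro_head(h), self.micro_head(h)


net = MacroMicroNet()
obs = torch.randn(32, OBS_DIM)
macro_labels = torch.randint(0, N_REGIONS, (32,))
micro_labels = torch.randint(0, N_ACTIONS, (32,))
macro_logits, micro_logits = net(obs)
loss = F.cross_entropy(macro_logits, macro_labels) + \
       F.cross_entropy(micro_logits, micro_labels)
loss.backward()  # joint end-to-end supervised update
```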
arXiv Detail & Related papers (2020-11-25T08:45:55Z)
- Suphx: Mastering Mahjong with Deep Reinforcement Learning [114.68233321904623]
We design an AI for Mahjong, named Suphx, based on deep reinforcement learning with some newly introduced techniques.
Suphx has demonstrated stronger performance than most top human players in terms of stable rank.
This is the first time that a computer program outperforms most top human players in Mahjong.
arXiv Detail & Related papers (2020-03-30T16:18:16Z)
- Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks [48.5733173329785]
We present Neural MMO, a massively multiagent game environment inspired by MMOs.
We discuss our progress on two more general challenges in multiagent systems engineering for AI research: distributed infrastructure and game IO.
arXiv Detail & Related papers (2020-01-31T18:50:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.