Learning Diverse Policies in MOBA Games via Macro-Goals
- URL: http://arxiv.org/abs/2110.14221v1
- Date: Wed, 27 Oct 2021 07:15:42 GMT
- Title: Learning Diverse Policies in MOBA Games via Macro-Goals
- Authors: Yiming Gao, Bei Shi, Xueying Du, Liang Wang, Guangwei Chen, Zhenjie
Lian, Fuhao Qiu, Guoan Han, Weixuan Wang, Deheng Ye, Qiang Fu, Wei Yang,
Lanxiao Huang
- Abstract summary: We propose a novel Macro-Goals Guided framework, called MGG, to learn diverse policies in MOBA games.
MGG abstracts strategies as macro-goals from human demonstrations and trains a Meta-Controller to predict these macro-goals.
We show that MGG can execute diverse policies in different matches and lineups, and also outperform the state-of-the-art methods over 102 heroes.
- Score: 16.91587630049241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, many researchers have made successful progress in building AI
systems for MOBA game playing with deep reinforcement learning, such as for Dota
2 and Honor of Kings. Even though these AI systems have achieved or even
exceeded human-level performance, they still suffer from a lack of policy
diversity. In this paper, we propose a novel Macro-Goals Guided framework,
called MGG, to learn diverse policies in MOBA games. MGG abstracts strategies
as macro-goals from human demonstrations and trains a Meta-Controller to
predict these macro-goals. To enhance policy diversity, MGG samples macro-goals
from the Meta-Controller prediction and guides the training process towards
these goals. Experimental results on the typical MOBA game Honor of Kings
demonstrate that MGG can execute diverse policies in different matches and
lineups, and also outperform the state-of-the-art methods over 102 heroes.
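The macro-goal guidance described in the abstract can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the names (`MetaController`, `run_episode`), the map-region goals, and the binary intrinsic reward are illustrative, not the paper's implementation, which trains a neural Meta-Controller on human demonstrations.

```python
import random

class MetaController:
    """Predicts a distribution over macro-goals (here: toy map regions).

    In MGG this would be a neural network trained on human demonstrations;
    here it is a fixed categorical distribution for the sketch.
    """
    def __init__(self, goal_probs):
        self.goal_probs = goal_probs  # {goal_name: probability}

    def sample_goal(self, rng):
        goals, probs = zip(*self.goal_probs.items())
        return rng.choices(goals, weights=probs, k=1)[0]

def intrinsic_reward(agent_region, macro_goal):
    """Reward shaping that guides the policy toward the sampled macro-goal."""
    return 1.0 if agent_region == macro_goal else 0.0

def run_episode(meta_controller, trajectory, rng):
    """Sample a macro-goal, then score a (toy) episode trajectory against it."""
    goal = meta_controller.sample_goal(rng)
    total = sum(intrinsic_reward(region, goal) for region in trajectory)
    return goal, total

rng = random.Random(0)
mc = MetaController({"top_lane": 0.2, "mid_lane": 0.5, "bot_lane": 0.3})
goal, reward = run_episode(mc, ["mid_lane", "mid_lane", "top_lane"], rng)
```

Sampling goals from the Meta-Controller's predicted distribution, rather than always taking the most likely goal, is what would produce different strategies across matches and lineups.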
Related papers
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
- Model-Free Opponent Shaping [1.433758865948252]
We propose Model-Free Opponent Shaping (M-FOS) for general-sum games.
M-FOS learns in a meta-game in which each meta-step is an episode of the underlying ("inner") game.
It exploits naive learners and other, more sophisticated algorithms from the literature.
arXiv Detail & Related papers (2022-05-03T12:20:14Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
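The quality-diversity loop this entry refers to can be sketched as a minimal MAP-Elites archive; the niche and fitness functions below are illustrative assumptions (a discretised play-style descriptor and a competitive score), not the paper's actual setup.

```python
import random

def map_elites(evaluate, mutate, random_solution, n_iters, rng):
    """Minimal MAP-Elites loop: keep the best solution per behaviour niche.

    evaluate(x) -> (niche, fitness); niche is a hashable behaviour
    descriptor (e.g., a discretised play-style), fitness is the score.
    """
    archive = {}  # niche -> (fitness, solution)
    for _ in range(n_iters):
        if archive and rng.random() < 0.5:
            # Mutate an existing elite half the time...
            parent = rng.choice(list(archive.values()))[1]
            x = mutate(parent, rng)
        else:
            # ...otherwise explore with a fresh random solution.
            x = random_solution(rng)
        niche, fit = evaluate(x)
        # Replace the niche's elite only if the new solution scores higher.
        if niche not in archive or fit > archive[niche][0]:
            archive[niche] = (fit, x)
    return archive
```

The archive is the point of the method: instead of one best policy, it returns one competitive solution per play-style niche, which is how diverse yet strong play-styles are obtained.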
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Combining Off and On-Policy Training in Model-Based Reinforcement Learning [77.34726150561087]
We propose a way to obtain off-policy targets using data from simulated games in MuZero.
Our results show that these targets speed up the training process and lead to faster convergence and higher rewards.
arXiv Detail & Related papers (2021-02-24T10:47:26Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games [137.86426963572214]
We introduce DeepCrawl, a fully playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in videogames.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
- Towards Playing Full MOBA Games with Deep Reinforcement Learning [34.153341961273554]
MOBA games, e.g., Honor of Kings, League of Legends, and Dota 2, pose grand challenges to AI systems.
We propose a MOBA AI learning paradigm that methodologically enables playing full MOBA games with deep reinforcement learning.
arXiv Detail & Related papers (2020-11-25T12:52:33Z)
- Supervised Learning Achieves Human-Level Performance in MOBA Games: A Case Study of Honor of Kings [37.534249771219926]
We present JueWu-SL, the first supervised-learning-based artificial intelligence (AI) program that achieves human-level performance in multiplayer online battle arena (MOBA) games.
We integrate the macro-strategy and the micromanagement of MOBA-game-playing into neural networks in a supervised and end-to-end manner.
Tested on Honor of Kings, the most popular MOBA at present, our AI performs competitively at the level of High King players in standard 5v5 games.
arXiv Detail & Related papers (2020-11-25T08:45:55Z)
- Enhanced Rolling Horizon Evolution Algorithm with Opponent Model Learning: Results for the Fighting Game AI Competition [9.75720700239984]
We propose a novel algorithm that combines Rolling Horizon Evolution Algorithm (RHEA) with opponent model learning.
Among the top five bots in the 2019 competition, our proposed bot with the policy-gradient-based opponent model is the only one that does not use Monte-Carlo Tree Search (MCTS).
arXiv Detail & Related papers (2020-03-31T04:44:33Z)
- HMRL: Hyper-Meta Learning for Sparse Reward Reinforcement Learning Problem [107.52043871875898]
We develop a novel meta reinforcement learning framework called Hyper-Meta RL(HMRL) for sparse reward RL problems.
It consists of three modules, including a cross-environment meta-state embedding module that constructs a common meta-state space to adapt to different environments.
Experiments with sparse-reward environments show the superiority of HMRL on both transferability and policy learning efficiency.
arXiv Detail & Related papers (2020-02-11T07:31:11Z)
- Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks [48.5733173329785]
We present Neural MMO, a massively multiagent game environment inspired by MMOs.
We discuss our progress on two more general challenges in multiagent systems engineering for AI research: distributed infrastructure and game IO.
arXiv Detail & Related papers (2020-01-31T18:50:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.