DraftRec: Personalized Draft Recommendation for Winning in Multi-Player
Online Battle Arena Games
- URL: http://arxiv.org/abs/2204.12750v1
- Date: Wed, 27 Apr 2022 07:46:17 GMT
- Title: DraftRec: Personalized Draft Recommendation for Winning in Multi-Player
Online Battle Arena Games
- Authors: Hojoon Lee, Dongyoon Hwang, Hyunseung Kim, Byungkun Lee, Jaegul Choo
- Abstract summary: This paper presents a personalized character recommendation system for Multiplayer Online Battle Arena (MOBA) games.
We propose DraftRec, a novel hierarchical model which recommends characters by considering each player's champion preferences and the interaction between the players.
We train and evaluate our model on a manually collected dataset of 280,000 League of Legends matches and a publicly available dataset of 50,000 Dota 2 matches.
- Score: 25.615312782084455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a personalized character recommendation system
for Multiplayer Online Battle Arena (MOBA) games, one of the most popular online
video game genres worldwide. When playing MOBA games, players go through a draft
stage in which they alternately select a virtual character to play. When
drafting, players select characters by considering not only their own character
preferences but also the synergy and competence of their team's character
combination. However, the complexity of drafting makes it difficult for
beginners to choose appropriate characters that fit their team's composition
while also reflecting their own champion preferences. To
alleviate this problem, we propose DraftRec, a novel hierarchical model which
recommends characters by considering each player's champion preferences and the
interaction between the players. DraftRec consists of two networks: the player
network and the match network. The player network captures the individual
player's champion preference, and the match network integrates the complex
relationship between the players and their respective champions. We train and
evaluate our model on a manually collected dataset of 280,000 League of Legends
matches and a publicly available dataset of 50,000 Dota 2 matches. Empirically,
our method achieves state-of-the-art performance on both the character
recommendation and match outcome prediction tasks. Furthermore, a comprehensive
user survey
confirms that DraftRec provides convincing and satisfying recommendations. Our
code and dataset are available at https://github.com/dojeon-ai/DraftRec.
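
The abstract describes DraftRec's two-level design (a player network over each player's champion history and a match network over the ten players' representations) but not its exact architecture. The following is a minimal, illustrative PyTorch sketch of that description, not the authors' implementation (their repository linked above contains the real model); all module names, layer choices, dimensions, and the use of Transformer encoders are assumptions made here for clarity.

```python
# Illustrative sketch of a DraftRec-style hierarchical recommender.
# Not the authors' code; layer choices and dimensions are assumptions.
import torch
import torch.nn as nn

class PlayerNetwork(nn.Module):
    """Encodes each player's recent champion history into a preference vector."""
    def __init__(self, num_champions: int, dim: int = 64):
        super().__init__()
        self.champ_emb = nn.Embedding(num_champions, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, num_players, hist_len) champion ids per player
        b, p, t = history.shape
        x = self.champ_emb(history.view(b * p, t))   # (b*p, t, dim)
        x = self.encoder(x).mean(dim=1)              # pool over each history
        return x.view(b, p, -1)                      # (b, p, dim)

class MatchNetwork(nn.Module):
    """Models interactions among the players in one draft."""
    def __init__(self, num_champions: int, dim: int = 64, num_players: int = 10):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.rec_head = nn.Linear(dim, num_champions)    # champion scores
        self.win_head = nn.Linear(dim * num_players, 1)  # match-outcome logit

    def forward(self, player_repr: torch.Tensor):
        z = self.encoder(player_repr)                # (b, p, dim)
        rec_logits = self.rec_head(z)                # per-player champion scores
        win_logit = self.win_head(z.flatten(1))      # one logit per match
        return rec_logits, win_logit

class DraftRecSketch(nn.Module):
    def __init__(self, num_champions: int, dim: int = 64):
        super().__init__()
        self.player_net = PlayerNetwork(num_champions, dim)
        self.match_net = MatchNetwork(num_champions, dim)

    def forward(self, history: torch.Tensor):
        return self.match_net(self.player_net(history))

# Toy usage: 2 drafts, 10 players each, 20-game champion histories.
model = DraftRecSketch(num_champions=160)
rec_logits, win_logit = model(torch.randint(0, 160, (2, 10, 20)))
print(rec_logits.shape, win_logit.shape)  # (2, 10, 160) and (2, 1)
```

Under this reading, the recommendation head scores candidate champions for each player given the partially completed draft, while the win head supports the match outcome prediction task reported in the abstract.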
Related papers
- Multi-agent Multi-armed Bandits with Stochastic Sharable Arm Capacities [69.34646544774161]
We formulate a new variant of the multi-player multi-armed bandit (MAB) model, which captures the arrival of requests to each arm and the policy for allocating requests to players.
The challenge is how to design a distributed learning algorithm such that players select arms according to the optimal arm pulling profile.
We design an iterative distributed algorithm, which guarantees that players can arrive at a consensus on the optimal arm pulling profile in only M rounds.
arXiv Detail & Related papers (2024-08-20T13:57:00Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents must learn to make decisions that maximize their own goals while minimizing their adversaries' goals.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, how to map the strategies of specific opponents, and how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Action2Score: An Embedding Approach To Score Player Action [4.383011485317949]
In most Multiplayer Online Battle Arena (MOBA) games, a player's rank is determined by the match result (win or lose).
We propose a novel embedding model that converts a player's actions into quantitative scores based on the actions' respective contribution to the team's victory.
Our model is built using a sequence-based deep learning model with a novel loss function working on the team match.
arXiv Detail & Related papers (2022-07-21T04:23:14Z)
- Sequential Item Recommendation in the MOBA Game Dota 2 [64.8963467704218]
We explore the applicability of Sequential Item Recommendation (SIR) models in the context of purchase recommendations in Dota 2.
Our results show that models that consider the order of purchases are the most effective.
In contrast to other domains, we find RNN-based models to outperform the more recent Transformer-based architectures on Dota-350k.
arXiv Detail & Related papers (2022-01-17T14:19:17Z)
- Bayesian Learning of Play Styles in Multiplayer Video Games [0.0]
We develop a hierarchical Bayesian regression approach for the online multiplayer game Battlefield 3.
We discover common play styles amongst our sample of Battlefield 3 players.
We find groups of players that exhibit overall high performance, as well as groupings of players that perform particularly well in specific game types, maps and roles.
arXiv Detail & Related papers (2021-12-14T14:48:24Z)
- Player Modeling using Behavioral Signals in Competitive Online Games [4.168733556014873]
This paper focuses on the importance of addressing different aspects of playing behavior when modeling players for creating match-ups.
We engineer several behavioral features from a dataset of over 75,000 battle royale matches and create player models.
We then use the created models to predict ranks for different groups of players in the data.
arXiv Detail & Related papers (2021-11-29T22:53:17Z)
- L2E: Learning to Exploit Your Opponent [66.66334543946672]
We propose a novel Learning to Exploit framework for implicit opponent modeling.
L2E acquires the ability to exploit opponents through a few interactions with different opponents during training.
We propose a novel opponent strategy generation algorithm that produces effective opponents for training automatically.
arXiv Detail & Related papers (2021-02-18T14:27:59Z)
- Which Heroes to Pick? Learning to Draft in MOBA Games with Neural Networks and Tree Search [33.23242783135013]
State-of-the-art drafting methods fail to consider the multi-round nature of a MOBA 5v5 match series.
We propose a novel drafting algorithm based on neural networks and Monte-Carlo tree search, named JueWuDraft.
We demonstrate the practicality and effectiveness of JueWuDraft when compared to state-of-the-art drafting methods.
arXiv Detail & Related papers (2020-12-18T11:19:00Z)
- CRICTRS: Embeddings based Statistical and Semi Supervised Cricket Team Recommendation System [6.628230604022489]
We propose a semi-supervised statistical approach to build a team recommendation system for cricket.
We design a qualitative and quantitative rating system that also considers the strength of the opposition when evaluating player performance.
We also address a critical aspect of team composition: the number of batsmen and bowlers in the team.
arXiv Detail & Related papers (2020-10-26T15:35:44Z)
- Faster Algorithms for Optimal Ex-Ante Coordinated Collusive Strategies in Extensive-Form Zero-Sum Games [123.76716667704625]
We focus on the problem of finding an optimal strategy for a team of two players that faces an opponent in an imperfect-information zero-sum extensive-form game.
In that setting, it is known that the best the team can do is sample a profile of potentially randomized strategies (one per player) from a joint (a.k.a. correlated) probability distribution at the beginning of the game.
We provide an algorithm that computes such an optimal distribution by only using profiles where only one of the team members gets to randomize in each profile.
arXiv Detail & Related papers (2020-09-21T17:51:57Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.