Bandit Modeling of Map Selection in Counter-Strike: Global Offensive
- URL: http://arxiv.org/abs/2106.08888v1
- Date: Mon, 14 Jun 2021 23:47:36 GMT
- Title: Bandit Modeling of Map Selection in Counter-Strike: Global Offensive
- Authors: Guido Petri, Michael H. Stanley, Alec B. Hon, Alexander Dong, Peter Xenopoulos, Cláudio Silva
- Abstract summary: In Counter-Strike: Global Offensive (CSGO) matches, two teams first pick and ban maps, or virtual worlds, to play.
We introduce a contextual bandit framework to tackle the problem of map selection in CSGO and to investigate teams' pick and ban decision-making.
We find that teams have suboptimal map choice policies with respect to both picking and banning.
We also define an approach for rewarding bans, which has not been explored in the bandit setting, and find that incorporating ban rewards improves model performance.
- Score: 55.41644538483948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many esports use a pick and ban process to define the parameters of a match
before it starts. In Counter-Strike: Global Offensive (CSGO) matches, two teams
first pick and ban maps, or virtual worlds, to play. Teams typically ban and
pick maps based on a variety of factors, such as banning maps which they do not
practice, or choosing maps based on the team's recent performance. We introduce
a contextual bandit framework to tackle the problem of map selection in CSGO
and to investigate teams' pick and ban decision-making. Using a data set of
over 3,500 CSGO matches and over 25,000 map selection decisions, we consider
different framings for the problem, different contexts, and different reward
metrics. We find that teams have suboptimal map choice policies with respect to
both picking and banning. We also define an approach for rewarding bans, which
has not been explored in the bandit setting, and find that incorporating ban
rewards improves model performance. Finally, we determine that usage of our
model could improve teams' predicted map win probability by up to 11% and raise
overall match win probabilities by 19.8% for evenly-matched teams.
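The framing can be made concrete with a small sketch: maps are arms, the context is a feature vector describing the two teams, picks are rewarded with the map outcome, and bans with a proxy signal for the value denied to the opponent. This is a hedged illustration of a generic contextual bandit (disjoint LinUCB), not the authors' implementation; the map list, feature choices, and ban-reward proxy below are assumptions.

```python
# Minimal contextual-bandit sketch over the CSGO map pool (illustrative only).
import numpy as np

MAP_POOL = ["Dust2", "Mirage", "Inferno", "Nuke", "Overpass", "Vertigo", "Ancient"]

class LinUCBMapSelector:
    """Disjoint LinUCB: one ridge-regression reward model per map (arm)."""

    def __init__(self, n_features, alpha=1.0):
        self.alpha = alpha
        self.A = {m: np.eye(n_features) for m in MAP_POOL}    # per-arm X^T X + I
        self.b = {m: np.zeros(n_features) for m in MAP_POOL}  # per-arm X^T y

    def score(self, map_name, context):
        A_inv = np.linalg.inv(self.A[map_name])
        theta = A_inv @ self.b[map_name]
        # Mean reward estimate plus an exploration bonus (upper confidence bound).
        return context @ theta + self.alpha * np.sqrt(context @ A_inv @ context)

    def choose(self, context, available_maps):
        return max(available_maps, key=lambda m: self.score(m, context))

    def update(self, map_name, context, reward):
        # Picks: reward is the outcome on the chosen map (1 win, 0 loss).
        # Bans: an assumed proxy reward, e.g. the win probability denied to the
        # opponent on the banned map.
        self.A[map_name] += np.outer(context, context)
        self.b[map_name] += reward * context

# Toy usage: pick from the remaining pool given a small context vector
# (e.g. recent map win rates for both teams -- illustrative features only).
selector = LinUCBMapSelector(n_features=4)
ctx = np.array([0.55, 0.48, 0.60, 0.40])
pick = selector.choose(ctx, MAP_POOL)
selector.update(pick, ctx, reward=1.0)   # the team won the picked map
```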
Related papers
- Multi-agent Multi-armed Bandits with Stochastic Sharable Arm Capacities [69.34646544774161]
We formulate a new variant of the multi-player multi-armed bandit (MAB) model, which captures the arrival of requests at each arm and the policy for allocating requests to players.
The challenge is how to design a distributed learning algorithm such that players select arms according to the optimal arm pulling profile.
We design an iterative distributed algorithm, which guarantees that players can arrive at a consensus on the optimal arm pulling profile in only M rounds.
arXiv Detail & Related papers (2024-08-20T13:57:00Z)
- GCN-WP -- Semi-Supervised Graph Convolutional Networks for Win Prediction in Esports [84.55775845090542]
We propose a semi-supervised win prediction model for esports based on graph convolutional networks.
GCN-WP integrates over 30 features about the match and players and employs graph convolution to classify games based on their neighborhood.
Our model achieves state-of-the-art prediction accuracy when compared to machine learning or skill rating models for LoL.
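A single graph-convolution layer of the kind such a model builds on can be sketched as follows; this illustrates only the propagation rule, not GCN-WP's actual architecture or features.

```python
# Sketch of one Kipf & Welling-style graph-convolution layer (not GCN-WP itself).
import numpy as np

def gcn_layer(A, X, W):
    """A: (n, n) adjacency of the match-neighborhood graph,
       X: (n, d_in) per-match feature matrix, W: (d_in, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))       # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

# Toy usage: 5 matches, ~30 features each, projected to 16 hidden units.
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops
H = gcn_layer(A, rng.normal(size=(5, 30)), rng.normal(size=(30, 16)))
```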
arXiv Detail & Related papers (2022-07-26T21:38:07Z)
- Collusion Detection in Team-Based Multiplayer Games [57.153233321515984]
We propose a system that detects colluding behaviors in team-based multiplayer games.
The proposed method analyzes the players' social relationships paired with their in-game behavioral patterns.
We then automate the detection using Isolation Forest, an unsupervised learning technique specialized in highlighting outliers.
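The detection step can be sketched with scikit-learn's IsolationForest; the pair-level feature columns below are illustrative stand-ins, not the paper's actual inputs or pipeline.

```python
# Sketch of the anomaly-detection step with scikit-learn's IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Rows = player pairs; columns = e.g. [shared matches, friend-list link,
# suspicious trades, mutual kill rate] -- stand-ins for social + behavioral features.
X = rng.normal(size=(1000, 4))

detector = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
labels = detector.fit_predict(X)        # -1 = flagged as a potential colluding pair
flagged = np.where(labels == -1)[0]
```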
arXiv Detail & Related papers (2022-03-10T02:37:39Z)
- Sequential Item Recommendation in the MOBA Game Dota 2 [64.8963467704218]
We explore the applicability of Sequential Item Recommendation (SIR) models in the context of purchase recommendations in Dota 2.
Our results show that models that consider the order of purchases are the most effective.
In contrast to other domains, we find RNN-based models to outperform the more recent Transformer-based architectures on Dota-350k.
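A minimal RNN baseline for next-purchase prediction could look like the sketch below; the item-vocabulary size, dimensions, and sequence lengths are placeholders, not the Dota-350k setup or the paper's architecture.

```python
# Minimal GRU next-item model in the spirit of the RNN baselines (illustrative).
import torch
import torch.nn as nn

class NextPurchaseGRU(nn.Module):
    def __init__(self, n_items, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_items, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_items)       # scores over the item vocabulary

    def forward(self, item_seq):                    # (batch, seq_len) item ids
        h, _ = self.gru(self.emb(item_seq))
        return self.out(h[:, -1])                   # predict the next purchase

model = NextPurchaseGRU(n_items=300)
scores = model(torch.randint(0, 300, (8, 12)))     # 8 sequences of 12 purchases
next_item = scores.argmax(dim=-1)
```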
arXiv Detail & Related papers (2022-01-17T14:19:17Z)
- Prediction of IPL Match Outcome Using Machine Learning Techniques [0.0]
The Indian Premier League (IPL) is a national cricket tournament whose players are drawn from India's regional teams, the national team, and international teams.
Factors such as live streaming and radio and TV broadcasts have made the league popular among cricket fans.
Predicting the outcome of IPL matches is of great interest to online traders and sponsors.
arXiv Detail & Related papers (2021-09-30T09:45:34Z)
- Optimal Team Economic Decisions in Counter-Strike [0.0]
We introduce a game-level win probability model to predict a team's chance of winning a game at the beginning of a given round.
Using our win probability model, we investigate optimal team spending decisions for important game scenarios.
Finally, we introduce a metric, Optimal Spending Error (OSE), to rank teams by how closely their spending decisions follow our predicted optimal spending decisions.
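The OSE idea can be illustrated with a short sketch; the exact formula is not given in this summary, so the averaging below is a plausible reading rather than the paper's definition, and the spend values are made up.

```python
# Sketch of an Optimal Spending Error-style metric: how far a team's actual buy
# deviates from the model-recommended spend, averaged over rounds (assumed form).
def optimal_spending_error(actual_spend, optimal_spend):
    assert len(actual_spend) == len(optimal_spend)
    gaps = [abs(a - o) for a, o in zip(actual_spend, optimal_spend)]
    return sum(gaps) / len(gaps)

# Toy usage: per-round spend (in-game dollars) vs. the model's recommendation.
print(optimal_spending_error([4200, 2000, 500], [4000, 2600, 0]))   # -> 433.33
```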
arXiv Detail & Related papers (2021-09-20T15:16:36Z)
- Which Heroes to Pick? Learning to Draft in MOBA Games with Neural Networks and Tree Search [33.23242783135013]
State-of-the-art drafting methods fail to consider the multi-round nature of a MOBA 5v5 match series.
We propose a novel drafting algorithm based on neural networks and Monte-Carlo tree search, named JueWuDraft.
We demonstrate the practicality and effectiveness of JueWuDraft when compared to state-of-the-art drafting methods.
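At the core of the tree-search component is a UCT-style selection rule; the sketch below shows only that rule with illustrative hero names, and omits the learned value and policy networks that JueWuDraft couples it with.

```python
# Sketch of the UCT selection rule used in Monte-Carlo tree search (illustrative).
import math

def uct_score(child_value_sum, child_visits, parent_visits, c=1.4):
    if child_visits == 0:
        return float("inf")                      # always expand unvisited picks
    exploit = child_value_sum / child_visits     # mean value of this hero pick
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_pick(children):
    """children: {hero: (value_sum, visits)}; returns the hero to descend into."""
    parent_visits = sum(v for _, v in children.values()) or 1
    return max(children, key=lambda h: uct_score(*children[h], parent_visits))

print(select_pick({"hero_a": (6.0, 10), "hero_b": (3.0, 4), "hero_c": (0.0, 0)}))
```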
arXiv Detail & Related papers (2020-12-18T11:19:00Z)
- Valuing Player Actions in Counter-Strike: Global Offensive [4.621805808537653]
Using over 70 million in-game CSGO events, we demonstrate our framework's consistency and independence.
We also provide use cases demonstrating high-impact play identification and uncertainty estimation.
arXiv Detail & Related papers (2020-11-02T21:11:14Z)
- Predictive Bandits [68.8204255655161]
We introduce and study a new class of bandit problems, referred to as predictive bandits.
In each round, the decision maker first decides whether to gather information about the rewards of particular arms.
The decision maker then selects an arm to be actually played in the round.
arXiv Detail & Related papers (2020-04-02T17:12:33Z)
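A toy round of that protocol might look like the sketch below; the probing cost, noise model, and decision rule are assumptions for illustration, not the paper's formal setup.

```python
# Toy round of a predictive-bandit protocol: optionally pay to observe a noisy
# prediction of an arm's reward, then play an arm (illustrative assumptions).
import random

def play_round(true_means, estimates, counts, probe_cost=0.05, noise=0.1):
    # Step 1: gather information on the least-observed arm while data is scarce.
    spent = 0.0
    if min(counts) < 5:
        k = counts.index(min(counts))
        estimates[k] = true_means[k] + random.gauss(0, noise)   # noisy prediction
        spent = probe_cost
    # Step 2: play the arm with the best current estimate.
    arm = max(range(len(estimates)), key=lambda i: estimates[i])
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    return reward - spent

estimates, counts = [0.5, 0.5, 0.5], [0, 0, 0]
total = sum(play_round([0.3, 0.6, 0.45], estimates, counts) for _ in range(200))
```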