Skill Issues: An Analysis of CS:GO Skill Rating Systems
- URL: http://arxiv.org/abs/2410.02831v1
- Date: Tue, 1 Oct 2024 23:19:31 GMT
- Title: Skill Issues: An Analysis of CS:GO Skill Rating Systems
- Authors: Mikel Bober-Irizar, Naunidh Dua, Max McGuinness
- Abstract summary: Elo, Glicko2 and TrueSkill are studied through the lens of surrogate modelling.
We look at both overall performance and data efficiency, and perform a sensitivity analysis based on a large dataset of Counter-Strike: Global Offensive matches.
- Score: 0.24578723416255746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The meteoric rise of online games has created a need for accurate skill rating systems for tracking improvement and fair matchmaking. Although many skill rating systems are deployed, with various theoretical foundations, less work has been done at analysing the real-world performance of these algorithms. In this paper, we perform an empirical analysis of Elo, Glicko2 and TrueSkill through the lens of surrogate modelling, where skill ratings influence future matchmaking with a configurable acquisition function. We look both at overall performance and data efficiency, and perform a sensitivity analysis based on a large dataset of Counter-Strike: Global Offensive matches.
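Of the three systems the paper compares, Elo has the simplest update rule and is a useful reference point. A minimal sketch of the standard Elo update follows; the K-factor of 32 and the 400-point logistic scale are conventional defaults, not the configuration used in the paper's experiments:

```python
def elo_expected(r_a, r_b):
    """Expected score of player A against player B (logistic curve, base 10)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    """One Elo update after a match; score_a is 1 (win), 0.5 (draw), 0 (loss).

    Returns the pair of updated ratings. Ratings move by k times the gap
    between the observed and expected outcome, so the update is zero-sum.
    """
    e_a = elo_expected(r_a, r_b)
    return (r_a + k * (score_a - e_a),
            r_b + k * ((1.0 - score_a) - (1.0 - e_a)))
```

For two equally rated players (expected score 0.5), a win moves the winner up by k/2 and the loser down by the same amount. Glicko2 and TrueSkill extend this idea by additionally tracking a per-player rating uncertainty.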
Related papers
- Understanding why shooters shoot -- An AI-powered engine for basketball
performance profiling [70.54015529131325]
Basketball is dictated by many variables, such as playstyle and game dynamics.
It is crucial that the performance profiles can reflect the diverse playstyles.
We present a tool that can visualize player performance profiles in a timely manner.
arXiv Detail & Related papers (2023-03-17T01:13:18Z) - QuickSkill: Novice Skill Estimation in Online Multiplayer Games [19.364132825629465]
Current matchmaking rating algorithms require a considerable number of games to learn the true skill of a new player.
This is known as the "cold-start" problem for matchmaking rating algorithms.
This paper proposes QuickSKill, a deep learning based novice skill estimation framework.
arXiv Detail & Related papers (2022-08-15T11:59:05Z) - Collusion Detection in Team-Based Multiplayer Games [57.153233321515984]
We propose a system that detects colluding behaviors in team-based multiplayer games.
The proposed method analyzes the players' social relationships paired with their in-game behavioral patterns.
We then automate the detection using Isolation Forest, an unsupervised learning technique specialized in highlighting outliers.
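The core idea behind Isolation Forest is that anomalous points are separated from the rest of the data in fewer random splits than typical points. A minimal pure-Python sketch of that scoring principle (not the paper's detector, which pairs these scores with social-relationship features) is:

```python
import random

def _isolation_depth(point, data, depth=0, limit=10):
    # Grow one random tree on the fly: pick a random dimension and a random
    # split, keep only the points on the same side as `point`, and recurse.
    # The returned depth is how many splits it took to isolate `point`.
    if depth >= limit or len(data) <= 1:
        return depth
    dim = random.randrange(len(point))
    lo = min(x[dim] for x in data)
    hi = max(x[dim] for x in data)
    if lo == hi:
        return depth
    split = random.uniform(lo, hi)
    same_side = [x for x in data if (x[dim] < split) == (point[dim] < split)]
    return _isolation_depth(point, same_side, depth + 1, limit)

def isolation_score(point, data, n_trees=100):
    """Average isolation depth over many random trees.

    Lower average depth means the point is easier to isolate, i.e. more
    anomalous relative to `data`.
    """
    return sum(_isolation_depth(point, data) for _ in range(n_trees)) / n_trees
```

An outlier far from the bulk of the data is typically isolated within the first couple of splits, while a point inside a dense cluster requires many more.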
arXiv Detail & Related papers (2022-03-10T02:37:39Z) - Learning to Identify Top Elo Ratings: A Dueling Bandits Approach [27.495132915328025]
We propose an efficient online match scheduling algorithm to improve the sample efficiency of the Elo evaluation (for top players).
Specifically, we identify and match the top players through a dueling bandits framework and tailor the bandit algorithm to the gradient-based update of Elo.
Our algorithm has a regret guarantee of $\tilde{O}(\sqrt{T})$, sublinear in the number of competition rounds, and has been extended to multidimensional Elo ratings.
arXiv Detail & Related papers (2022-01-12T13:57:29Z) - Evaluating Team Skill Aggregation in Online Competitive Games [4.168733556014873]
We present an analysis of the impact of two new aggregation methods on the predictive performance of rating systems.
Our evaluations show the superiority of the MAX method over the other two methods in the majority of the tested cases.
Results of this study highlight the necessity of devising more elaborated methods for calculating a team's performance.
arXiv Detail & Related papers (2021-06-21T20:17:36Z) - The Evaluation of Rating Systems in Team-based Battle Royale Games [4.168733556014873]
This paper explores the utility of several metrics for evaluating three popular rating systems on a real-world dataset of over 25,000 team battle royale matches.
Normalized discounted cumulative gain (NDCG) demonstrated more reliable performance and greater flexibility.
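NDCG compares a produced ranking against the ideal ordering of the same items, discounting positions logarithmically so that getting the top of the ranking right matters most. A minimal sketch (graded-relevance form; the battle-royale paper applies this to predicted versus actual team placements):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """DCG normalized by the ideal (descending-sorted) ordering.

    Returns a value in [0, 1]; 1.0 means the ranking is already ideal.
    """
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered list scores 1.0, and swapping items near the top costs more than swapping items near the bottom, which is what makes NDCG well suited to evaluating how a rating system orders teams.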
arXiv Detail & Related papers (2021-05-28T19:22:07Z) - An Empirical Study on the Generalization Power of Neural Representations
Learned via Visual Guessing Games [79.23847247132345]
This work investigates how well an artificial agent can benefit from playing guessing games when later asked to perform novel NLP downstream tasks such as Visual Question Answering (VQA).
We propose two ways to exploit playing guessing games: 1) a supervised learning scenario in which the agent learns to mimic successful guessing games, and 2) a novel way for an agent to play by itself, called Self-play via Iterated Experience Learning (SPIEL).
arXiv Detail & Related papers (2021-01-31T10:30:48Z) - Game Plan: What AI can do for Football, and What Football can do for AI [83.79507996785838]
Predictive and prescriptive football analytics require new developments and progress at the intersection of statistical learning, game theory, and computer vision.
We illustrate that football analytics is a game changer of tremendous value, in terms of not only changing the game of football itself, but also in terms of what this domain can mean for the field of AI.
arXiv Detail & Related papers (2020-11-18T10:26:02Z) - Competitive Balance in Team Sports Games [8.321949054700086]
We show that using the final score difference provides an even better prediction metric for competitive balance.
We also show that a linear model trained on a carefully selected set of team and individual features achieves nearly the performance of the more powerful neural network model.
arXiv Detail & Related papers (2020-06-24T14:19:07Z) - Evaluating and Rewarding Teamwork Using Cooperative Game Abstractions [103.3630903577951]
We use cooperative game theory to study teams of artificial RL agents as well as real world teams from professional sports.
We introduce a parametric model called cooperative game abstractions (CGAs) for estimating CFs from data.
We provide identification results and sample complexity bounds for CGA models, as well as error bounds for estimating the Shapley value using CGAs.
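The Shapley value credits each player with their average marginal contribution over all orderings in which the team could have been assembled. For small teams it can be computed exactly, which is the quantity the CGA estimators above approximate from data. A minimal exact sketch (the `value` function here is a stand-in for whatever characteristic function the application defines):

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values for a small cooperative game.

    `value` maps a frozenset coalition to its payoff. For each ordering of
    the players, each player is credited with the payoff gain from joining
    the coalition of those before them; averaging over all orderings gives
    the Shapley value. Cost is O(n! * n), so this only suits small n.
    """
    totals = {p: 0.0 for p in players}
    n_orders = 0
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += value(with_p) - value(coalition)
            coalition = with_p
        n_orders += 1
    return {p: totals[p] / n_orders for p in players}
```

By construction the values sum to the grand coalition's payoff (efficiency), and in a purely additive game each player's Shapley value is exactly their own contribution.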
arXiv Detail & Related papers (2020-06-16T22:03:36Z) - Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.