Machine learning models for DOTA 2 outcomes prediction
- URL: http://arxiv.org/abs/2106.01782v1
- Date: Thu, 3 Jun 2021 12:10:26 GMT
- Title: Machine learning models for DOTA 2 outcomes prediction
- Authors: Kodirjon Akhmedov and Anh Huy Phan
- Abstract summary: This research paper predominantly focuses on building predictive machine learning and deep learning models to identify the outcome of the Dota 2 MOBA game.
Three models were investigated and compared: Linear Regression (LR), Neural Networks (NN), and Long Short-Term Memory (LSTM), a type of recurrent neural network.
- Score: 8.388178167818635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting the match outcome of real-time multiplayer online battle
arena (MOBA) games is one of the most important and exciting tasks in esports
analytics research. This research paper predominantly focuses on building
predictive machine learning and deep learning models to identify the outcome of
the Dota 2 MOBA game using a new method of multi-forward-step prediction. Three
models were investigated and compared: Linear Regression (LR), Neural Networks
(NN), and Long Short-Term Memory (LSTM), a type of recurrent neural network. To
achieve this goal, we developed a data-collecting Python server using Game
State Integration (GSI) to track the players' real-time data. Once exploratory
feature analysis and hyper-parameter tuning were done, our models were
evaluated on different players with dissimilar playing experience. The achieved
accuracy depends on the multi-forward prediction parameters: linear regression
reaches 69% in the worst case and 82% on average, while the deep learning
models reach an average accuracy of 88% for NN and 93% for LSTM.
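The abstract mentions a data-collecting Python server built on Dota 2's Game State Integration (GSI), in which the game client POSTs JSON game-state snapshots to a locally configured HTTP endpoint. Below is a minimal sketch of such a collector, assuming a standard GSI configuration file pointing at a local port; the port, handler name, and output file are illustrative placeholders, not details taken from the paper.

```python
# Minimal GSI collector sketch (stdlib only). Assumes the Dota 2 client is
# configured, via a GSI .cfg file, to POST JSON snapshots to this endpoint.
# The port and output path below are illustrative placeholders.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LOG_PATH = "gsi_snapshots.jsonl"  # hypothetical output file, one JSON object per line

class GSIHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        snapshot = json.loads(self.rfile.read(length) or b"{}")
        # Append each real-time snapshot for later feature extraction.
        with open(LOG_PATH, "a") as f:
            f.write(json.dumps(snapshot) + "\n")
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # suppress per-request console logging while collecting

if __name__ == "__main__":
    # The host/port must match the URI declared in the GSI configuration file.
    HTTPServer(("127.0.0.1", 3000), GSIHandler).serve_forever()
```

Multi-forward-step prediction with an LSTM can likewise be pictured as predicting the match outcome several time intervals ahead of the last observed game state. The sketch below assumes the GSI snapshots have already been aggregated into fixed-length feature vectors per time step; the layer sizes, sequence length, and forward-step count K are illustrative and not the paper's actual configuration.

```python
# Hedged PyTorch sketch of multi-forward-step outcome prediction: an LSTM
# reads a sequence of per-interval feature vectors and outputs P(win) for a
# point K intervals after the last observed step (labels are shifted forward
# by K when the training set is assembled). All sizes are illustrative.
import torch
import torch.nn as nn

class OutcomeLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # P(win), K steps ahead

model = OutcomeLSTM(n_features=30)             # e.g. 30 features per interval
x = torch.randn(8, 20, 30)                     # 8 matches, 20 observed intervals
p_win = model(x)                               # shape (8, 1)
```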
Related papers
- What Do Learning Dynamics Reveal About Generalization in LLM Reasoning? [83.83230167222852]
We find that a model's generalization behavior can be effectively characterized by a training metric we call pre-memorization train accuracy.
By connecting a model's learning behavior to its generalization, pre-memorization train accuracy can guide targeted improvements to training strategies.
arXiv Detail & Related papers (2024-11-12T09:52:40Z) - Physics-Inspired Deep Learning and Transferable Models for Bridge Scour Prediction [2.451326684641447]
We introduce scour physics-inspired neural networks (SPINNs) for bridge scour prediction using deep learning.
SPINNs integrate physics-based, empirical equations into deep neural networks and are trained using site-specific historical scour monitoring data.
Despite variation in performance, SPINNs outperformed pure data-driven models in the majority of cases.
arXiv Detail & Related papers (2024-07-01T13:08:09Z) - Machine Learning for Soccer Match Result Prediction [0.9002260638342727]
This chapter discusses available datasets, the types of models and features, and ways of evaluating model performance.
The aim of this chapter is to give a broad overview of the current state and potential future developments in machine learning for soccer match result prediction.
arXiv Detail & Related papers (2024-03-12T14:00:50Z) - Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z) - Optimizing Offensive Gameplan in the National Basketball Association with Machine Learning [0.0]
ORTG (Offensive Rating) was developed by Dean Oliver.
In this paper, the ORTG statistic was found to correlate with different NBA play types.
Using the models' accuracy as justification, the next step was to optimize the model's output.
arXiv Detail & Related papers (2023-08-13T22:03:35Z) - ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z) - Boosted Dynamic Neural Networks [53.559833501288146]
A typical early-exiting dynamic neural network (EDNN) has multiple prediction heads at different layers of the network backbone.
To optimize the model, these prediction heads together with the network backbone are trained on every batch of training data.
Treating training and testing inputs differently at the two phases will cause the mismatch between training and testing data distributions.
We formulate an EDNN as an additive model inspired by gradient boosting, and propose multiple training techniques to optimize the model effectively.
arXiv Detail & Related papers (2022-11-30T04:23:12Z) - Confidence-Nets: A Step Towards better Prediction Intervals for regression Neural Networks on small datasets [0.0]
We propose an ensemble method that attempts to estimate the uncertainty of predictions, increase their accuracy and provide an interval for the expected variation.
The proposed method is tested on various datasets, and a significant improvement in the performance of the neural network model is seen.
arXiv Detail & Related papers (2022-10-31T06:38:40Z) - Machine Learning in Sports: A Case Study on Using Explainable Models for Predicting Outcomes of Volleyball Matches [0.0]
This paper explores a two-phased Explainable Artificial Intelligence (XAI) approach to predict the outcomes of matches in the Brazilian Volleyball League (SuperLiga).
In the first phase, we directly use the interpretable rule-based ML models that provide a global understanding of the model's behaviors.
In the second phase, we construct non-linear models such as Support Vector Machine (SVM) and Deep Neural Network (DNN) to obtain predictive performance on the volleyball matches' outcomes.
arXiv Detail & Related papers (2022-06-18T18:09:15Z) - MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to large parameter capacity, but also lead to huge computation cost.
We explore accelerating large-model inference through conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.