Prediction of Football Player Value using Bayesian Ensemble Approach
- URL: http://arxiv.org/abs/2206.13246v1
- Date: Fri, 24 Jun 2022 07:13:53 GMT
- Title: Prediction of Football Player Value using Bayesian Ensemble Approach
- Authors: Hansoo Lee, Bayu Adhi Tama, Meeyoung Cha
- Abstract summary: We present a case study on the key factors affecting the world's top soccer players' transfer fees based on the FIFA data analysis.
To predict each player's market value, we propose an improved LightGBM model using a Tree-structured Parzen Estimator (TPE) algorithm.
- Score: 13.163358022899335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The transfer fees of sports players have become astronomical because
signing players of great future value is essential to a club's survival. We present
a case study on the key factors affecting the transfer fees of the world's top
soccer players, based on an analysis of FIFA data. To predict each player's market
value, we propose an improved LightGBM model whose hyperparameters are optimized
with the Tree-structured Parzen Estimator (TPE) algorithm, and we identify the most
influential features with the SHapley Additive exPlanations (SHAP) method. The
proposed method is compared against baseline regression models (linear regression,
lasso, elastic net, and kernel ridge regression) and gradient boosting models
without hyperparameter optimization. On average, the optimized LightGBM model
achieves approximately 3.8, 1.4, and 1.8 times lower RMSE than the regression
baselines, GBDT, and the unoptimized LightGBM model, respectively. Our model also
offers interpretability, indicating which attributes football clubs should weigh
when recruiting players in the future.
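To make the pipeline concrete, below is a minimal sketch of how TPE-based hyperparameter optimization for LightGBM and SHAP feature attribution could be wired together. Optuna's TPESampler stands in for the paper's TPE implementation; the data loader, feature set, and search space are illustrative assumptions, not the authors' exact configuration.
```python
# Minimal sketch (not the authors' exact code) of TPE-optimized LightGBM plus SHAP
# attribution for player market value. load_fifa_features() is a hypothetical loader.
import numpy as np
import optuna
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = load_fifa_features()  # hypothetical: player attributes and market values
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

def objective(trial):
    # Illustrative search space for the LightGBM regressor
    params = {
        "num_leaves": trial.suggest_int("num_leaves", 16, 256),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
    }
    model = lgb.LGBMRegressor(**params, random_state=42)
    model.fit(X_train, y_train)
    pred = model.predict(X_val)
    return np.sqrt(mean_squared_error(y_val, pred))  # RMSE, the paper's metric

# TPE-guided hyperparameter search
study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=100)

best = lgb.LGBMRegressor(**study.best_params, random_state=42).fit(X_train, y_train)

# SHAP explains which attributes drive each predicted market value
explainer = shap.TreeExplainer(best)
shap_values = explainer.shap_values(X_val)
shap.summary_plot(shap_values, X_val)
```
In a fuller implementation the objective would use cross-validation and early stopping rather than a single validation split; the SHAP summary then ranks the input attributes by their contribution to predicted value, which is the interpretability step the abstract refers to.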
Related papers
- General Preference Modeling with Preference Representations for Aligning Language Models [51.14207112118503]
We introduce preference representation learning, an approach that embeds responses into a latent space to capture intricate preference structures efficiently.
We also propose preference score-based General Preference Optimization (GPO), which generalizes reward-based reinforcement learning from human feedback.
Our method may enhance the alignment of foundation models with nuanced human values.
arXiv Detail & Related papers (2024-10-03T04:22:55Z)
- Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization [75.1240295759264]
We propose an effective framework for Bridging and Modeling Correlations in pairwise data, named BMC.
We increase the consistency and informativeness of the pairwise preference signals through targeted modifications.
We identify that DPO alone is insufficient to model these correlations and capture nuanced variations.
arXiv Detail & Related papers (2024-08-14T11:29:47Z)
- Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning [55.65738319966385]
We propose a novel online algorithm, iterative Nash policy optimization (INPO).
Unlike previous methods, INPO bypasses the need for estimating the expected win rate for individual responses.
With an LLaMA-3-8B-based SFT model, INPO achieves a 42.6% length-controlled win rate on AlpacaEval 2.0 and a 37.8% win rate on Arena-Hard.
arXiv Detail & Related papers (2024-06-30T08:00:34Z)
- Deep Learning and Transfer Learning Architectures for English Premier League Player Performance Forecasting [0.0]
This paper presents a groundbreaking model for forecasting English Premier League (EPL) player performance using convolutional neural networks (CNNs).
We evaluate Ridge regression, LightGBM and CNNs on the task of predicting upcoming player FPL score based on historical EPL data over the previous weeks.
Our optimal CNN architecture achieves better performance with fewer input features and even outperforms the best previous EPL player performance forecasting models in the literature.
arXiv Detail & Related papers (2024-05-03T18:13:52Z)
- Self-Play Preference Optimization for Language Model Alignment [75.83359213697854]
Recent advancements suggest that directly working with preference probabilities can yield a more accurate reflection of human preferences.
We propose a self-play-based method for language model alignment, which treats the problem as a constant-sum two-player game.
Our approach, dubbed Self-Play Preference Optimization (SPPO), utilizes iterative policy updates to provably approximate the Nash equilibrium.
arXiv Detail & Related papers (2024-05-01T17:59:20Z)
- Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning [55.96599486604344]
We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process.
We use Monte Carlo Tree Search (MCTS) to iteratively collect preference data, utilizing its look-ahead ability to break down instance-level rewards into more granular step-level signals.
The proposed algorithm employs Direct Preference Optimization (DPO) to update the LLM policy using this newly generated step-level preference data.
arXiv Detail & Related papers (2024-05-01T11:10:24Z)
- Explainable artificial intelligence model for identifying Market Value in Professional Soccer Players [2.2590064835234913]
Using data on about 12,000 players from Sofifa, the Boruta algorithm streamlined feature selection.
The Gradient Boosting Decision Tree (GBDT) model excelled in predictive accuracy, with an R-squared of 0.901 and a Root Mean Squared Error (RMSE) of 3,221,632.175 (a minimal sketch of this feature-selection-plus-GBDT pipeline appears after this list).
arXiv Detail & Related papers (2023-11-08T11:01:32Z)
- Optimizing Offensive Gameplan in the National Basketball Association with Machine Learning [0.0]
ORTG (Offensive Rating) was developed by Dean Oliver.
In this paper, the statistic ORTG was found to have a correlation with different NBA playtypes.
Using the models' accuracy as justification, the next step was to optimize the model's output.
arXiv Detail & Related papers (2023-08-13T22:03:35Z)
- Mismatched No More: Joint Model-Policy Optimization for Model-Based RL [172.37829823752364]
We propose a single objective for jointly training the model and the policy, such that updates to either component increase a lower bound on expected return.
Our objective is a global lower bound on expected return, and this bound becomes tight under certain assumptions.
The resulting algorithm (MnM) is conceptually similar to a GAN.
arXiv Detail & Related papers (2021-10-06T13:43:27Z)
- Markov Cricket: Using Forward and Inverse Reinforcement Learning to Model, Predict And Optimize Batting Performance in One-Day International Cricket [0.8122270502556374]
We model one-day international cricket games as Markov processes, applying forward and inverse Reinforcement Learning (RL) to develop three novel tools for the game.
We show that, when used as a proxy for remaining scoring resources, this approach outperforms the state-of-the-art Duckworth-Lewis-Stern method by 3 to 10 fold.
We envisage our prediction and simulation techniques may provide a fairer alternative for estimating final scores in interrupted games, while the inferred reward model may provide useful insights for the professional game to optimize playing strategy.
arXiv Detail & Related papers (2021-03-07T13:11:16Z)
- Stochastic Optimization for Performative Prediction [31.876692592395777]
We study the difference between merely updating model parameters and deploying the new model.
We prove rates of convergence for both greedily deploying models after each update and for taking several updates before redeploying.
These results illustrate how, depending on the strength of performative effects, there exists a regime in which either approach outperforms the other.
arXiv Detail & Related papers (2020-06-12T00:31:16Z)
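For the explainable-AI market value entry above (the sketch promised after the list), the following is one plausible way, under illustrative assumptions, to pair Boruta feature selection with a gradient boosting regressor; load_sofifa_players() and all hyperparameters are hypothetical stand-ins rather than that paper's reported setup.
```python
# Hedged sketch: Boruta feature screening followed by a GBDT market-value regressor.
# The loader and hyperparameters are illustrative, not the cited paper's configuration.
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = load_sofifa_players()         # hypothetical: ~12,000 player records
X, y = np.asarray(X), np.asarray(y)  # BorutaPy expects NumPy arrays

# Boruta compares each feature against shuffled "shadow" copies and keeps only
# features that consistently beat them.
rf = RandomForestRegressor(n_jobs=-1, max_depth=7)
selector = BorutaPy(rf, n_estimators="auto", random_state=42)
selector.fit(X, y)
X_sel = selector.transform(X)

X_train, X_test, y_train, y_test = train_test_split(
    X_sel, y, test_size=0.2, random_state=42
)

gbdt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05)
gbdt.fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, gbdt.predict(X_test)))
print(f"Market value RMSE: {rmse:,.0f}")
```
Boruta requires an estimator exposing feature_importances_, which is why a random forest is used for the screening step even though the final predictor here is a GBDT.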