Analyzing Skill Element in Online Fantasy Cricket
- URL: http://arxiv.org/abs/2512.22254v1
- Date: Wed, 24 Dec 2025 06:55:23 GMT
- Title: Analyzing Skill Element in Online Fantasy Cricket
- Authors: Sarthak Sarkar, Supratim Das, Purushottam Saha, Diganta Mukherjee, Tridib Mukherjee
- Abstract summary: We develop a statistical framework to assess the role of skill in determining success on online fantasy cricket platforms. Strategy performance is evaluated based on points, ranks, and payoff under two contest structures: Mega and 4x or Nothing. To capture adaptive behavior, we introduce a dynamic tournament model in which agent populations evolve through a softmax reweighting mechanism.
- Score: 1.6093668627931699
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Online fantasy cricket has emerged as a large-scale competitive system in which participants construct virtual teams and compete based on real-world player performances. This massive growth has been accompanied by important questions about whether outcomes are primarily driven by skill or chance. We develop a statistical framework to assess the role of skill in determining success on these platforms. We construct and analyze a range of deterministic and stochastic team selection strategies based on recent form, historical statistics, statistical optimization, and multi-criteria decision making. Strategy performance is evaluated based on points, ranks, and payoff under two contest structures: Mega and 4x or Nothing. We compare the strategies extensively to identify an optimal set. To capture adaptive behavior, we further introduce a dynamic tournament model in which agent populations evolve through a softmax reweighting mechanism proportional to positive payoff realizations. We demonstrate our work by running extensive numerical experiments on the IPL 2024 dataset. The results provide quantitative evidence in favor of the skill element present in online fantasy cricket platforms.
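The abstract describes a dynamic tournament model in which agent populations evolve via a softmax reweighting mechanism proportional to positive payoff realizations. A minimal sketch of one such update step (the function name, temperature parameter, and the clipping of negative payoffs to zero are illustrative assumptions, not the paper's actual specification):

```python
import numpy as np

def softmax_reweight(population_weights, payoffs, temperature=1.0):
    """One evolution step: reweight a strategy population by a softmax
    over positive payoff realizations (hypothetical formulation).

    population_weights : current population shares, summing to 1
    payoffs            : realized payoff per strategy in the last round
    temperature        : softmax temperature controlling update sharpness
    """
    pos = np.maximum(payoffs, 0.0)          # keep only positive payoffs
    logits = pos / temperature
    logits -= logits.max()                  # shift for numerical stability
    new_weights = population_weights * np.exp(logits)
    return new_weights / new_weights.sum()  # renormalize to a distribution

# Example: three strategies with equal initial shares; only the first
# realized a positive payoff, so its population share grows.
weights = softmax_reweight(np.ones(3) / 3, np.array([10.0, -5.0, 0.0]))
```

Under this sketch, strategies with larger positive payoffs gain population share each round, while strategies with zero or negative payoffs shrink toward a common floor.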
Related papers
- CATArena: Evaluation of LLM Agents through Iterative Tournament Competitions [49.02422075498554]
Large Language Model (LLM) agents have evolved from basic text generation to autonomously completing complex tasks through interaction with external tools. In this work, we emphasize the importance of learning ability, including both self-improvement and peer-learning, as a core driver for agent evolution toward human-level intelligence. We propose an iterative, competitive peer-learning framework, which allows agents to refine and optimize their strategies through repeated interactions and feedback.
arXiv Detail & Related papers (2025-10-30T15:22:53Z)
- Who is a Better Player: LLM against LLM [53.46608216197315]
We propose an adversarial benchmarking framework to assess the comprehensive performance of Large Language Models (LLMs) through board games competition. We introduce Qi Town, a specialized evaluation platform that supports 5 widely played games and involves 20 LLM-driven players.
arXiv Detail & Related papers (2025-08-05T06:41:47Z)
- MetaScale: Test-Time Scaling with Evolving Meta-Thoughts [51.35594569020857]
Experimental results demonstrate that MetaScale consistently outperforms standard inference approaches. MetaScale scales more effectively with increasing sampling budgets and produces more structured, expert-level responses.
arXiv Detail & Related papers (2025-03-17T17:59:54Z)
- Optimizing Fantasy Sports Team Selection with Deep Reinforcement Learning [0.2399911126932527]
We develop a model that can adaptively select players to maximize the team's potential performance. Our approach leverages historical player data to train RL algorithms, which then predict future performance and optimize team composition. Our results show that RL-based strategies provide valuable insights into player selection in fantasy sports.
arXiv Detail & Related papers (2024-12-26T13:36:18Z)
- An Integrated Framework for Team Formation and Winner Prediction in the FIRST Robotics Competition: Model, Algorithm, and Analysis [0.0]
We apply our method to the drafting process of the FIRST Robotics competition.
First, we develop a method that can extrapolate individual members' performance based on overall team performance.
An alliance optimization algorithm is developed to optimize team formation and a deep neural network model is trained to predict the winning team.
arXiv Detail & Related papers (2024-01-06T23:11:50Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents have to learn decisions that maximize their goals and minimize their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, how to map the strategy of specific opponents, and how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Benchmarking Robustness and Generalization in Multi-Agent Systems: A Case Study on Neural MMO [50.58083807719749]
We present the results of the second Neural MMO challenge, hosted at IJCAI 2022, which received 1600+ submissions.
This competition targets robustness and generalization in multi-agent systems.
We will open-source our benchmark including the environment wrapper, baselines, a visualization tool, and selected policies for further research.
arXiv Detail & Related papers (2023-08-30T07:16:11Z)
- Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity [49.68758494467258]
We study how to construct diverse populations of agents by carefully structuring how individuals within a population interact.
Our approach is based on interaction graphs, which control the flow of information between agents during training.
We provide evidence for the importance of diversity in multi-agent training and analyse the effect of applying different interaction graphs on the training trajectories, diversity, and performance of populations in a range of games.
arXiv Detail & Related papers (2021-10-08T11:29:52Z)
- Evaluating Team Skill Aggregation in Online Competitive Games [4.168733556014873]
We present an analysis of the impact of two new aggregation methods on the predictive performance of rating systems.
Our evaluations show the superiority of the MAX method over the other two methods in the majority of the tested cases.
Results of this study highlight the necessity of devising more elaborate methods for calculating a team's performance.
arXiv Detail & Related papers (2021-06-21T20:17:36Z)
- Competitive Balance in Team Sports Games [8.321949054700086]
We show that using the final score difference provides a better prediction metric for competitive balance.
We also show that a linear model trained on a carefully selected set of team and individual features achieves almost the performance of the more powerful neural network model.
arXiv Detail & Related papers (2020-06-24T14:19:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.