Estimating player completion rate in mobile puzzle games using
reinforcement learning
- URL: http://arxiv.org/abs/2306.14626v1
- Date: Mon, 26 Jun 2023 12:00:05 GMT
- Title: Estimating player completion rate in mobile puzzle games using
reinforcement learning
- Authors: Jeppe Theiss Kristensen, Arturo Valdivia, Paolo Burelli
- Abstract summary: We train an RL agent and measure the number of moves required to complete a level.
This is then compared to the level completion rate of a large sample of real players.
We find that the strongest predictor of a level's player completion rate is the number of moves the agent takes in its best 5% of runs on that level.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we investigate whether it is plausible to use the performance of
a reinforcement learning (RL) agent to estimate difficulty, measured as the
player completion rate, of different levels in the mobile puzzle game Lily's
Garden. For this purpose we train an RL agent and measure the number of moves
required to complete a level. This is then compared to the level completion
rate of a large sample of real players. We find that the strongest predictor of
a level's player completion rate is the number of moves the agent takes in its
best ~5% of runs on that level. A very interesting observation is that, while
in absolute terms the agent is unable to reach human-level performance across
all levels, the differences in its behaviour between levels are highly
correlated with the differences in human behaviour. Thus, despite performing
sub-par, it is still possible to use the performance of the agent to estimate,
and perhaps further model, player metrics.
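The predictor described in the abstract can be sketched as follows: for each level, take the move counts of the agent's best (fewest-moves) ~5% of runs and correlate the resulting per-level statistic with observed player completion rates. This is an illustrative reconstruction, not the paper's code; level names, run counts, and completion rates below are made up.

```python
import numpy as np

def best_runs_move_count(moves_per_run, quantile=0.05):
    """Mean number of moves among the best (fewest-moves) `quantile` of runs."""
    moves = np.sort(np.asarray(moves_per_run))
    k = max(1, int(len(moves) * quantile))  # keep at least one run
    return moves[:k].mean()

# Simulated agent runs for three levels (moves needed to complete each run);
# higher Poisson rates stand in for harder levels.
rng = np.random.default_rng(0)
agent_runs = {f"level_{i}": rng.poisson(lam, size=200)
              for i, lam in enumerate([12, 20, 35])}

predictor = np.array([best_runs_move_count(r) for r in agent_runs.values()])
completion_rate = np.array([0.92, 0.75, 0.48])  # illustrative player data

# Pearson correlation between the agent-derived predictor and the player metric;
# more moves needed should correlate with lower completion rates.
r = np.corrcoef(predictor, completion_rate)[0, 1]
print(f"correlation: {r:.2f}")
```

Harder levels cost the agent more moves even when it cannot match human play, which is why a relative, per-level statistic like this can still track human completion rates.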
Related papers
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with scripted AI produces a higher-performing and more robust agent than either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z) - Behavioural Cloning in VizDoom [1.4999444543328293]
This paper describes methods for training autonomous agents to play the game "Doom 2" through Imitation Learning (IL)
We also explore how Reinforcement Learning (RL) compares to IL for humanness by comparing camera movement and trajectory data.
arXiv Detail & Related papers (2024-01-08T16:15:43Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Ordinal Regression for Difficulty Estimation of StepMania Levels [18.944506234623862]
We formalize and analyze the difficulty prediction task on StepMania levels as an ordinal regression (OR) task.
We evaluate many competitive OR and non-OR models, demonstrating that neural network-based models significantly outperform the state of the art.
We conclude with a user experiment showing our trained models' superiority over human labeling.
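The ordinal regression framing above can be illustrated with a cumulative-link (proportional-odds) style decision rule: a single latent difficulty score is cut into ordered classes by thresholds. The scores and thresholds below are illustrative, not the paper's trained model.

```python
import numpy as np

def ordinal_predict(score, thresholds):
    """Map a latent score to an ordinal class: the number of thresholds it exceeds."""
    return int(np.sum(score > np.asarray(thresholds)))

thresholds = [0.2, 0.5, 0.8]    # cut points separating 4 ordered difficulty classes
scores = [0.1, 0.3, 0.6, 0.95]  # latent difficulty scores for four levels
labels = [ordinal_predict(s, thresholds) for s in scores]
print(labels)
```

Unlike plain classification, this rule respects the ordering of difficulty labels: a higher latent score can never produce a lower class, which is the property ordinal regression models exploit.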
arXiv Detail & Related papers (2023-01-23T15:30:01Z) - Improving Deep Localized Level Analysis: How Game Logs Can Help [0.9645196221785693]
We present novel improvements to affect prediction by using a deep convolutional neural network (CNN) to predict player experience.
We test our approach on levels based on Super Mario Bros. (Infinite Mario Bros.) and Super Mario Bros.: The Lost Levels (Gwario)
arXiv Detail & Related papers (2022-12-07T00:05:16Z) - Mastering the Game of No-Press Diplomacy via Human-Regularized
Reinforcement Learning and Planning [95.78031053296513]
No-press Diplomacy is a complex strategy game involving both cooperation and competition.
We introduce a planning algorithm we call DiL-piKL that regularizes a reward-maximizing policy toward a human imitation-learned policy.
We show that DiL-piKL can be extended into a self-play reinforcement learning algorithm we call RL-DiL-piKL.
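The regularization idea behind piKL can be sketched as choosing a policy that maximizes expected reward while staying close, in KL divergence, to a human imitation policy. The closed form below, a softmax of action values anchored on the human policy, is the standard solution to this kind of KL-regularized objective; the actual DiL-piKL algorithm adds further machinery (per-player regularization strengths, planning, self-play), and all numbers here are illustrative.

```python
import numpy as np

def kl_regularized_policy(q_values, human_policy, lam):
    """pi(a) proportional to human_policy(a) * exp(Q(a) / lam)."""
    logits = np.log(np.asarray(human_policy)) + np.asarray(q_values) / lam
    p = np.exp(logits - logits.max())  # subtract max for numerical stability
    return p / p.sum()

q = np.array([1.0, 0.0, -1.0])    # illustrative action values
pi_h = np.array([0.1, 0.8, 0.1])  # illustrative human imitation policy

print(kl_regularized_policy(q, pi_h, lam=1.0))    # reward-tilted toward human play
print(kl_regularized_policy(q, pi_h, lam=100.0))  # nearly the human policy
```

The parameter `lam` interpolates between pure reward maximization (small `lam`) and pure human imitation (large `lam`), which is how such an agent can stay both strong and human-compatible.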
arXiv Detail & Related papers (2022-10-11T14:47:35Z) - Generating Game Levels of Diverse Behaviour Engagement [2.5739833468005595]
Experimental studies on Super Mario Bros. indicate that, using the same evaluation metrics, agents with different personas can generate levels suited to a particular persona.
This implies that, for simple games, using a game-playing agent with a specific player archetype as a level tester is probably all that is needed to generate levels of diverse behavioural engagement.
arXiv Detail & Related papers (2022-07-05T15:08:12Z) - Predicting Game Engagement and Difficulty Using AI Players [3.0501851690100277]
This paper presents a novel approach to automated playtesting for the prediction of human player behavior and experience.
It has previously been demonstrated that Deep Reinforcement Learning game-playing agents can predict both game difficulty and player engagement.
We improve this approach by enhancing DRL with Monte Carlo Tree Search (MCTS)
arXiv Detail & Related papers (2021-07-26T09:31:57Z) - From Motor Control to Team Play in Simulated Humanoid Football [56.86144022071756]
We train teams of physically simulated humanoid avatars to play football in a realistic virtual environment.
In a sequence of stages, players first learn to control a fully articulated body to perform realistic, human-like movements.
They then acquire mid-level football skills such as dribbling and shooting.
Finally, they develop awareness of others and play as a team, bridging the gap between low-level motor control at a timescale of milliseconds and coordinated team play over longer horizons.
arXiv Detail & Related papers (2021-05-25T20:17:10Z) - An Empirical Study on the Generalization Power of Neural Representations
Learned via Visual Guessing Games [79.23847247132345]
This work investigates how well an artificial agent can benefit from playing guessing games when later asked to perform on novel NLP downstream tasks such as Visual Question Answering (VQA)
We propose two ways to exploit playing guessing games: 1) a supervised learning scenario in which the agent learns to mimic successful guessing games and 2) a novel way for an agent to play by itself, called Self-play via Iterated Experience Learning (SPIEL)
arXiv Detail & Related papers (2021-01-31T10:30:48Z) - Multi-Agent Collaboration via Reward Attribution Decomposition [75.36911959491228]
We propose Collaborative Q-learning (CollaQ) that achieves state-of-the-art performance in the StarCraft multi-agent challenge.
CollaQ is evaluated on various StarCraft maps and shown to outperform existing state-of-the-art techniques.
arXiv Detail & Related papers (2020-10-16T17:42:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.