Predicting Game Difficulty and Churn Without Players
- URL: http://arxiv.org/abs/2008.12937v1
- Date: Sat, 29 Aug 2020 08:37:47 GMT
- Title: Predicting Game Difficulty and Churn Without Players
- Authors: Shaghayegh Roohi (1), Asko Relas (2), Jari Takatalo (2), Henri
Heiskanen (2), Perttu Hämäläinen (1) ((1) Aalto University, Espoo,
Finland, (2) Rovio Entertainment, Espoo, Finland)
- Abstract summary: We propose a novel simulation model that is able to predict the per-level churn and pass rates of Angry Birds Dream Blast.
Our work demonstrates that player behavior predictions produced by DRL gameplay can be significantly improved by even a very simple population-level simulation of individual player differences.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel simulation model that is able to predict the per-level
churn and pass rates of Angry Birds Dream Blast, a popular mobile free-to-play
game. Our primary contribution is to combine AI gameplay using Deep
Reinforcement Learning (DRL) with a simulation of how the player population
evolves over the levels. The AI players predict level difficulty, which is used
to drive a player population model with simulated skill, persistence, and
boredom. This allows us to model, e.g., how less persistent and skilled players
are more sensitive to high difficulty, and how such players churn early, which
makes the player population and the relation between difficulty and churn
evolve level by level. Our work demonstrates that player behavior predictions
produced by DRL gameplay can be significantly improved by even a very simple
population-level simulation of individual player differences, without requiring
costly retraining of agents or collecting new DRL gameplay data for each
simulated player.
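The abstract above describes the pipeline only in prose: AI-estimated per-level difficulty drives a population of simulated players whose skill and persistence determine who passes each level and who churns. The following is a minimal Python sketch of such a population-level simulation; the functional forms, parameter ranges, and the `simulate_population` helper are illustrative assumptions for this summary, not the paper's actual model.

```python
import random

def simulate_population(level_difficulty, n_players=10000, seed=0):
    """Toy population-level churn simulation (illustrative only).

    Each simulated player draws a random skill and persistence.
    Per-level difficulty (here a given list; in the paper, estimated
    by DRL agents) sets pass and churn probabilities. Less skilled,
    less persistent players churn earlier, so the surviving
    population shifts level by level.
    """
    rng = random.Random(seed)
    players = [{"skill": rng.random(), "persistence": rng.random()}
               for _ in range(n_players)]
    stats = []
    for level, difficulty in enumerate(level_difficulty, start=1):
        if not players:
            break
        survivors, passes, churned = [], 0, 0
        for p in players:
            # Higher skill offsets difficulty; clamp to a valid probability.
            p_pass = min(1.0, max(0.0, 1.0 - difficulty + 0.5 * p["skill"]))
            passed = rng.random() < p_pass
            if passed:
                passes += 1
            # Failing a hard level tests persistence: low-persistence
            # players are more likely to quit (churn) after failure.
            p_churn = 0.0 if passed else difficulty * (1.0 - p["persistence"])
            if rng.random() < p_churn:
                churned += 1
            else:
                survivors.append(p)
        stats.append({
            "level": level,
            "pass_rate": passes / len(players),
            "churn_rate": churned / len(players),
        })
        players = survivors
    return stats

# Three levels of increasing AI-estimated difficulty.
stats = simulate_population([0.2, 0.5, 0.8])
for s in stats:
    print(s)
```

Because churners leave the pool, the per-level statistics reflect an evolving population, which is the paper's key point: the relation between difficulty and churn changes level by level without retraining any agents.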
Related papers
- Personalized Dynamic Difficulty Adjustment -- Imitation Learning Meets Reinforcement Learning [44.99833362998488]
In this work, we explore balancing game difficulty using machine learning-based agents to challenge players based on their current behavior.
This is achieved by a combination of two agents, in which one learns to imitate the player, while the second is trained to beat the first.
In our demo, we investigate the proposed framework for personalized dynamic difficulty adjustment of AI agents in the context of the fighting game AI competition.
arXiv Detail & Related papers (2024-08-13T11:24:12Z)
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- Behavioural Cloning in VizDoom [1.4999444543328293]
This paper describes methods for training autonomous agents to play the game "Doom 2" through Imitation Learning (IL).
We also explore how Reinforcement Learning (RL) compares to IL for humanness by comparing camera movement and trajectory data.
arXiv Detail & Related papers (2024-01-08T16:15:43Z)
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Automated Play-Testing Through RL Based Human-Like Play-Styles Generation [0.0]
Reinforcement Learning is a promising answer to the need of automating video game testing.
We present CARMI, an agent with Relative Metrics as Input, able to emulate players' play-styles, even on previously unseen levels.
arXiv Detail & Related papers (2022-11-29T14:17:20Z)
- Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning [95.78031053296513]
No-press Diplomacy is a complex strategy game involving both cooperation and competition.
We introduce a planning algorithm we call DiL-piKL that regularizes a reward-maximizing policy toward a human imitation-learned policy.
We show that DiL-piKL can be extended into a self-play reinforcement learning algorithm we call RL-DiL-piKL.
arXiv Detail & Related papers (2022-10-11T14:47:35Z)
- Generative Personas That Behave and Experience Like Humans [3.611888922173257]
Generative AI agents attempt to imitate particular playing behaviors represented as rules, rewards, or human demonstrations.
We extend the notion of behavioral procedural personas to cater for player experience, thus examining generative agents that can both behave and experience their game as humans would.
Our findings suggest that the generated agents exhibit distinctive play styles and experience responses of the human personas they were designed to imitate.
arXiv Detail & Related papers (2022-08-26T12:04:53Z)
- Predicting Game Engagement and Difficulty Using AI Players [3.0501851690100277]
This paper presents a novel approach to automated playtesting for the prediction of human player behavior and experience.
It has previously been demonstrated that Deep Reinforcement Learning game-playing agents can predict both game difficulty and player engagement.
We improve this approach by enhancing DRL with Monte Carlo Tree Search (MCTS).
arXiv Detail & Related papers (2021-07-26T09:31:57Z)
- Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks [48.5733173329785]
We present Neural MMO, a massively multiagent game environment inspired by MMOs.
We discuss our progress on two more general challenges in multiagent systems engineering for AI research: distributed infrastructure and game IO.
arXiv Detail & Related papers (2020-01-31T18:50:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.