Personalized Game Difficulty Prediction Using Factorization Machines
- URL: http://arxiv.org/abs/2209.13495v1
- Date: Tue, 6 Sep 2022 08:03:46 GMT
- Title: Personalized Game Difficulty Prediction Using Factorization Machines
- Authors: Jeppe Theiss Kristensen, Christian Guckelsberger, Paolo Burelli, Perttu Hämäläinen
- Abstract summary: We contribute a new approach for personalized difficulty estimation of game levels, borrowing methods from content recommendation.
We are able to predict difficulty as the number of attempts a player requires to pass future game levels, based on observed attempt counts from earlier levels and levels played by others.
Our results suggest that FMs are a promising tool enabling game designers to both optimize player experience and learn more about their players and the game.
- Score: 0.9558392439655011
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The accurate and personalized estimation of task difficulty provides many
opportunities for optimizing user experience. However, user diversity makes
such difficulty estimation hard, in that empirical measurements from some user
sample do not necessarily generalize to others. In this paper, we contribute a
new approach for personalized difficulty estimation of game levels, borrowing
methods from content recommendation. Using factorization machines (FM) on a
large dataset from a commercial puzzle game, we are able to predict difficulty
as the number of attempts a player requires to pass future game levels, based
on observed attempt counts from earlier levels and levels played by others. In
addition to performance and scalability, FMs offer the benefit that the learned
latent variable model can be used to study the characteristics of both players
and game levels that contribute to difficulty. We compare the approach to a
simple non-personalized baseline and a personalized prediction using Random
Forests. Our results suggest that FMs are a promising tool enabling game
designers to both optimize player experience and learn more about their players
and the game.
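As a rough illustration of the setup described in the abstract, the sketch below fits a factorization machine on one-hot (player, level) pairs to predict attempt counts. This is a minimal reconstruction under stated assumptions, not the authors' implementation: the feature encoding, latent dimension, log-transform of attempt counts, and all hyperparameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed problem sizes and latent dimension (not taken from the paper).
n_players, n_levels, k = 1000, 200, 8
n_features = n_players + n_levels                 # one-hot player block + one-hot level block
w0 = 0.0                                          # global bias
w = np.zeros(n_features)                          # linear weights
V = 0.01 * rng.standard_normal((n_features, k))   # latent factor matrix

def predict(player, level):
    # FM prediction y = w0 + w_i + w_j + <V_i, V_j> for the two active
    # one-hot features i (player) and j (level); with exactly two active
    # features the pairwise interaction reduces to a single dot product.
    i, j = player, n_players + level
    return w0 + w[i] + w[j] + V[i] @ V[j]

def sgd_step(player, level, attempts, lr=0.01, reg=1e-4):
    # One SGD update on squared error against log(attempts); the log
    # transform of the heavy-tailed attempt counts is an assumption.
    global w0
    i, j = player, n_players + level
    err = predict(player, level) - np.log(attempts)
    w0 -= lr * err
    w[i] -= lr * (err + reg * w[i])
    w[j] -= lr * (err + reg * w[j])
    vi, vj = V[i].copy(), V[j].copy()
    V[i] -= lr * (err * vj + reg * vi)
    V[j] -= lr * (err * vi + reg * vj)

# Toy usage on synthetic (player, level, attempts) triples.
data = [(rng.integers(n_players), rng.integers(n_levels), rng.integers(1, 30))
        for _ in range(10_000)]
for _ in range(5):
    for p, l, a in data:
        sgd_step(p, l, a)

# Predicted number of attempts for player 0 on level 0.
print(np.exp(predict(0, 0)))
```

The same latent vectors V[i] can afterwards be inspected to study which player and level characteristics drive difficulty, which is the interpretability benefit the abstract highlights.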
Related papers
- Difficulty Modelling in Mobile Puzzle Games: An Empirical Study on Different Methods to Combine Player Analytics and Simulated Data [0.0]
A common practice consists of creating metrics from data collected through player interactions with the content.
This allows for estimation only after the content is released and does not consider the characteristics of potential future players.
In this article, we present a number of potential solutions for the estimation of difficulty under such conditions.
arXiv Detail & Related papers (2024-01-30T20:51:42Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Modeling Player Personality Factors from In-Game Behavior and Affective Expression [17.01727448431269]
We explore possibilities to predict a series of player personality questionnaire metrics from recorded in-game behavior.
We predict a wide variety of personality metrics from seven established questionnaires across 62 players over 60 minutes of gameplay in a customized version of the role-playing game Fallout: New Vegas.
arXiv Detail & Related papers (2023-08-27T22:59:08Z)
- GameEval: Evaluating LLMs on Conversational Games [93.40433639746331]
We propose GameEval, a novel approach to evaluating large language models (LLMs).
GameEval treats LLMs as game players and assigns them distinct roles with specific goals achieved by launching conversations of various forms.
We show that GameEval can effectively differentiate the capabilities of various LLMs, providing a comprehensive assessment of their integrated abilities to solve complex problems.
arXiv Detail & Related papers (2023-08-19T14:33:40Z)
- Ordinal Regression for Difficulty Estimation of StepMania Levels [18.944506234623862]
We formalize and analyze the difficulty prediction task on StepMania levels as an ordinal regression (OR) task.
We evaluate many competitive OR and non-OR models, demonstrating that neural network-based models significantly outperform the state of the art.
We conclude with a user experiment showing our trained models' superiority over human labeling.
arXiv Detail & Related papers (2023-01-23T15:30:01Z)
- JECC: Commonsense Reasoning Tasks Derived from Interactive Fictions [75.42526766746515]
We propose a new commonsense reasoning dataset based on human players' Interactive Fiction (IF) gameplay walkthroughs.
Our dataset focuses on the assessment of functional commonsense knowledge rules rather than factual knowledge.
Experiments show that the introduced dataset is challenging for previous machine reading models as well as for new large language models.
arXiv Detail & Related papers (2022-10-18T19:20:53Z)
- CommonsenseQA 2.0: Exposing the Limits of AI through Gamification [126.85096257968414]
We construct benchmarks that test the abilities of modern natural language understanding models.
In this work, we propose gamification as a framework for data construction.
arXiv Detail & Related papers (2022-01-14T06:49:15Z)
- Statistical Modelling of Level Difficulty in Puzzle Games [0.0]
We formalise a model of level difficulty for puzzle games that goes beyond the classical probability of success.
The model is fitted and evaluated on a dataset collected from the game Lily's Garden by Tactile Games.
arXiv Detail & Related papers (2021-07-05T13:47:28Z)
- DL-DDA -- Deep Learning based Dynamic Difficulty Adjustment with UX and Gameplay constraints [0.8594140167290096]
We propose a method that automatically optimizes user experience while taking into consideration other players and macro constraints imposed by the game.
We provide empirical results of an internal experiment that was done on $200,000$ players and was found to outperform the corresponding manual heuristics crafted by game design experts.
arXiv Detail & Related papers (2021-06-06T09:47:15Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so that a quality-diversity algorithm (MAP-Elites) can be used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
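The last entry quantifies an explanation's effect via conditional mutual information; as a rough sketch (the symbols $E$ for the explanation, $\hat{Y}$ for the prediction, and $U$ for the user's knowledge are assumed notation, not necessarily the paper's), the quantity takes the standard form

$$I(E;\hat{Y}\mid U)=\mathbb{E}\!\left[\log\frac{p(E,\hat{Y}\mid U)}{p(E\mid U)\,p(\hat{Y}\mid U)}\right],$$

i.e. it is larger when the explanation carries more information about the prediction beyond what the user already knows.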