Finding Game Levels with the Right Difficulty in a Few Trials through
Intelligent Trial-and-Error
- URL: http://arxiv.org/abs/2005.07677v2
- Date: Thu, 25 Jun 2020 12:55:08 GMT
- Title: Finding Game Levels with the Right Difficulty in a Few Trials through
Intelligent Trial-and-Error
- Authors: Miguel González-Duque, Rasmus Berg Palm, David Ha, Sebastian Risi
- Abstract summary: Methods for dynamic difficulty adjustment allow games to be tailored to particular players to maximize their engagement.
Current methods often only modify a limited set of game features such as the difficulty of the opponents, or the availability of resources.
This paper presents a method that can generate and search for complete levels with a specific target difficulty in only a few trials.
- Score: 16.297059109611798
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Methods for dynamic difficulty adjustment allow games to be tailored to
particular players to maximize their engagement. However, current methods often
only modify a limited set of game features such as the difficulty of the
opponents, or the availability of resources. Other approaches, such as
experience-driven Procedural Content Generation (PCG), can generate complete
levels with desired properties such as levels that are neither too hard nor too
easy, but require many iterations. This paper presents a method that can
generate and search for complete levels with a specific target difficulty in
only a few trials. This advance is enabled by an Intelligent
Trial-and-Error algorithm, originally developed to allow robots to adapt
quickly. Our algorithm first creates a large variety of different levels that
vary across predefined dimensions such as leniency or map coverage. The
performance of an AI playing agent on these maps gives a proxy for how
difficult the level would be for another AI agent (e.g. one that employs Monte
Carlo Tree Search instead of Greedy Tree Search); using this information, a
Bayesian Optimization procedure is deployed, updating the difficulty prior over
the generated levels to reflect the ability of the agent. The approach can reliably find
levels with a specific target difficulty for a variety of planning agents in
only a few trials, while maintaining an understanding of their skill landscape.
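The core loop the abstract describes — pick a promising level from a pre-generated repertoire, let the agent play it, and update the difficulty estimate — can be sketched as follows. This is a minimal illustration, assuming per-level Gaussian difficulty estimates and a UCB-style acquisition; the paper's actual method maintains a Gaussian-process posterior over the behavioral dimensions (e.g. leniency, map coverage), which this simplification omits. All names and numeric parameters here are illustrative, not the authors' implementation.

```python
def intelligent_trial_and_error(prior, target, play, trials=5, obs_var=0.05, kappa=0.2):
    """Find a level whose difficulty for the current agent is close to `target`.

    prior : dict level_id -> prior difficulty in [0, 1], estimated offline
            from a different (generator) agent's performance.
    play  : callback level_id -> observed difficulty for the current agent.
    """
    # Posterior over each level's difficulty: (mean, variance).
    post = {lid: (mu, 0.1) for lid, mu in prior.items()}
    for _ in range(trials):
        # Acquisition: prefer levels whose estimate is near the target, with
        # an exploration bonus for uncertain estimates (UCB-style).
        lid = min(post, key=lambda l: abs(post[l][0] - target) - kappa * post[l][1] ** 0.5)
        obs = play(lid)  # one playthrough by the actual agent
        mu, var = post[lid]
        # Conjugate Gaussian update of this level's difficulty estimate.
        k = var / (var + obs_var)
        post[lid] = (mu + k * (obs - mu), (1 - k) * var)
    return min(post, key=lambda l: abs(post[l][0] - target))
```

For example, if the prior (from the generator agent) underestimates how hard each level is for the current agent, a few playthroughs shift the estimates and the level closest to the target difficulty is returned.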
Related papers
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than using either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- Monte-Carlo Tree Search for Multi-Agent Pathfinding: Preliminary Results [60.4817465598352]
We introduce an original variant of Monte-Carlo Tree Search (MCTS) tailored to multi-agent pathfinding.
Specifically, we use individual paths to assist the agents with goal-reaching behavior.
We also use a dedicated decomposition technique to reduce the branching factor of the tree search procedure.
arXiv Detail & Related papers (2023-07-25T12:33:53Z)
- Personalized Game Difficulty Prediction Using Factorization Machines [0.9558392439655011]
We contribute a new approach for personalized difficulty estimation of game levels, borrowing methods from content recommendation.
We are able to predict difficulty as the number of attempts a player requires to pass future game levels, based on observed attempt counts from earlier levels and levels played by others.
Our results suggest that factorization machines (FMs) are a promising tool enabling game designers to both optimize player experience and learn more about their players and the game.
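The prediction step of such a factorization machine can be sketched as below. The one-hot feature layout (a player index followed by a level index) and all names are illustrative assumptions, not the paper's exact setup; the point is that unseen (player, level) pairs are scored through learned per-feature embeddings, as in content recommendation.

```python
def fm_predict(w0, w, v, features):
    """Second-order factorization machine:
    y = w0 + sum_i w[i]*x_i + sum_{i<j} <v[i], v[j]> * x_i * x_j.

    features: indices of the active (one-hot) features, e.g.
              [player_id, n_players + level_id]  # hypothetical layout
    v: per-feature latent vectors; the pairwise term lets the model
       generalize to player/level combinations never seen in training.
    """
    linear = w0 + sum(w[i] for i in features)
    pairwise = 0.0
    for a in range(len(features)):
        for b in range(a + 1, len(features)):
            i, j = features[a], features[b]
            pairwise += sum(vi * vj for vi, vj in zip(v[i], v[j]))
    return linear + pairwise
```

With the predicted value interpreted as the number of attempts a player needs, a designer could rank upcoming levels by expected difficulty for that player.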
arXiv Detail & Related papers (2022-09-06T08:03:46Z)
- Persona-driven Dominant/Submissive Map (PDSM) Generation for Tutorials [5.791285538179053]
We present a method for automated persona-driven video game tutorial level generation.
We use procedural personas to calculate the behavioral characteristics of levels which are evolved.
Within this work, we show that the generated maps can strongly encourage or discourage different persona-like behaviors.
arXiv Detail & Related papers (2022-04-11T16:01:48Z)
- Towards Objective Metrics for Procedurally Generated Video Game Levels [2.320417845168326]
We introduce two simulation-based evaluation metrics to measure the diversity and difficulty of generated levels.
We demonstrate that our diversity metric is more robust to changes in level size and representation than current methods.
The difficulty metric shows promise, as it correlates with existing estimates of difficulty in one of the tested domains, but it does face some challenges in the other domain.
arXiv Detail & Related papers (2022-01-25T14:13:50Z)
- CommonsenseQA 2.0: Exposing the Limits of AI through Gamification [126.85096257968414]
We construct benchmarks that test the abilities of modern natural language understanding models.
In this work, we propose gamification as a framework for data construction.
arXiv Detail & Related papers (2022-01-14T06:49:15Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes)
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
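The MAP-Elites loop this summary refers to can be sketched minimally as follows, under simplifying assumptions: a scalar genome, a single behavior descriptor standing in for play-style, and illustrative fitness and mutation operators (the actual work uses richer parameterizations of the tree-search agent).

```python
import random

def map_elites(evaluate, mutate, seed_genomes, bins=5, iters=200, rng=None):
    """Minimal MAP-Elites: keep the fittest genome found in each
    behavior-descriptor cell, so the archive spans distinct play-styles.

    evaluate: genome -> (fitness, descriptor in [0, 1]) -- e.g. win rate
              and an aggression measure; both are assumptions here.
    """
    rng = rng or random.Random(0)
    archive = {}  # cell index -> (fitness, genome)

    def try_insert(g):
        fit, desc = evaluate(g)
        cell = min(int(desc * bins), bins - 1)
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, g)

    for g in seed_genomes:
        try_insert(g)
    for _ in range(iters):
        # Select a random elite, mutate it, and try to insert the offspring.
        _, parent = archive[rng.choice(list(archive))]
        try_insert(mutate(parent, rng))
    return archive
```

Because every cell keeps only its best occupant, the result is a set of competitive agents that are nonetheless behaviorally diverse, which is the quality-diversity property the summary describes.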
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Learning to Play Imperfect-Information Games by Imitating an Oracle Planner [77.67437357688316]
We consider learning to play multiplayer imperfect-information games with simultaneous moves and large state-action spaces.
Our approach is based on model-based planning.
We show that the planner is able to discover efficient playing strategies in the games of Clash Royale and Pommerman.
arXiv Detail & Related papers (2020-12-22T17:29:57Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.