Adapting Procedural Content Generation to Player Personas Through
Evolution
- URL: http://arxiv.org/abs/2112.04406v1
- Date: Tue, 7 Dec 2021 16:26:33 GMT
- Title: Adapting Procedural Content Generation to Player Personas Through
Evolution
- Authors: Pedro M. Fernandes, Jonathan Jørgensen, Niels N. T. G. Poldervaart
- Abstract summary: We propose an architecture using persona agents and experience metrics, which enables evolving procedurally generated levels tailored for particular player personas.
Using our game, "Grave Rave", we demonstrate that this approach successfully adapts to four rule-based persona agents over three different experience metrics.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatically adapting game content to players opens new doors for game
development. In this paper we propose an architecture using persona agents and
experience metrics, which enables evolving procedurally generated levels
tailored for particular player personas. Using our game, "Grave Rave", we
demonstrate that this approach successfully adapts to four rule-based persona
agents over three different experience metrics. Furthermore, the adaptation is
shown to be specific in nature, meaning that the levels are persona-conscious,
and not just general optimizations with regard to the selected metric.
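As a rough illustration of the kind of evolution loop the abstract describes, the sketch below evolves a toy level encoding against a rule-based persona agent and a simple experience metric. This is not the authors' implementation: the level representation, the persona names, and the functions persona_playthrough and experience_metric are hypothetical placeholders standing in for the game-specific components of "Grave Rave".

```python
# Minimal sketch (not the paper's code) of persona-driven level evolution:
# levels are selected so that a rule-based persona agent's simulated playthrough
# scores well on a chosen experience metric. All names are illustrative.
import random

LEVEL_LEN = 20      # toy level: a sequence of tile IDs 0..3
POP_SIZE = 16
GENERATIONS = 50

def random_level():
    return [random.randint(0, 3) for _ in range(LEVEL_LEN)]

def mutate(level, rate=0.1):
    # Point mutation on the level encoding.
    return [random.randint(0, 3) if random.random() < rate else t for t in level]

def persona_playthrough(level, persona):
    """Stand-in for simulating a rule-based persona agent on a level.
    Returns a per-tile trace of events this persona engages with."""
    preferred = {"monster_killer": 1, "treasure_collector": 2,
                 "speedrunner": 0, "completionist": 3}[persona]
    return [1 if tile == preferred else 0 for tile in level]

def experience_metric(trace):
    """Toy experience metric: fraction of the playthrough spent on engaging
    events (a real metric would capture richer signals, e.g. pacing)."""
    return sum(trace) / len(trace)

def fitness(level, persona):
    return experience_metric(persona_playthrough(level, persona))

def evolve(persona):
    # Simple truncation-selection evolutionary loop.
    population = [random_level() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=lambda lvl: fitness(lvl, persona), reverse=True)
        parents = population[: POP_SIZE // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=lambda lvl: fitness(lvl, persona))

if __name__ == "__main__":
    best = evolve("treasure_collector")
    print("best level:", best)
```

In the paper's terms, the specificity claim would correspond to checking that a level evolved for one persona scores lower for the other personas; the sketch above omits that cross-persona comparison.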
Related papers
- Evolutionary Tabletop Game Design: A Case Study in the Risk Game [0.1474723404975345]
This work proposes an extension of the approach for tabletop games, evaluating the process by generating variants of Risk.
We achieved this using a genetic algorithm to evolve the chosen parameters, as well as a rules-based agent to test the games.
Results show the creation of new variations of the original game with smaller maps, resulting in shorter matches.
arXiv Detail & Related papers (2023-10-30T20:53:26Z) - Personalized Game Difficulty Prediction Using Factorization Machines [0.9558392439655011]
We contribute a new approach for personalized difficulty estimation of game levels, borrowing methods from content recommendation.
We are able to predict difficulty as the number of attempts a player requires to pass future game levels, based on observed attempt counts from earlier levels and levels played by others.
Our results suggest that FMs are a promising tool enabling game designers to both optimize player experience and learn more about their players and the game.
arXiv Detail & Related papers (2022-09-06T08:03:46Z) - Generating Game Levels of Diverse Behaviour Engagement [2.5739833468005595]
Experimental studies on Super Mario Bros. indicate that using the same evaluation metrics but agents with different personas can generate levels tailored to particular personas.
This implies that, for simple games, using a game-playing agent with a specific player archetype as a level tester is probably all that is needed to generate levels of diverse behaviour engagement.
arXiv Detail & Related papers (2022-07-05T15:08:12Z) - Multi-Game Decision Transformers [49.257185338595434]
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z) - Persona-driven Dominant/Submissive Map (PDSM) Generation for Tutorials [5.791285538179053]
We present a method for automated persona-driven video game tutorial level generation.
We use procedural personas to calculate the behavioral characteristics of the levels being evolved.
Within this work, we show that the generated maps can strongly encourage or discourage different persona-like behaviors.
arXiv Detail & Related papers (2022-04-11T16:01:48Z) - Coach-Player Multi-Agent Reinforcement Learning for Dynamic Team
Composition [88.26752130107259]
In real-world multi-agent systems, agents with different capabilities may join or leave without altering the team's overarching goals.
We propose COPA, a coach-player framework to tackle this problem.
We 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players.
arXiv Detail & Related papers (2021-05-18T17:27:37Z) - Policy Fusion for Adaptive and Customizable Reinforcement Learning
Agents [137.86426963572214]
We show how to combine distinct behavioral policies to obtain a meaningful "fusion" policy.
We propose four different policy fusion methods for combining pre-trained policies.
We provide several practical examples and use-cases for how these methods are indeed useful for video game production and designers.
arXiv Detail & Related papers (2021-04-21T16:08:44Z) - Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z) - Deep Policy Networks for NPC Behaviors that Adapt to Changing Design
Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z) - Illuminating Mario Scenes in the Latent Space of a Generative
Adversarial Network [11.055580854275474]
We show how designers may specify gameplay measures to our system and extract high-quality (playable) levels with a diverse range of level mechanics.
An online user study shows how the different mechanics of the automatically generated levels affect subjective ratings of their perceived difficulty and appearance.
arXiv Detail & Related papers (2020-07-11T03:38:06Z) - Learning from Learners: Adapting Reinforcement Learning Agents to be
Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.