Persona-driven Dominant/Submissive Map (PDSM) Generation for Tutorials
- URL: http://arxiv.org/abs/2204.05217v1
- Date: Mon, 11 Apr 2022 16:01:48 GMT
- Title: Persona-driven Dominant/Submissive Map (PDSM) Generation for Tutorials
- Authors: Michael Cerny Green, Ahmed Khalifa, M Charity, and Julian Togelius
- Abstract summary: We present a method for automated persona-driven video game tutorial level generation.
We use procedural personas to calculate the behavioral characteristics of levels which are evolved.
Within this work, we show that the generated maps can strongly encourage or discourage different persona-like behaviors.
- Score: 5.791285538179053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a method for automated persona-driven video game
tutorial level generation. Tutorial levels are scenarios in which the player
can explore and discover different rules and game mechanics. Procedural
personas can guide generators to create content which encourages or discourages
certain playstyle behaviors. In this system, we use procedural personas to
calculate the behavioral characteristics of levels which are evolved using the
quality-diversity algorithm known as Constrained MAP-Elites. An evolved map's
quality is determined by its simplicity: the simpler it is, the better it is.
Within this work, we show that the generated maps can strongly encourage or
discourage different persona-like behaviors and range from simple solutions to
complex puzzle-levels, making them perfect candidates for a tutorial generative
system.
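The evolutionary loop described in the abstract can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the helper names (`persona_features`, `simplicity`, `is_feasible`) are assumptions, and true Constrained MAP-Elites additionally maintains an infeasible population rather than discarding infeasible candidates outright.

```python
import random

def evolve_map_elites(random_level, mutate, is_feasible, simplicity,
                      persona_features, iterations=1000):
    """Minimal MAP-Elites sketch with a feasibility constraint.

    Levels are binned by persona-derived behavior features; each bin
    keeps the simplest feasible level found so far.
    """
    archive = {}  # behavior-feature bin -> (fitness, level)
    for _ in range(iterations):
        if archive:
            # Select a random elite and mutate it.
            _, parent = random.choice(list(archive.values()))
            candidate = mutate(parent)
        else:
            candidate = random_level()
        if not is_feasible(candidate):
            # Simplified: Constrained MAP-Elites would keep a separate
            # infeasible population instead of discarding the candidate.
            continue
        bin_key = persona_features(candidate)  # e.g. per-persona behavior scores
        fitness = simplicity(candidate)        # simpler levels score higher
        if bin_key not in archive or fitness > archive[bin_key][0]:
            archive[bin_key] = (fitness, candidate)
    return archive
```

The archive then holds one elite per behavior bin, giving the diversity of persona behaviors the paper exploits for tutorial generation.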
Related papers
- Generative Personas That Behave and Experience Like Humans [3.611888922173257]
Generative AI agents attempt to imitate particular playing behaviors represented as rules, rewards, or human demonstrations.
We extend the notion of behavioral procedural personas to cater for player experience, thus examining generative agents that can both behave and experience their game as humans would.
Our findings suggest that the generated agents exhibit distinctive play styles and experience responses of the human personas they were designed to imitate.
arXiv Detail & Related papers (2022-08-26T12:04:53Z)
- CCPT: Automatic Gameplay Testing and Validation with Curiosity-Conditioned Proximal Trajectories [65.35714948506032]
The Curiosity-Conditioned Proximal Trajectories (CCPT) method combines curiosity and imitation learning to train agents to explore.
We show how CCPT can explore complex environments, discover gameplay issues and design oversights in the process, and recognize and highlight them directly to game designers.
arXiv Detail & Related papers (2022-02-21T09:08:33Z) - Generating Lode Runner Levels by Learning Player Paths with LSTMs [2.199085230546853]
In this paper, we attempt to address these problems by learning to generate human-like paths, and then generating levels based on these paths.
We extract player path data from gameplay video, train an LSTM to generate new paths based on this data, and then generate game levels based on this path data.
We demonstrate that our approach leads to more coherent levels for the game Lode Runner in comparison to an existing PCGML approach.
arXiv Detail & Related papers (2021-07-27T00:48:30Z) - Policy Fusion for Adaptive and Customizable Reinforcement Learning
Agents [137.86426963572214]
We show how to combine distinct behavioral policies to obtain a meaningful "fusion" policy.
We propose four different policy fusion methods for combining pre-trained policies.
We provide several practical examples and use-cases for how these methods are indeed useful for video game production and designers.
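The paper's four fusion methods are not detailed in this summary. As an illustration only, one of the simplest conceivable fusions is a fixed-weight mixture of the action distributions of pre-trained policies; all names below are assumptions, not the paper's API.

```python
def fuse_policies(policies, weights):
    """Return a policy mixing the action distributions of pre-trained
    policies by fixed weights (one simple fusion scheme; the paper's
    four specific methods are not reproduced here).

    Each policy maps a state to a dict {action: probability}.
    """
    total = sum(weights)
    def fused(state):
        mixed = {}
        for policy, w in zip(policies, weights):
            for action, p in policy(state).items():
                # Accumulate each policy's probability mass, normalized
                # by the total weight so the result is a distribution.
                mixed[action] = mixed.get(action, 0.0) + (w / total) * p
        return mixed
    return fused
```

With equal weights, two policies that prefer opposite actions blend into a balanced distribution; shifting the weights lets a designer bias the fused agent toward one behavioral style.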
arXiv Detail & Related papers (2021-04-21T16:08:44Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new machine learning model by imitation based on the linguistic description of complex phenomena.
The method can be a good alternative to design and implement the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z)
- Learning Propagation Rules for Attribution Map Generation [146.71503336770886]
We propose a dedicated method to generate attribution maps that allow us to learn the propagation rules automatically.
Specifically, we introduce a learnable plugin module, which enables adaptive propagation rules for each pixel.
The introduced learnable module can be trained under any auto-grad framework with higher-order differential support.
arXiv Detail & Related papers (2020-10-14T16:23:58Z)
- Illuminating Mario Scenes in the Latent Space of a Generative Adversarial Network [11.055580854275474]
We show how designers may specify gameplay measures to our system and extract high-quality (playable) levels with a diverse range of level mechanics.
An online user study shows how the different mechanics of the automatically generated levels affect subjective ratings of their perceived difficulty and appearance.
arXiv Detail & Related papers (2020-07-11T03:38:06Z)
- Finding Game Levels with the Right Difficulty in a Few Trials through Intelligent Trial-and-Error [16.297059109611798]
Methods for dynamic difficulty adjustment allow games to be tailored to particular players to maximize their engagement.
Current methods often only modify a limited set of game features such as the difficulty of the opponents, or the availability of resources.
This paper presents a method that can generate and search for complete levels with a specific target difficulty in only a few trials.
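The summary does not spell out the search procedure, so as a rough sketch only: a few-trial search toward a target difficulty can be illustrated with naive hill climbing over level parameters. All names here are assumptions, and the paper's Intelligent Trial-and-Error approach is more sophisticated than this stand-in.

```python
def search_target_difficulty(random_params, mutate, generate, measure,
                             target, trials=10):
    """Few-trial search for level parameters whose generated level has a
    difficulty near `target`. A naive hill-climbing stand-in, not the
    paper's Intelligent Trial-and-Error procedure.
    """
    params = random_params()
    best_gap = abs(measure(generate(params)) - target)
    for _ in range(trials - 1):
        candidate = mutate(params)
        gap = abs(measure(generate(candidate)) - target)
        if gap < best_gap:
            # Keep the candidate only if its difficulty is closer to target.
            params, best_gap = candidate, gap
    return params, best_gap
```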
arXiv Detail & Related papers (2020-05-15T17:48:18Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
- Learning to Generate Levels From Nothing [5.2508303190856624]
We propose Generative Playing Networks, a framework that designs levels for itself to play.
The algorithm is built in two parts: an agent that learns to play game levels, and a generator that learns the distribution of playable levels.
We demonstrate the capability of this framework by training an agent and level generator for a 2D dungeon crawler game.
arXiv Detail & Related papers (2020-02-12T22:07:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.