Mixed-Initiative Level Design with RL Brush
- URL: http://arxiv.org/abs/2008.02778v3
- Date: Thu, 25 Feb 2021 20:18:28 GMT
- Title: Mixed-Initiative Level Design with RL Brush
- Authors: Omar Delarosa, Hang Dong, Mindy Ruan, Ahmed Khalifa, Julian Togelius
- Abstract summary: This paper introduces RL Brush, a level-editing tool for tile-based games designed for mixed-initiative co-creation.
The tool uses reinforcement-learning-based models to augment manual human level-design through the addition of AI-generated suggestions.
- Score: 8.979403815167178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces RL Brush, a level-editing tool for tile-based games
designed for mixed-initiative co-creation. The tool uses
reinforcement-learning-based models to augment manual human level-design
through the addition of AI-generated suggestions. Here, we apply RL Brush to
designing levels for the classic puzzle game Sokoban. We put the tool online
and tested it in 39 different sessions. The results show that users who engaged
with the AI suggestions stayed longer, and the levels they created were on
average more playable and more complex than those made without the suggestions.
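A minimal sketch of the mixed-initiative loop described above, assuming a hypothetical `suggest_edit` stand-in for the tool's RL-based suggestion models (this is not RL Brush's actual code):

```python
# Minimal sketch of a mixed-initiative editing loop: a human edit is applied to
# a Sokoban tile grid, then an RL model proposes a follow-up edit the user may
# accept or reject. `suggest_edit` is a hypothetical placeholder policy.
import numpy as np

EMPTY, WALL, BOX, GOAL, PLAYER = range(5)

def suggest_edit(level: np.ndarray) -> tuple[int, int, int]:
    """Placeholder policy: propose (row, col, new_tile) for one tile change."""
    r, c = np.unravel_index(np.random.randint(level.size), level.shape)
    return int(r), int(c), int(np.random.randint(5))

def mixed_initiative_step(level, human_edit, accept_suggestion):
    r, c, tile = human_edit                  # apply the human's edit first
    level[r, c] = tile
    sr, sc, stile = suggest_edit(level)      # AI proposes a complementary edit
    if accept_suggestion(level, (sr, sc, stile)):
        level[sr, sc] = stile                # user keeps the suggestion
    return level

level = np.zeros((8, 8), dtype=int)
level = mixed_initiative_step(level, (3, 3, BOX), lambda lvl, s: True)
print(level)
```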
Related papers
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
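As a hedged illustration of the offline-RL setting the benchmark targets, the sketch below fits a policy to a fixed log of transitions via behaviour cloning; the data and dimensions are placeholders, not the benchmark's actual format:

```python
# Offline RL sketch: learn purely from a fixed dataset of (observation, action)
# pairs via behaviour cloning, with no environment interaction at all.
import torch
import torch.nn as nn

obs_dim, act_dim, n = 16, 2, 4096
dataset = {                                   # stand-in for logged driving data
    "obs": torch.randn(n, obs_dim),
    "act": torch.randn(n, act_dim),
}

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for _ in range(100):
    idx = torch.randint(0, n, (256,))
    loss = nn.functional.mse_loss(policy(dataset["obs"][idx]), dataset["act"][idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
```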
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than either approach used alone.
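A rough sketch of the hybrid idea, with hypothetical scripted and learned policies standing in for the paper's system:

```python
# Hybrid control sketch: scripted rules handle clear-cut situations, and a
# learned policy is consulted otherwise. All names here are illustrative.
import random

def scripted_policy(state):
    if state["gold"] >= state["tower_cost"]:      # simple hand-written rule
        return "build_tower"
    return None                                   # defer to the learned policy

def learned_policy(state):
    return random.choice(["upgrade", "save", "build_tower"])  # stand-in model

def hybrid_policy(state):
    action = scripted_policy(state)
    return action if action is not None else learned_policy(state)

print(hybrid_policy({"gold": 50, "tower_cost": 100}))
```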
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- Lode Enhancer: Level Co-creation Through Scaling [6.739485960737326]
We explore AI-powered upscaling as a design assistance tool in the context of creating 2D game levels.
Deep neural networks are used to upscale artificially downscaled patches of levels from the puzzle platformer game Lode Runner.
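The sketch below illustrates the general recipe, assuming random stand-in data rather than Lode Runner levels: downscale tile patches, then train a small network to recover the original resolution:

```python
# Upscaling sketch: artificially downscale one-hot tile patches and train a
# small convolutional network to reconstruct the originals.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_tiles = 8                                    # illustrative number of tile types
patches = F.one_hot(torch.randint(0, n_tiles, (64, 16, 16)), n_tiles)
patches = patches.permute(0, 3, 1, 2).float()  # (N, C, H, W) originals
low_res = F.avg_pool2d(patches, 2)             # artificially downscaled inputs

upscaler = nn.Sequential(
    nn.Conv2d(n_tiles, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(32, n_tiles, 3, padding=1),
)
opt = torch.optim.Adam(upscaler.parameters(), lr=1e-3)
for _ in range(50):
    loss = F.cross_entropy(upscaler(low_res), patches.argmax(1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```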
arXiv Detail & Related papers (2023-08-03T05:23:07Z)
- Balancing of competitive two-player Game Levels with Reinforcement Learning [0.2793095554369281]
We propose an architecture for automated balancing of tile-based levels within the recently introduced PCGRL framework.
Our architecture is divided into three parts: (1) a level generator, (2) a balancing agent, and (3) a reward modeling simulation.
We show that this approach is capable of teaching an agent how to alter a level for better balance, doing so better and faster than plain PCGRL.
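A toy sketch of the kind of balancing reward such a setup could use, with a hypothetical `simulate_win_rate` standing in for the reward-modeling simulation:

```python
# Balancing reward sketch: after an edit, simulate matches between the two
# players and reward reductions in the win-rate gap.
import random

def simulate_win_rate(level, n_games=20):
    wins = sum(random.random() < 0.5 + 0.01 * sum(level) for _ in range(n_games))
    return wins / n_games                       # player 1's win rate (toy model)

def balancing_reward(level_before, level_after):
    gap_before = abs(simulate_win_rate(level_before) - 0.5)
    gap_after = abs(simulate_win_rate(level_after) - 0.5)
    return gap_before - gap_after               # positive if the edit improved balance

print(balancing_reward([1, 0, 1, 1], [1, 0, 1, 0]))
```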
arXiv Detail & Related papers (2023-06-07T13:40:20Z)
- Automated Graph Genetic Algorithm based Puzzle Validation for Faster Game Design [69.02688684221265]
This paper presents an evolutionary algorithm, informed by expert knowledge, for solving logical puzzles in video games efficiently.
We discuss multiple variations of hybrid genetic approaches for constraint satisfaction problems that allow us to find a diverse set of near-optimal solutions for puzzles.
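A minimal, generic sketch of a hybrid genetic loop with an expert-informed fitness term; the encoding here is a toy stand-in for the paper's graph-based representation:

```python
# Hybrid GA sketch for constraint satisfaction: mutate candidates and select by
# a fitness that mixes hard constraint checks with a heuristic prior.
import random

TARGET = [1, 0, 1, 1, 0, 1]                     # toy "solved puzzle" encoding

def fitness(candidate):
    constraints = sum(a == b for a, b in zip(candidate, TARGET))   # hard constraints
    heuristic = -abs(sum(candidate) - sum(TARGET))                 # expert-style prior
    return constraints + 0.1 * heuristic

def mutate(candidate):
    i = random.randrange(len(candidate))
    return candidate[:i] + [1 - candidate[i]] + candidate[i + 1:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(random.choice(population[:10])) for _ in range(20)]
print(max(population, key=fitness))
```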
arXiv Detail & Related papers (2023-02-17T18:15:33Z)
- ElegantRL-Podracer: Scalable and Elastic Library for Cloud-Native Deep Reinforcement Learning [141.58588761593955]
We present a library ElegantRL-podracer for cloud-native deep reinforcement learning.
It efficiently supports millions of cores to carry out massively parallel training at multiple levels.
At a low level, each pod simulates agent-environment interactions in parallel by fully utilizing nearly 7,000 GPU cores in a single GPU.
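A generic sketch of that low-level idea (not ElegantRL-podracer's actual API): keep many environments as batched tensors on one device so a single policy pass steps them all:

```python
# Batched-rollout sketch: thousands of toy environments live in one tensor, so
# stepping them is a handful of vectorized GPU operations.
import torch

n_envs, obs_dim, act_dim = 4096, 8, 4
device = "cuda" if torch.cuda.is_available() else "cpu"

states = torch.zeros(n_envs, obs_dim, device=device)       # all envs in one tensor
policy = torch.nn.Linear(obs_dim, act_dim).to(device)

def batched_step(states, actions):
    # Toy vectorized dynamics: every environment advances in one tensor op.
    next_states = states + 0.01 * actions.float().unsqueeze(-1)
    rewards = -next_states.abs().mean(dim=-1)
    return next_states, rewards

for _ in range(10):
    actions = policy(states).argmax(dim=-1)
    states, rewards = batched_step(states, actions)
print(rewards.mean().item())
```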
arXiv Detail & Related papers (2021-12-11T06:31:21Z)
- Experience-Driven PCG via Reinforcement Learning: A Super Mario Bros Study [2.2215852332444905]
The proposed experience-driven PCG framework is tested initially in the Super Mario Bros game.
The correctness of the generation is ensured by a neural net-assisted evolutionary level repairer.
Our proposed framework is capable of generating endless, playable Super Mario Bros levels.
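A hedged sketch of the generate-then-repair pattern, with toy stand-ins for both the generator and the neural-net-assisted repairer:

```python
# Generate-then-repair sketch: propose a level segment, check playability, and
# repair it when the check fails. Both components are toy placeholders.
import random

def generate_segment(length=10):
    return [random.choice("ground gap enemy".split()) for _ in range(length)]

def is_playable(segment):
    return all(a != "gap" or b != "gap" for a, b in zip(segment, segment[1:]))

def repair(segment):
    return ["ground" if t == "gap" and segment[i - 1] == "gap" else t
            for i, t in enumerate(segment)]

segment = generate_segment()
if not is_playable(segment):
    segment = repair(segment)
print(segment, is_playable(segment))
```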
arXiv Detail & Related papers (2021-06-30T08:10:45Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
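One common way to handle such categorical state spaces, shown here as an illustrative sketch rather than the paper's architecture, is to embed each categorical feature and feed the concatenated embeddings to the policy head:

```python
# Categorical-state policy sketch: each categorical feature (tile type, item
# id, status effect, ...) gets an embedding; sizes here are made up.
import torch
import torch.nn as nn

class CategoricalPolicy(nn.Module):
    def __init__(self, cardinalities, embed_dim=8, n_actions=6):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(c, embed_dim) for c in cardinalities)
        self.head = nn.Sequential(
            nn.Linear(embed_dim * len(cardinalities), 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):                       # x: (batch, n_features) integer ids
        parts = [emb(x[:, i]) for i, emb in enumerate(self.embeds)]
        return self.head(torch.cat(parts, dim=-1))

policy = CategoricalPolicy(cardinalities=[32, 16, 10])
logits = policy(torch.randint(0, 10, (4, 3)))
print(logits.shape)                             # (4, 6)
```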
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- The NetHack Learning Environment [79.06395964379107]
We present the NetHack Learning Environment (NLE), a procedurally generated rogue-like environment for Reinforcement Learning research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
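A minimal interaction loop with NLE, assuming `pip install nle` and the gym API its README used around release (newer gym/gymnasium versions may differ):

```python
# Random-agent rollout in the NetHack Learning Environment.
import gym
import nle  # registers the NetHack environments with gym

env = gym.make("NetHackScore-v0")
obs = env.reset()
total_reward = 0.0
for _ in range(1000):                       # cap the random rollout
    obs, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward
    if done:
        break
print("return after random play:", total_reward)
```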
arXiv Detail & Related papers (2020-06-24T14:12:56Z)
- Interactive Evolution and Exploration Within Latent Level-Design Space of Generative Adversarial Networks [8.091708140619946]
Latent Variable Evolution (LVE) has recently been applied to game levels.
This paper introduces a tool for interactive LVE of tile-based levels for games.
The tool also allows for direct exploration of the latent dimensions, and allows users to play discovered levels.
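A hedged sketch of interactive latent variable evolution with an untrained stand-in generator: latent vectors are mutated, decoded into levels, and the user's picks seed the next round:

```python
# Interactive LVE sketch: evolve latent codes and decode them into tile maps.
import torch

latent_dim = 32
generator = torch.nn.Linear(latent_dim, 10 * 10)        # stand-in for a trained GAN

def decode(z):
    return generator(z).reshape(10, 10).sigmoid().round()  # toy 10x10 tile map

def evolve(parents, n_children=6, sigma=0.3):
    return [p + sigma * torch.randn(latent_dim) for p in parents
            for _ in range(n_children // len(parents))]

population = [torch.randn(latent_dim) for _ in range(6)]
for _ in range(3):                                      # each loop = one user round
    levels = [decode(z) for z in population]
    chosen = population[:2]                             # stand-in for the user's selection
    population = evolve(chosen)
print(decode(population[0]))
```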
arXiv Detail & Related papers (2020-03-31T22:52:17Z)
- Controllable Level Blending between Games using Variational Autoencoders [6.217860411034386]
We train a VAE on level data from Super Mario Bros. and Kid Icarus, enabling it to capture the latent space spanning both games.
We then use this space to generate level segments that combine properties of levels from both games.
We argue that these affordances make the VAE-based approach especially suitable for co-creative level design.
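A sketch of the blending mechanism under illustrative assumptions (the VAE below is untrained and the segments are random stand-ins): encode a segment from each game, interpolate the latent codes, and decode the midpoint:

```python
# Latent blending sketch: average the latent codes of two level segments and
# decode the result with a shared decoder.
import torch
import torch.nn as nn

latent_dim, seg_size = 16, 14 * 14
encoder = nn.Linear(seg_size, 2 * latent_dim)           # outputs mean and log-variance
decoder = nn.Linear(latent_dim, seg_size)

def encode_mean(segment):
    return encoder(segment.flatten())[:latent_dim]      # use the mean as the code

mario_seg = torch.rand(14, 14)                          # stand-ins for real level data
icarus_seg = torch.rand(14, 14)

z = 0.5 * encode_mean(mario_seg) + 0.5 * encode_mean(icarus_seg)
blended = decoder(z).reshape(14, 14)
print(blended.shape)
```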
arXiv Detail & Related papers (2020-02-27T01:38:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.