Experience-Driven PCG via Reinforcement Learning: A Super Mario Bros Study
- URL: http://arxiv.org/abs/2106.15877v1
- Date: Wed, 30 Jun 2021 08:10:45 GMT
- Title: Experience-Driven PCG via Reinforcement Learning: A Super Mario Bros Study
- Authors: Tianye Shu, Jialin Liu, Georgios N. Yannakakis
- Abstract summary: The framework is tested initially in the Super Mario Bros game.
The correctness of the generation is ensured by a neural net-assisted evolutionary level repairer.
Our proposed framework is capable of generating endless, playable Super Mario Bros levels.
- Score: 2.2215852332444905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a procedural content generation (PCG) framework at the
intersection of experience-driven PCG and PCG via reinforcement learning,
named ED(PCG)RL, EDRL for short. EDRL trains RL designers to generate endless,
playable levels in an online manner while respecting particular player
experiences designed in the form of reward functions. The
framework is tested initially in the Super Mario Bros game. In particular, the
RL designers of Super Mario Bros generate and concatenate level segments while
considering the diversity among the segments. The correctness of the generation
is ensured by a neural net-assisted evolutionary level repairer and the
playability of the whole level is determined through AI-based testing. Our
agents in this EDRL implementation learn to maximise a quantification of
Koster's principle of fun by moderating the degree of diversity across level
segments. Moreover, we test their ability to design fun levels that are diverse
over time and playable. Our proposed framework is capable of generating
endless, playable Super Mario Bros levels with varying degrees of fun,
deviation from earlier segments, and playability. EDRL can be generalised to
any game that is built as a segment-based sequential process and features a
built-in compressed representation of its game content.
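The generation loop the abstract describes (an RL designer appending level segments one at a time, with a reward that keeps inter-segment diversity moderate, as a quantification of Koster's principle of fun) can be sketched as follows. The tile-difference diversity metric, the greedy stand-in policy, and the target value below are illustrative assumptions, not the paper's exact formulation.

```python
import random

TARGET_DIVERSITY = 0.5  # assumed sweet spot: neither repetitive nor chaotic

def diversity(seg_a, seg_b):
    """Fraction of tile positions that differ between two segments."""
    return sum(a != b for a, b in zip(seg_a, seg_b)) / len(seg_a)

def reward(seg_a, seg_b):
    """Peaks when diversity is moderate, drops when too low or too high."""
    return 1.0 - abs(diversity(seg_a, seg_b) - TARGET_DIVERSITY)

def generate_level(segment_pool, n_segments, rng):
    """Greedy stand-in for the trained RL policy: append the segment whose
    diversity with the previous one best matches the target."""
    level = [rng.choice(segment_pool)]
    total = 0.0
    for _ in range(n_segments - 1):
        best = max(segment_pool, key=lambda s: reward(level[-1], s))
        total += reward(level[-1], best)
        level.append(best)
    return level, total

rng = random.Random(0)
# Toy 16-tile segments; the paper works with a compressed (latent) encoding.
pool = ["".join(rng.choice("X-") for _ in range(16)) for _ in range(8)]
level, score = generate_level(pool, 5, rng)
```

In the actual framework the choice is made by a trained RL policy over a compressed representation of the segment, correctness is enforced by the evolutionary repairer, and playability is checked by an agent-based tester; the greedy argmax above only illustrates the reward shaping.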
Related papers
- PCGRL+: Scaling, Control and Generalization in Reinforcement Learning Level Generators [2.334978724544296]
Procedural Content Generation via Reinforcement Learning (PCGRL) has been introduced as a means by which controllable designer agents can be trained.
PCGRL offers a unique set of affordances for game designers, but it is constrained by the compute-intensive process of training RL agents.
We implement several PCGRL environments in Jax so that all aspects of learning and simulation happen in parallel on the GPU.
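The speed-up claim rests on the environment step being a pure function of state and action, so whole batches of environments can be mapped at once (in JAX, via `jax.vmap` under `jit` on the GPU). Below is a dependency-free sketch of that pattern; the binary level map and one-tile edit rule are invented placeholders, not the actual PCGRL environments.

```python
def step(state, action):
    """Pure step: flip one tile of a binary level map, return a new state
    without mutating the old one."""
    new_state = list(state)
    new_state[action % len(state)] ^= 1
    return tuple(new_state)

def batched_step(states, actions):
    """Map the pure step over a batch; jax.vmap would vectorise exactly
    this kind of loop on the GPU."""
    return [step(s, a) for s, a in zip(states, actions)]

states = [(0, 0, 0, 0)] * 3   # three identical toy environments
actions = [0, 1, 2]           # one edit action per environment
next_states = batched_step(states, actions)
```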
arXiv Detail & Related papers (2024-08-22T16:30:24Z)
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reducing the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, to read a game's original academic paper and use the knowledge learned to reason about and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
- Online Game Level Generation from Music [10.903226537887557]
OPARL is built upon experience-driven reinforcement learning and controllable reinforcement learning.
A novel control policy based on local search and k-nearest neighbours is proposed and integrated into OPARL to control the level generator.
Results of simulation-based experiments show that our implementation of OPARL can generate playable levels, in an online fashion, whose difficulty matches the "energy" dynamic of the music for different artificial players.
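As a rough illustration of a k-nearest-neighbour control policy of this kind: given a target feature (say, the music's energy at the current moment), select the generator parameters whose logged outcomes lie closest to that target. The history, parameters, and feature values here are invented placeholders, not the paper's actual policy.

```python
def knn_select(history, target, k=3):
    """history: list of (param, outcome) pairs. Return the k params whose
    recorded outcome is nearest the target feature value."""
    ranked = sorted(history, key=lambda pair: abs(pair[1] - target))
    return [param for param, _ in ranked[:k]]

# Hypothetical log: generator parameter -> difficulty it produced.
history = [(0.1, 0.2), (0.4, 0.5), (0.7, 0.8), (0.9, 0.95)]
candidates = knn_select(history, target=0.55, k=2)
```

In the paper this selection is combined with local search to refine the chosen parameters online; the sketch covers only the nearest-neighbour lookup.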
arXiv Detail & Related papers (2022-07-12T02:44:50Z)
- Multi-Game Decision Transformers [49.257185338595434]
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games like Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games [137.86426963572214]
We introduce DeepCrawl, a fully playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in videogames.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
- Illuminating Mario Scenes in the Latent Space of a Generative Adversarial Network [11.055580854275474]
We show how designers may specify gameplay measures to our system and extract high-quality (playable) levels with a diverse range of level mechanics.
An online user study shows how the different mechanics of the automatically generated levels affect subjective ratings of their perceived difficulty and appearance.
arXiv Detail & Related papers (2020-07-11T03:38:06Z)
- The NetHack Learning Environment [79.06395964379107]
We present the NetHack Learning Environment (NLE), a procedurally generated rogue-like environment for Reinforcement Learning research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
arXiv Detail & Related papers (2020-06-24T14:12:56Z)
- Interactive Evolution and Exploration Within Latent Level-Design Space of Generative Adversarial Networks [8.091708140619946]
Latent Variable Evolution (LVE) has recently been applied to game levels.
This paper introduces a tool for interactive LVE of tile-based levels for games.
The tool also allows for direct exploration of the latent dimensions, and allows users to play discovered levels.
arXiv Detail & Related papers (2020-03-31T22:52:17Z)
- Controllable Level Blending between Games using Variational Autoencoders [6.217860411034386]
We train a VAE on level data from Super Mario Bros. and Kid Icarus, enabling it to capture the latent space spanning both games.
We then use this space to generate level segments that combine properties of levels from both games.
We argue that these affordances make the VAE-based approach especially suitable for co-creative level design.
arXiv Detail & Related papers (2020-02-27T01:38:35Z)
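Level blending via a shared latent space, as summarised above, reduces to interpolating two latent codes and decoding the mixture. A minimal sketch of the interpolation step, with hypothetical latent vectors standing in for the trained VAE's encodings:

```python
def lerp(z_a, z_b, t):
    """Linear interpolation between two latent vectors, t in [0, 1]."""
    return [a * (1 - t) + b * t for a, b in zip(z_a, z_b)]

z_mario = [0.2, -1.1, 0.7]   # hypothetical latent code of a Mario segment
z_icarus = [1.4, 0.3, -0.5]  # hypothetical latent code of a Kid Icarus segment

# Sweep the blend weight to get codes that mix properties of both games;
# each code would then be passed through the VAE decoder to yield a segment.
blends = [lerp(z_mario, z_icarus, t / 4) for t in range(5)]
```

The endpoints of the sweep reproduce the original codes, and intermediate values of `t` give the controllable blends the paper describes.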
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.