Learning to Generate Levels From Nothing
- URL: http://arxiv.org/abs/2002.05259v2
- Date: Mon, 9 Aug 2021 19:10:52 GMT
- Title: Learning to Generate Levels From Nothing
- Authors: Philip Bontrager and Julian Togelius
- Abstract summary: We propose Generative Playing Networks, which design levels for themselves to play.
The algorithm is built in two parts: an agent that learns to play game levels, and a generator that learns the distribution of playable levels.
We demonstrate the capability of this framework by training an agent and level generator for a 2D dungeon crawler game.
- Score: 5.2508303190856624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning for procedural content generation has recently become an
active area of research. Levels vary in both form and function and are mostly
unrelated to each other across games. This has made it difficult to assemble
suitably large datasets to bring machine learning to level design in the same
way as it has been used for image generation. Here we propose Generative
Playing Networks, which design levels for themselves to play. The algorithm is
built in two parts: an agent that learns to play game levels, and a generator
that learns
the distribution of playable levels. As the agent learns and improves its
ability, the space of playable levels, as defined by the agent, grows. The
generator then targets the agent's playability estimates to update its
understanding of what constitutes a playable level. We call this process of
learning the distribution of data found through self-discovery with an
environment, self-supervised inductive learning. Unlike previous approaches to
procedural content generation, Generative Playing Networks are end-to-end
differentiable and do not require human-designed examples or domain knowledge.
We demonstrate the capability of this framework by training an agent and level
generator for a 2D dungeon crawler game.
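The abstract describes the two-part training loop only at a high level. Below is a minimal sketch of that loop, assuming a PyTorch-style setup in which the critic's playability estimate is differentiable with respect to the generated level; the class names (Generator, Critic), the level dimensions, and the stubbed rollout returns are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Generative Playing Networks loop described in the
# abstract. All names, network sizes, and the placeholder rollout returns
# below are assumptions for illustration only.
import torch
import torch.nn as nn

LATENT_DIM, LEVEL_CELLS, NUM_TILES = 32, 12 * 16, 4  # assumed level size

class Generator(nn.Module):
    """Maps a latent code to tile logits for a candidate level."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, LEVEL_CELLS * NUM_TILES),
        )
    def forward(self, z):
        return self.net(z).view(-1, LEVEL_CELLS, NUM_TILES)

class Critic(nn.Module):
    """Estimates how playable a level is; fit to the agent's returns."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LEVEL_CELLS * NUM_TILES, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
    def forward(self, level_logits):
        return self.net(level_logits.flatten(1))

generator, critic = Generator(), Critic()
gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

for step in range(1000):
    z = torch.randn(16, LATENT_DIM)
    levels = generator(z)

    # 1) Agent phase (stubbed): the agent would play the decoded levels and
    #    report empirical playability signals (e.g. discounted returns).
    with torch.no_grad():
        returns = torch.rand(16, 1)  # placeholder for real rollout returns

    # 2) The critic regresses onto the agent's returns, so its estimate
    #    tracks the current space of playable levels.
    critic_loss = nn.functional.mse_loss(critic(levels.detach()), returns)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # 3) The generator ascends the critic's playability estimate end-to-end,
    #    shifting its output distribution toward levels the agent can solve.
    gen_loss = -critic(levels).mean()
    gen_opt.zero_grad()
    gen_loss.backward()
    gen_opt.step()
```

In the paper's framing, step 1 would be an actual reinforcement-learning agent playing the decoded levels; the idea retained here is that the generator is updated end-to-end through the agent's playability estimate rather than from human-designed examples.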
Related papers
- Joint Level Generation and Translation Using Gameplay Videos [0.9645196221785693]
Procedural Content Generation via Machine Learning (PCGML) faces a significant hurdle that sets it apart from other fields, such as image or text generation.
Many existing methods for procedural level generation via machine learning require a secondary representation besides level images.
We develop a novel multi-tail framework that learns to perform simultaneous level translation and generation.
arXiv Detail & Related papers (2023-06-29T03:46:44Z)
- Online Game Level Generation from Music [10.903226537887557]
OPARL is built upon the experience-driven reinforcement learning and controllable reinforcement learning.
A novel control policy based on local search and k-nearest neighbours is proposed and integrated into OPARL to control the level generator.
Results of simulation-based experiments show that our implementation of OPARL is able to generate playable levels online, with difficulty matched to the "energy" dynamic of the music for different artificial players.
arXiv Detail & Related papers (2022-07-12T02:44:50Z)
- MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge [70.47759528596711]
We introduce MineDojo, a new framework built on the popular Minecraft game.
We propose a novel agent learning algorithm that leverages large pre-trained video-language models as a learned reward function.
Our agent is able to solve a variety of open-ended tasks specified in free-form language without any manually designed dense shaping reward.
arXiv Detail & Related papers (2022-06-17T15:53:05Z)
- Procedural Content Generation using Neuroevolution and Novelty Search for Diverse Video Game Levels [2.320417845168326]
Procedurally generated video game content has the potential to drastically reduce the content creation budget of game developers and large studios.
However, adoption is hindered by limitations such as slow generation, as well as low quality and diversity of content.
We introduce an evolutionary search-based approach for evolving level generators using novelty search to procedurally generate diverse levels in real time.
arXiv Detail & Related papers (2022-04-14T12:54:32Z)
- Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments [66.83839051693695]
Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
We propose to leverage recent advances in 3D virtual environments in order to approach the automatic generation of potentially life-long dynamic scenes with photo-realistic appearance.
A novel element of this paper is that scenes are described in a parametric way, thus allowing the user to fully control the visual complexity of the input stream the agent perceives.
arXiv Detail & Related papers (2021-09-16T10:37:21Z)
- Pairing Character Classes in a Deathmatch Shooter Game via a Deep-Learning Surrogate Model [2.323282558557423]
The paper explores how deep learning can help build a surrogate model that takes the game level structure and the game's character class parameters as input and predicts gameplay outcomes as output.
The model is then used to generate classes for specific levels and for a desired game outcome, such as balanced matches of short duration.
arXiv Detail & Related papers (2021-03-29T09:34:24Z)
- Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new imitation-based machine learning model built on linguistic descriptions of complex phenomena.
The method can be a good alternative to design and implement the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- Co-generation of game levels and game-playing agents [4.4447051343759965]
This paper introduces a POET-Inspired Neuroevolutionary System for KreativitY (PINSKY) in games.
Results demonstrate the ability of PINSKY to generate curricula of game levels, opening up a promising new avenue for research at the intersection of content generation and artificial life.
arXiv Detail & Related papers (2020-07-16T17:48:05Z)
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.