Procedural Content Generation using Neuroevolution and Novelty Search
for Diverse Video Game Levels
- URL: http://arxiv.org/abs/2204.06934v1
- Date: Thu, 14 Apr 2022 12:54:32 GMT
- Title: Procedural Content Generation using Neuroevolution and Novelty Search
for Diverse Video Game Levels
- Authors: Michael Beukman and Christopher W Cleghorn and Steven James
- Abstract summary: Procedurally generated video game content has the potential to drastically reduce the content creation budget of game developers and large studios.
However, adoption is hindered by limitations such as slow generation, as well as low quality and diversity of content.
We introduce an evolutionary search-based approach for evolving level generators using novelty search to procedurally generate diverse levels in real time.
- Score: 2.320417845168326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Procedurally generated video game content has the potential to drastically
reduce the content creation budget of game developers and large studios.
However, adoption is hindered by limitations such as slow generation, as well
as low quality and diversity of content. We introduce an evolutionary
search-based approach for evolving level generators using novelty search to
procedurally generate diverse levels in real time, without requiring training
data or detailed domain-specific knowledge. We test our method on two domains,
and our results show an order of magnitude speedup in generation time compared
to existing methods while obtaining comparable metric scores. We further
demonstrate the ability to generalise to arbitrary-sized levels without
retraining.
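
The abstract describes the approach only at a high level; as a rough illustration of the general technique it names, the sketch below shows generic novelty search over a population of small coordinate-based neural level generators. It is not the authors' implementation: the tile set, the two-layer generator network, the tile-frequency behaviour descriptor, and all hyperparameters are assumptions made for this example.

```python
# Minimal, illustrative novelty search over neuroevolved level generators.
# NOT the paper's code: tile set, network shape, behaviour descriptor and
# hyperparameters are assumptions chosen purely for illustration.
import numpy as np

TILES = 3        # assumed tile set, e.g. empty / wall / hazard
HIDDEN = 16      # hidden units in the tiny generator network
K_NEAREST = 5    # neighbours used in the novelty score

def random_genome(rng):
    """A genome is the flat weight vector of a small 2-layer network."""
    return rng.normal(0.0, 1.0, size=(3 * HIDDEN + HIDDEN + HIDDEN * TILES + TILES))

def decode(genome):
    """Unpack the flat genome into weight matrices and biases."""
    i = 0
    w1 = genome[i:i + 3 * HIDDEN].reshape(3, HIDDEN); i += 3 * HIDDEN
    b1 = genome[i:i + HIDDEN]; i += HIDDEN
    w2 = genome[i:i + HIDDEN * TILES].reshape(HIDDEN, TILES); i += HIDDEN * TILES
    b2 = genome[i:]
    return w1, b1, w2, b2

def generate_level(genome, height, width):
    """Query the network once per cell with (x, y, bias) inputs -> tile type.
    Because the input is per-cell, the same genome works for any level size."""
    w1, b1, w2, b2 = decode(genome)
    ys, xs = np.meshgrid(np.linspace(-1, 1, height), np.linspace(-1, 1, width), indexing="ij")
    feats = np.stack([xs.ravel(), ys.ravel(), np.ones(height * width)], axis=1)
    logits = np.tanh(feats @ w1 + b1) @ w2 + b2
    return logits.argmax(axis=1).reshape(height, width)

def behaviour(level):
    """Behaviour descriptor: tile-frequency histogram (an assumed choice)."""
    return np.bincount(level.ravel(), minlength=TILES) / level.size

def novelty(desc, others):
    """Mean distance to the K nearest descriptors in the population + archive."""
    dists = np.sort([np.linalg.norm(desc - o) for o in others])
    return dists[1:K_NEAREST + 1].mean()   # skip the zero distance to itself

def novelty_search(generations=30, pop_size=20, level_shape=(10, 10), seed=0):
    rng = np.random.default_rng(seed)
    population = [random_genome(rng) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        descs = [behaviour(generate_level(g, *level_shape)) for g in population]
        scores = [novelty(d, descs + archive) for d in descs]
        # Archive the most novel behaviour, then keep and mutate the most novel half.
        archive.append(descs[int(np.argmax(scores))])
        order = np.argsort(scores)[::-1][:pop_size // 2]
        parents = [population[i] for i in order]
        population = parents + [p + rng.normal(0, 0.1, p.shape) for p in parents]
    return population

if __name__ == "__main__":
    generators = novelty_search()
    # The same evolved generator can be queried at a larger size without retraining.
    print(generate_level(generators[0], 14, 20))
```

Because each generator is queried once per cell with normalised (x, y) coordinates, the same evolved genome can be sampled at any level size, which mirrors the size-generalisation property claimed in the abstract.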
Related papers
- DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary
Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z) - On the Convergence of No-Regret Learning Dynamics in Time-Varying Games [89.96815099996132]
We characterize the convergence of optimistic gradient descent (OGD) in time-varying games.
Our framework yields sharp convergence bounds for the equilibrium gap of OGD in zero-sum games.
We also provide new insights on dynamic regret guarantees in static games.
arXiv Detail & Related papers (2023-01-26T17:25:45Z) - Combining Evolutionary Search with Behaviour Cloning for Procedurally
Generated Content [2.7412662946127755]
We consider the problem of procedural content generation for video game levels.
Prior approaches have relied on evolutionary search (ES) methods capable of generating diverse levels.
We propose a framework to tackle the procedural content generation problem that combines the best of ES and RL.
arXiv Detail & Related papers (2022-07-29T16:25:52Z) - Towards Objective Metrics for Procedurally Generated Video Game Levels [2.320417845168326]
We introduce two simulation-based evaluation metrics to measure the diversity and difficulty of generated levels.
We demonstrate that our diversity metric is more robust to changes in level size and representation than current methods.
The difficulty metric shows promise: it correlates with existing estimates of difficulty in one of the tested domains, but faces challenges in the other.
arXiv Detail & Related papers (2022-01-25T14:13:50Z) - Evaluating Continual Learning Algorithms by Generating 3D Virtual
Environments [66.83839051693695]
Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
We propose to leverage recent advances in 3D virtual environments in order to approach the automatic generation of potentially life-long dynamic scenes with photo-realistic appearance.
A novel element of this paper is that scenes are described in a parametric way, thus allowing the user to fully control the visual complexity of the input stream the agent perceives.
arXiv Detail & Related papers (2021-09-16T10:37:21Z) - Deep Learning for Procedural Content Generation [14.533560910477693]
A research field centered on content generation in games has existed for more than a decade.
Deep learning has powered a remarkable range of inventions in content production.
This article surveys the various deep learning methods that have been applied to generate game content directly or indirectly.
arXiv Detail & Related papers (2020-10-09T13:08:37Z) - Co-generation of game levels and game-playing agents [4.4447051343759965]
This paper introduces a POET-Inspired Neuroevolutionary System for KreativitY (PINSKY) in games.
Results demonstrate the ability of PINSKY to generate curricula of game levels, opening up a promising new avenue for research at the intersection of content generation and artificial life.
arXiv Detail & Related papers (2020-07-16T17:48:05Z) - Incorporating Music Knowledge in Continual Dataset Augmentation for
Music Generation [69.06413031969674]
Aug-Gen is a method of dataset augmentation for any music generation system trained on a resource-constrained domain.
We apply Aug-Gen to Transformer-based chorale generation in the style of J.S. Bach, and show that this allows for longer training and results in better generative output.
arXiv Detail & Related papers (2020-06-23T21:06:15Z) - Human Motion Transfer from Poses in the Wild [61.6016458288803]
We tackle the problem of human motion transfer, where we synthesize novel motion video for a target person that imitates the movement from a reference video.
It is a video-to-video translation task in which the estimated poses are used to bridge two domains.
We introduce a novel pose-to-video translation framework for generating high-quality videos that are temporally coherent even for in-the-wild pose sequences unseen during training.
arXiv Detail & Related papers (2020-04-07T05:59:53Z) - Non-Adversarial Video Synthesis with Learned Priors [53.26777815740381]
We focus on the problem of generating videos from latent noise vectors, without any reference input frames.
We develop a novel approach that jointly optimizes the input latent space, the weights of a recurrent neural network, and a generator through non-adversarial learning.
Our approach generates superior quality videos compared to the existing state-of-the-art methods.
arXiv Detail & Related papers (2020-03-21T02:57:33Z) - Learning to Generate Levels From Nothing [5.2508303190856624]
We propose Generative Playing Networks, a framework that designs levels for itself to play.
The algorithm is built from two parts: an agent that learns to play game levels, and a generator that learns the distribution of playable levels.
We demonstrate the capability of this framework by training an agent and level generator for a 2D dungeon crawler game.
arXiv Detail & Related papers (2020-02-12T22:07:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.