Learning to Generate Levels by Imitating Evolution
- URL: http://arxiv.org/abs/2206.05497v1
- Date: Sat, 11 Jun 2022 10:44:57 GMT
- Title: Learning to Generate Levels by Imitating Evolution
- Authors: Ahmed Khalifa, Michael Cerny Green, Julian Togelius
- Abstract summary: We introduce a new type of iterative level generator using machine learning.
We train a model to imitate the evolutionary process and use the model to generate levels.
This trained model is able to modify noisy levels sequentially to create better levels without the need for a fitness function.
- Score: 7.110423254122942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Search-based procedural content generation (PCG) is a well-known method used
for level generation in games. Its key advantage is that it is generic and able
to satisfy functional constraints. However, due to the heavy computational
costs to run these algorithms online, search-based PCG is rarely utilized for
real-time generation. In this paper, we introduce a new type of iterative level
generator using machine learning. We train a model to imitate the evolutionary
process and use the model to generate levels. This trained model is able to
modify noisy levels sequentially to create better levels without the need for a
fitness function during inference. We evaluate our trained models on a 2D maze
generation task. We compare several different versions of the method: training
the models either at the end of evolution (normal evolution) or every 100
generations (assisted evolution) and using the model as a mutation function
during evolution. Using the assisted evolution process, the final trained
models are able to generate mazes with a success rate of 99% and high diversity
of 86%. This work opens the door to a new way of learning level generators
guided by the evolutionary process and perhaps will increase the adoption of
search-based PCG in the game industry.
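To make the abstract's inference procedure concrete, the following is a minimal sketch assuming a tile-grid maze encoding and a hypothetical `model.predict` edit interface; it illustrates the idea of sequentially repairing a noisy level, not the authors' actual code.

```python
import numpy as np

def generate_level(model, height=7, width=11, n_tiles=2, steps=100, rng=None):
    """Start from pure noise and let the trained model imitate evolutionary edits."""
    rng = rng or np.random.default_rng()
    level = rng.integers(0, n_tiles, size=(height, width))
    for _ in range(steps):
        # The model proposes one mutation (tile position + new tile value),
        # mimicking the edits it observed during evolution. No fitness
        # function is evaluated at inference time.
        y, x, tile = model.predict(level)
        level[y, x] = tile
    return level
```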
Related papers
- PCGRL+: Scaling, Control and Generalization in Reinforcement Learning Level Generators [2.334978724544296]
Procedural Content Generation via Reinforcement Learning (PCGRL) has been introduced as a means by which controllable designer agents can be trained.
PCGRL offers a unique set of affordances for game designers, but it is constrained by the compute-intensive process of training RL agents.
We implement several PCGRL environments in Jax so that all aspects of learning and simulation happen in parallel on the GPU.
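As a rough illustration of the core idea (not the actual PCGRL+ code), a toy tile-editing environment can be vectorized with `jax.vmap` so that thousands of rollouts step in lockstep on the GPU; the `step` function below is a hypothetical stand-in.

```python
import jax
import jax.numpy as jnp

def step(level, action):
    # Toy stand-in for a PCGRL-style environment: the action edits one tile.
    y, x, tile = action
    new_level = level.at[y, x].set(tile)
    reward = jnp.float32(0.0)  # placeholder; real envs score playability etc.
    return new_level, reward

# vmap lifts the single-env step to a batch of N environments; jit compiles
# the whole batched step so simulation and learning stay on the device.
batched_step = jax.jit(jax.vmap(step))

levels = jnp.zeros((4096, 7, 11), dtype=jnp.int32)   # N parallel levels
actions = jnp.zeros((4096, 3), dtype=jnp.int32)      # one edit per env
levels, rewards = batched_step(levels, actions)
```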
arXiv Detail & Related papers (2024-08-22T16:30:24Z)
- LVNS-RAVE: Diversified audio generation with RAVE and Latent Vector Novelty Search [0.5624791703748108]
We propose LVNS-RAVE, a method to combine Evolutionary Algorithms and Generative Deep Learning to produce realistic sounds.
The proposed algorithm can be a creative tool for sound artists and musicians.
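A hedged sketch of the latent-vector novelty-search component, assuming a pretrained decoder (RAVE in the paper) that is only needed when rendering results to audio; all names and hyperparameters below are illustrative.

```python
import numpy as np

def novelty(z, archive, k=5):
    """Mean distance to the k nearest neighbours in the archive."""
    d = np.linalg.norm(archive - z, axis=1)
    return np.sort(d)[:k].mean()

def novelty_search(dim=128, pop=32, gens=100, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.normal(size=(pop, dim))   # latent vectors for the decoder
    archive = population.copy()
    for _ in range(gens):
        children = population + sigma * rng.normal(size=population.shape)
        scores = np.array([novelty(z, archive) for z in children])
        elite = children[np.argsort(scores)[-pop // 2:]]   # most novel children
        population = np.vstack([population[:pop // 2], elite])
        archive = np.vstack([archive, elite])
    return population  # decode each latent with the generator to obtain audio
```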
arXiv Detail & Related papers (2024-04-22T10:20:41Z)
- LLM Guided Evolution - The Automation of Models Advancing Models [0.0]
"Guided Evolution" (GE) is a novel framework that diverges from traditional machine learning approaches.
"Evolution of Thought" (EoT) enhances GE by enabling LLMs to reflect on and learn from the outcomes of previous mutations.
Our application of GE in evolving the ExquisiteNetV2 model demonstrates its efficacy.
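In rough outline (an assumption-laden sketch, not the paper's code), GE's central move is to let an LLM act as the mutation operator, while EoT feeds the outcomes of earlier mutations back into the prompt; `llm` here is any text-in/text-out callable.

```python
def llm_mutate(llm, model_source: str, history: str = "") -> str:
    """Use an LLM as a mutation operator over model source code."""
    prompt = (
        "You are evolving a neural network. Propose one code mutation that "
        "may improve accuracy. Keep the code runnable.\n"
        f"Outcomes of previous mutations (Evolution of Thought):\n{history}\n"
        f"Current model source:\n{model_source}"
    )
    return llm(prompt)  # mutated source becomes a candidate in the next generation
```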
arXiv Detail & Related papers (2024-03-18T03:44:55Z)
- DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z)
- Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling [69.60713300418467]
Learning to jump is a general recipe for generative modeling of various types of data.
We demonstrate when learning to jump is expected to perform comparably to learning to denoise, and when it is expected to perform better.
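For count-valued data, the forward corruption is binomial thinning rather than Gaussian noise; below is a hedged sketch of that forward process only (the generative model then learns the reverse "jump" back to thicker counts).

```python
import numpy as np

rng = np.random.default_rng(0)

def thin(counts, keep_prob):
    # Binomial thinning: each unit of count survives with probability
    # keep_prob, the count-data analogue of adding noise in a forward
    # diffusion process.
    return rng.binomial(counts, keep_prob)

x = np.array([30, 12, 7])            # observed counts
for keep_prob in (0.9, 0.8, 0.7):    # progressively thinner counts
    x = thin(x, keep_prob)
```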
arXiv Detail & Related papers (2023-05-28T05:38:28Z)
- Improving Non-autoregressive Generation with Mixup Training [51.61038444990301]
We present a non-autoregressive generation model based on pre-trained transformer models.
We propose a simple and effective iterative training method called MIx Source and pseudo Target (MIST).
Our experiments on three generation benchmarks including question generation, summarization and paraphrase generation, show that the proposed framework achieves the new state-of-the-art results.
arXiv Detail & Related papers (2021-10-21T13:04:21Z)
- Evolving Evolutionary Algorithms with Patterns [0.0]
The model is based on the Multi Expression Programming (MEP) technique.
Several evolutionary algorithms for function optimization are evolved by using the considered model.
arXiv Detail & Related papers (2021-10-10T16:26:20Z)
- Epigenetic evolution of deep convolutional models [81.21462458089142]
We build upon a previously proposed neuroevolution framework to evolve deep convolutional models.
We propose a convolutional layer layout which allows kernels of different shapes and sizes to coexist within the same layer.
The proposed layout enables the size and shape of individual kernels within a convolutional layer to be evolved with a corresponding new mutation operator.
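As a toy sketch of such a mutation operator (the names and shape set are assumptions), a layer can be represented by the shapes of its kernels, one of which is resized per mutation.

```python
import random

KERNEL_SHAPES = [(1, 3), (3, 1), (3, 3), (5, 5), (7, 7)]

def mutate_kernel_shape(layer_shapes, rng=random):
    """layer_shapes: shapes of the kernels coexisting in one conv layer."""
    mutated = list(layer_shapes)
    i = rng.randrange(len(mutated))
    mutated[i] = rng.choice(KERNEL_SHAPES)   # resize/reshape a single kernel
    return mutated

layer = [(3, 3)] * 8                # start with uniform kernels
layer = mutate_kernel_shape(layer)  # shapes may now differ within the layer
```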
arXiv Detail & Related papers (2021-04-12T12:45:16Z)
- AdaLead: A simple and robust adaptive greedy search algorithm for sequence design [55.41644538483948]
We develop an easy-to-direct, scalable, and robust evolutionary greedy algorithm (AdaLead).
AdaLead is a remarkably strong benchmark that out-competes more complex state-of-the-art approaches in a variety of biologically motivated sequence design challenges.
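A simplified sketch of an evolutionary greedy loop in this spirit (the real AdaLead adds adaptive mutation thresholds and batched model queries; `fitness` stands in for the sequence-to-score model):

```python
import random

ALPHABET = "ACGT"  # e.g. DNA sequence design

def mutate(seq, rate=0.05):
    """Point-mutate each position with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def greedy_evolve(fitness, seed, rounds=10, children=50, survivors=5):
    pool = [seed]
    for _ in range(rounds):
        offspring = [mutate(p) for p in pool for _ in range(children)]
        ranked = sorted(set(pool + offspring), key=fitness, reverse=True)
        pool = ranked[:survivors]   # greedily keep the fittest sequences
    return pool[0]

best = greedy_evolve(lambda s: s.count("A"), seed="GGGGGGGGGG")
```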
arXiv Detail & Related papers (2020-10-05T16:40:38Z)
- Lineage Evolution Reinforcement Learning [15.469857142001482]
Lineage evolution reinforcement learning is a derivative algorithm that fits within a general agent-population learning system.
Our experiments show that the idea of evolution with lineage improves the performance of the original reinforcement learning algorithm in some Atari 2600 games.
arXiv Detail & Related papers (2020-09-26T11:58:16Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
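As a drastically simplified illustration (not the paper's search space or evolutionary method), programs can be represented as lists of register instructions over basic operations, mutated at random and kept when they reduce error.

```python
import operator
import random

OPS = [operator.add, operator.sub, operator.mul]

def random_instruction():
    # (op, src_a, src_b, dest) over a 4-slot register file
    return (random.choice(OPS), random.randrange(4),
            random.randrange(4), random.randrange(4))

def run(program, x):
    regs = [float(x), 1.0, 0.0, 0.0]
    for op, a, b, dest in program:
        regs[dest] = op(regs[a], regs[b])
    return regs[3]

def evolve(target_fn, length=4, gens=2000):
    best = [random_instruction() for _ in range(length)]
    best_err = float("inf")
    for _ in range(gens):
        child = list(best)
        child[random.randrange(length)] = random_instruction()  # point mutation
        err = sum((run(child, x) - target_fn(x)) ** 2 for x in range(-5, 6))
        if err < best_err:
            best, best_err = child, err
    return best

program = evolve(lambda x: 2 * x + 1)  # rediscover a tiny arithmetic program
```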
arXiv Detail & Related papers (2020-03-06T19:00:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.