Illuminating Diverse Neural Cellular Automata for Level Generation
- URL: http://arxiv.org/abs/2109.05489v1
- Date: Sun, 12 Sep 2021 11:17:31 GMT
- Title: Illuminating Diverse Neural Cellular Automata for Level Generation
- Authors: Sam Earle, Justin Snider, Matthew C. Fontaine, Stefanos Nikolaidis,
and Julian Togelius
- Abstract summary: We present a method of generating a collection of neural cellular automata (NCA) to design video game levels.
Our approach can train diverse level generators, whose output levels vary based on aesthetic or functional criteria.
We apply our new method to generate level generators for several 2D tile-based games: a maze game, Sokoban, and Zelda.
- Score: 5.294599496581041
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method of generating a collection of neural cellular automata
(NCA) to design video game levels. While NCAs have so far only been trained via
supervised learning, we present a quality diversity (QD) approach to generating
a collection of NCA level generators. By framing the problem as a QD problem,
our approach can train diverse level generators, whose output levels vary based
on aesthetic or functional criteria. To efficiently generate NCAs, we train
generators via Covariance Matrix Adaptation MAP-Elites (CMA-ME), a quality
diversity algorithm which specializes in continuous search spaces. We apply our
new method to generate level generators for several 2D tile-based games: a maze
game, Sokoban, and Zelda. Our results show that CMA-ME can generate small NCAs
that are diverse yet capable, often satisfying complex solvability criteria for
deterministic agents. We compare against a Compositional Pattern-Producing
Network (CPPN) baseline trained to produce diverse collections of generators
and show that the NCA representation yields a better exploration of
level-space.
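To make the two ingredients of the abstract concrete, here is a minimal, self-contained sketch: an NCA level generator whose per-cell update rule is a flat parameter vector, an evaluation that returns one objective and two behavior measures, and a simplified MAP-Elites loop over the weights. Everything in it is an illustrative assumption (the grid size, tile semantics, placeholder objective and measures, and the plain Gaussian-perturbation loop); the paper trains larger convolutional NCAs and searches them with CMA-ME (e.g., as implemented in the pyribs library), not this toy loop.

```python
# Illustrative sketch only -- NOT the authors' implementation. Assumes a toy
# 16x16 grid with 3 tile types, a placeholder objective and behavior measures,
# and a plain MAP-Elites loop with Gaussian perturbation instead of CMA-ME.
import numpy as np

H, W, TILES = 16, 16, 3      # assumed grid size and tile count
STEPS = 20                   # NCA update steps per rollout

def nca_step(grid, weights):
    """One NCA update: each cell reads its 3x3 neighborhood (one-hot tiles)
    and picks its next tile via a linear rule parameterized by `weights`."""
    onehot = np.eye(TILES)[grid]                          # (H, W, TILES)
    padded = np.pad(onehot, ((1, 1), (1, 1), (0, 0)))     # zero-pad the border
    rule = weights.reshape(TILES, 9 * TILES)              # per-tile linear rule
    new = np.empty_like(grid)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + 3, x:x + 3].reshape(-1)  # (9 * TILES,)
            new[y, x] = int(np.argmax(rule @ patch))
    return new

def generate(weights, seed=0):
    """Roll out the NCA from a fixed random seed level."""
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, TILES, size=(H, W))
    for _ in range(STEPS):
        grid = nca_step(grid, weights)
    return grid

def evaluate(weights):
    """Placeholder objective and behavior descriptors: objective = fraction of
    tile 0 (standing in for a solvability check), measures = counts of the
    other two tile types (standing in for aesthetic/functional descriptors)."""
    level = generate(weights)
    objective = float((level == 0).mean())
    measures = (int((level == 1).sum()), int((level == 2).sum()))
    return objective, measures

# Simplified MAP-Elites archive: bin the two measures and keep the best weight
# vector per bin. CMA-ME replaces the random perturbations below with
# CMA-ES-driven emitters that adapt their search distributions.
dim = TILES * 9 * TILES
rng = np.random.default_rng(0)
archive = {}                                              # bin -> (objective, weights)
for _ in range(200):
    if archive:
        key = list(archive.keys())[rng.integers(len(archive))]
        candidate = archive[key][1] + 0.1 * rng.standard_normal(dim)
    else:
        candidate = rng.standard_normal(dim)
    obj, meas = evaluate(candidate)
    cell = (meas[0] // 32, meas[1] // 32)                 # coarse behavior bins
    if cell not in archive or obj > archive[cell][0]:
        archive[cell] = (obj, candidate)

print(f"filled {len(archive)} cells; best objective "
      f"{max(v[0] for v in archive.values()):.2f}")
```

In the paper's actual setup, the continuous search space is the weight vector of a small convolutional NCA, the objective encodes level validity and solvability for deterministic agents, and the measures are the aesthetic or functional descriptors that define the archive; CMA-ME's emitters make this search far more sample-efficient than the naive loop above.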
Related papers
- Exploring Multiple Neighborhood Neural Cellular Automata (MNNCA) for
Enhanced Texture Learning [0.0]
Cellular Automata (CA) have long been foundational in simulating dynamical systems.
Recent innovations have brought Neural Cellular Automata (NCA) into the realm of deep learning.
The NCA framework allows cellular automata to be trained via gradient descent, enabling them to grow into specific shapes, generate textures, and mimic behaviors such as swarming.
Our research explores enhancing the NCA framework by incorporating multiple neighborhoods and introducing structured noise for seed states.
arXiv Detail & Related papers (2023-10-27T15:16:19Z)
- Multi-level Latent Space Structuring for Generative Control [53.240701050423155]
We propose to leverage the StyleGAN generative architecture to devise a new truncation technique.
We do so by learning to re-generate W-space, the extended intermediate latent space of StyleGAN, using a learnable mixture of Gaussians.
The resulting truncation scheme is more faithful to the original untruncated samples and allows a better trade-off between quality and diversity.
arXiv Detail & Related papers (2022-02-11T21:26:17Z)
- Hybrid Encoding For Generating Large Scale Game Level Patterns With Local Variations Using a GAN [5.144809478361604]
We propose a new hybrid approach that evolves CPPNs first, but allows the latent vectors to evolve later, and combines the benefits of both approaches.
These approaches are evaluated in Super Mario Bros. and The Legend of Zelda.
arXiv Detail & Related papers (2021-05-27T06:27:19Z)
- Learning Controllable Content Generators [5.5805433423452895]
We train generators capable of producing controllably diverse output by making them "goal-aware".
We show that the resulting level generators are capable of exploring the space of possible levels in a targeted, controllable manner.
arXiv Detail & Related papers (2021-05-06T22:15:51Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that this Gaussian Mixture Replay (GMR) approach achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
- Level Generation for Angry Birds with Sequential VAE and Latent Variable Evolution [25.262831218008202]
We develop a deep-generative-model-based level generation method for the game domain of Angry Birds.
Experiments show that the proposed level generator drastically improves the stability and diversity of generated levels.
arXiv Detail & Related papers (2021-04-13T11:23:39Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Searching towards Class-Aware Generators for Conditional Generative Adversarial Networks [132.29772160843825]
Conditional Generative Adversarial Networks (cGAN) were designed to generate images based on the provided conditions.
Existing methods have used the same generating architecture for all classes.
This paper presents a novel idea that adopts NAS to find a distinct architecture for each class.
arXiv Detail & Related papers (2020-06-25T07:05:28Z)
- Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN [80.17705319689139]
We propose a data-free knowledge amalgamation strategy to craft a well-behaved multi-task student network from multiple single-/multi-task teachers.
The proposed method achieves surprisingly competitive results without any training data, even compared with some fully supervised methods.
arXiv Detail & Related papers (2020-03-20T03:20:52Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)