Game Level Clustering and Generation using Gaussian Mixture VAEs
- URL: http://arxiv.org/abs/2009.09811v1
- Date: Sat, 22 Aug 2020 15:07:30 GMT
- Title: Game Level Clustering and Generation using Gaussian Mixture VAEs
- Authors: Zhihan Yang, Anurag Sarkar, Seth Cooper
- Abstract summary: Variational autoencoders (VAEs) have been shown to be able to generate game levels but require manual exploration of the learned latent space to generate outputs with desired attributes.
In this paper, we apply Gaussian Mixture VAEs (GMVAEs), a variant of the VAE which imposes a mixture of Gaussians (GM) on the latent space, unlike regular VAEs, which impose a unimodal Gaussian.
This allows GMVAEs to cluster levels in an unsupervised manner using the components of the GM and then generate new levels using the learned components.
- Score: 6.217860411034386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational autoencoders (VAEs) have been shown to be able to generate game
levels but require manual exploration of the learned latent space to generate
outputs with desired attributes. While conditional VAEs address this by
allowing generation to be conditioned on labels, such labels have to be
provided during training and thus require prior knowledge which may not always
be available. In this paper, we apply Gaussian Mixture VAEs (GMVAEs), a variant
of the VAE which imposes a mixture of Gaussians (GM) on the latent space,
unlike regular VAEs which impose a unimodal Gaussian. This allows GMVAEs to
cluster levels in an unsupervised manner using the components of the GM and
then generate new levels using the learned components. We demonstrate our
approach with levels from Super Mario Bros., Kid Icarus and Mega Man. Our
results show that the learned components discover and cluster level structures
and patterns and can be used to generate levels with desired characteristics.
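To make the mixture-prior idea concrete, the sketch below shows a minimal GMVAE in PyTorch: a learnable mixture-of-Gaussians prior over the latent space, unsupervised clustering of encoded levels by component responsibility, and generation of new levels by sampling from a chosen component. The layer sizes, the uniform mixing weights, and the Monte Carlo KL estimate are illustrative assumptions, not the authors' exact architecture or training setup.

```python
# Minimal GMVAE sketch (assumptions: PyTorch, levels flattened to one-hot tile
# vectors, hypothetical layer sizes; not the paper's exact implementation).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

LOG2PI = math.log(2 * math.pi)

class GMVAE(nn.Module):
    def __init__(self, input_dim, latent_dim=32, n_components=10):
        super().__init__()
        self.n_components = n_components
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.enc_mu = nn.Linear(256, latent_dim)
        self.enc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, input_dim))
        # Mixture-of-Gaussians prior: learnable component means and
        # log-variances, uniform mixing weights for simplicity.
        self.prior_mu = nn.Parameter(torch.randn(n_components, latent_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(n_components, latent_dim))

    def encode(self, x):
        h = self.encoder(x)
        return self.enc_mu(h), self.enc_logvar(h)

    def log_prior(self, z):
        # log p(z) under the mixture prior; z has shape (batch, latent_dim).
        diff = z.unsqueeze(1) - self.prior_mu                      # (B, K, D)
        log_comp = -0.5 * (diff ** 2 / self.prior_logvar.exp()
                           + self.prior_logvar + LOG2PI).sum(-1)   # (B, K)
        return torch.logsumexp(log_comp, dim=1) - math.log(self.n_components)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterize
        recon = self.decoder(z)
        recon_loss = F.binary_cross_entropy_with_logits(
            recon, x, reduction='none').sum(-1).mean()
        # Monte Carlo estimate of KL(q(z|x) || p(z)) with the mixture prior.
        log_q = -0.5 * ((z - mu) ** 2 / logvar.exp() + logvar + LOG2PI).sum(-1)
        kl = (log_q - self.log_prior(z)).mean()
        return recon_loss + kl

    @torch.no_grad()
    def cluster(self, x):
        # Unsupervised clustering: assign each level to its most
        # responsible mixture component.
        mu, _ = self.encode(x)
        diff = mu.unsqueeze(1) - self.prior_mu
        log_comp = -0.5 * (diff ** 2 / self.prior_logvar.exp()
                           + self.prior_logvar).sum(-1)
        return log_comp.argmax(dim=1)

    @torch.no_grad()
    def generate(self, component, n=1):
        # Generate new levels by sampling latents from one learned component.
        mu = self.prior_mu[component]
        std = torch.exp(0.5 * self.prior_logvar[component])
        z = mu + torch.randn(n, mu.shape[0]) * std
        return torch.sigmoid(self.decoder(z))
```

For tile-based levels such as the Super Mario Bros., Kid Icarus and Mega Man segments used in the paper, x would typically be a flattened one-hot window (e.g. input_dim = height * width * num_tile_types, all hypothetical values here); after training, cluster() groups existing levels without labels and generate(k) produces new levels in the style of cluster k.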
Related papers
- The Mamba in the Llama: Distilling and Accelerating Hybrid Models [76.64055251296548]
We show that it is feasible to distill large Transformers into linear RNNs by reusing the linear projection weights from attention layers, using academic GPU resources.
The resulting hybrid model, which incorporates a quarter of the attention layers, achieves performance comparable to the original Transformer in chat benchmarks.
arXiv Detail & Related papers (2024-08-27T17:56:11Z)
- GaussianDreamerPro: Text to Manipulable 3D Gaussians with Highly Enhanced Quality [99.63429416013713]
3D-GS has achieved great success in reconstructing and rendering real-world scenes.
To transfer the high rendering quality to generation tasks, a series of research works attempt to generate 3D-Gaussian assets from text.
We propose a novel framework named GaussianDreamerPro to enhance the generation quality.
arXiv Detail & Related papers (2024-06-26T16:12:09Z)
- GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models [74.0430727476634]
We propose a new family of segmentation models that rely on a dense generative classifier for the joint distribution p(pixel feature, class).
With a variety of segmentation architectures and backbones, GMMSeg outperforms the discriminative counterparts on closed-set datasets.
GMMSeg even performs well on open-world datasets.
arXiv Detail & Related papers (2022-10-05T05:20:49Z)
- Latent Combinational Game Design [4.8951183832371]
We present an approach for generating playable games that blend a given set of games in a desired combination using deep generative latent variable models.
Results show that these approaches can generate playable games that blend the input games in specified combinations.
arXiv Detail & Related papers (2022-06-28T17:54:17Z)
- Illuminating Diverse Neural Cellular Automata for Level Generation [5.294599496581041]
We present a method of generating a collection of neural cellular automata (NCA) to design video game levels.
Our approach can train diverse level generators, whose output levels vary based on aesthetic or functional criteria.
We apply our new method to generate level generators for several 2D tile-based games: a maze game, Sokoban, and Zelda.
arXiv Detail & Related papers (2021-09-12T11:17:31Z)
- Regularizing Transformers With Deep Probabilistic Layers [62.997667081978825]
In this work, we demonstrate how the inclusion of deep generative models within BERT can bring more versatile models.
We prove its effectiveness not only in Transformers but also in the most relevant encoder-decoder based LM, seq2seq, with and without attention.
arXiv Detail & Related papers (2021-08-23T10:17:02Z)
- Dungeon and Platformer Level Blending and Generation using Conditional VAEs [7.919213739992465]
Conditional VAEs (CVAEs) were recently shown to be capable of generating output that can be modified using labels specifying desired content.
We expand these works by using CVAEs for generating whole platformer and dungeon levels, and blending levels across these genres.
arXiv Detail & Related papers (2021-06-17T05:46:03Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that GMR achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
- Conditional Level Generation and Game Blending [6.217860411034386]
We build on prior research by exploring the level design affordances and applications enabled by conditional VAEs (CVAEs).
CVAEs augment VAEs by allowing them to be trained using labeled data, thus enabling outputs to be generated conditioned on some input (a minimal sketch of this conditioning mechanism follows after this list).
We show that such models can assist in level design by generating levels with desired level elements and patterns as well as producing blended levels with desired combinations of games.
arXiv Detail & Related papers (2020-10-13T00:28:20Z)
- Generalized Zero-Shot Learning via VAE-Conditioned Generative Flow [83.27681781274406]
Generalized zero-shot learning aims to recognize both seen and unseen classes by transferring knowledge from semantic descriptions to visual representations.
Recent generative methods formulate GZSL as a missing data problem, which mainly adopts GANs or VAEs to generate visual features for unseen classes.
We propose a conditional version of generative flows for GZSL, i.e., VAE-Conditioned Generative Flow (VAE-cFlow).
arXiv Detail & Related papers (2020-09-01T09:12:31Z)
- Controllable Level Blending between Games using Variational Autoencoders [6.217860411034386]
We train a VAE on level data from Super Mario Bros. and Kid Icarus, enabling it to capture the latent space spanning both games.
We then use this space to generate level segments that combine properties of levels from both games.
We argue that these affordances make the VAE-based approach especially suitable for co-creative level design.
arXiv Detail & Related papers (2020-02-27T01:38:35Z)
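Several of the related entries above (Conditional Level Generation and Game Blending; Dungeon and Platformer Level Blending and Generation using Conditional VAEs) rely on conditioning a VAE on labels rather than on a mixture prior. The sketch below shows that conditioning mechanism under the same illustrative assumptions as the GMVAE sketch (PyTorch, flattened one-hot level segments, hypothetical layer sizes, and a hypothetical label vector such as a one-hot game identity or element counts); it is not the exact model from those papers.

```python
# Minimal conditional VAE (CVAE) sketch: the label y is concatenated to the
# encoder input and to the latent code before decoding, so generation can be
# steered by y instead of by manual latent-space search. Hypothetical sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, input_dim, label_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim + label_dim, 256), nn.ReLU())
        self.enc_mu = nn.Linear(256, latent_dim)
        self.enc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + label_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim))

    def forward(self, x, y):
        # Condition both the encoder and the decoder on the label y.
        h = self.encoder(torch.cat([x, y], dim=-1))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(torch.cat([z, y], dim=-1))
        recon_loss = F.binary_cross_entropy_with_logits(
            recon, x, reduction='none').sum(-1).mean()
        # Standard unimodal Gaussian prior, closed-form KL term.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon_loss + kl

    @torch.no_grad()
    def generate(self, y):
        # Sample from the standard-normal prior and decode with the desired label.
        z = torch.randn(y.shape[0], self.enc_mu.out_features)
        return torch.sigmoid(self.decoder(torch.cat([z, y], dim=-1)))
```

At generation time the label can encode, for example, which game's style to produce or which level elements to include, mirroring the conditioning described in those entries.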