Dungeon and Platformer Level Blending and Generation using Conditional VAEs
- URL: http://arxiv.org/abs/2106.12692v1
- Date: Thu, 17 Jun 2021 05:46:03 GMT
- Title: Dungeon and Platformer Level Blending and Generation using Conditional VAEs
- Authors: Anurag Sarkar, Seth Cooper
- Abstract summary: Conditional VAEs (CVAEs) were recently shown capable of generating output that can be modified using labels specifying desired content.
We expand these works by using CVAEs for generating whole platformer and dungeon levels, and blending levels across these genres.
- Score: 7.919213739992465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational autoencoders (VAEs) have been used in prior works for generating
and blending levels from different games. To add controllability to these
models, conditional VAEs (CVAEs) were recently shown capable of generating
output that can be modified using labels specifying desired content, albeit
working with segments of levels and platformers exclusively. We expand these
works by using CVAEs for generating whole platformer and dungeon levels, and
blending levels across these genres. We show that CVAEs can reliably control
door placement in dungeons and progression direction in platformer levels.
Thus, by using appropriate labels, our approach can generate whole dungeons and
platformer levels of interconnected rooms and segments respectively as well as
levels that blend dungeons and platformers. We demonstrate our approach using
The Legend of Zelda, Metroid, Mega Man and Lode Runner.
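The label-conditioning mechanism described in the abstract (steering generation with labels such as door placement) can be illustrated with a minimal NumPy sketch. All names, dimensions, and the randomly initialized "decoder" weights below are hypothetical placeholders for illustration, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper's games use their own
# tile sets and level dimensions).
LATENT_DIM = 8
LABEL_DIM = 4      # e.g. one-hot "door on north/south/east/west wall"
OUT_TILES = 16     # flattened 4x4 toy room

# Stand-in decoder weights; a real CVAE would learn these by training.
W = rng.normal(size=(LATENT_DIM + LABEL_DIM, OUT_TILES))

def decode(z, label):
    """Decode a latent vector conditioned on a label.

    The key CVAE idea: the label is concatenated to the latent code,
    so the same z can decode to different rooms under different labels.
    """
    h = np.concatenate([z, label])
    logits = h @ W
    return (logits > 0).astype(int)  # binary tile map (toy)

z = rng.normal(size=LATENT_DIM)
north = np.eye(LABEL_DIM)[0]   # "door on north wall"
east = np.eye(LABEL_DIM)[2]    # "door on east wall"

room_a = decode(z, north)
room_b = decode(z, east)
print(room_a.reshape(4, 4))
```

Because only the label input changes between `room_a` and `room_b`, any difference in the decoded rooms comes from the conditioning signal, which is exactly the controllability the paper exploits for door placement and progression direction.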
Related papers
- GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting [52.150502668874495]
We present GALA3D, generative 3D GAussians with LAyout-guided control, for effective compositional text-to-3D generation.
GALA3D is a user-friendly, end-to-end framework for state-of-the-art scene-level 3D content generation and controllable editing.
arXiv Detail & Related papers (2024-02-11T13:40:08Z) - CommonScenes: Generating Commonsense 3D Indoor Scenes with Scene Graph Diffusion [83.30168660888913]
We present CommonScenes, a fully generative model that converts scene graphs into corresponding controllable 3D scenes.
Our pipeline consists of two branches, one predicting the overall scene layout via a variational auto-encoder and the other generating compatible shapes.
The generated scenes can be manipulated by editing the input scene graph and sampling the noise in the diffusion model.
arXiv Detail & Related papers (2023-05-25T17:39:13Z) - Multi-Game Decision Transformers [49.257185338595434]
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z) - Illuminating the Space of Dungeon Maps, Locked-door Missions and Enemy Placement Through MAP-Elites [0.0]
This paper introduces an extended version of an evolutionary dungeon generator by incorporating a MAP-Elites population.
Our dungeon levels are discretized with rooms that may have locked-door missions and enemies within them.
We encoded the dungeons through a tree structure to ensure the feasibility of missions.
arXiv Detail & Related papers (2022-02-18T17:06:04Z) - AniFormer: Data-driven 3D Animation with Transformer [95.45760189583181]
We present a novel task, i.e., animating a target 3D object through the motion of a raw driving sequence.
AniFormer generates animated 3D sequences by directly taking the raw driving sequences and arbitrary same-type target meshes as inputs.
Our AniFormer achieves high-fidelity, realistic, temporally coherent animated results and outperforms state-of-the-art methods on benchmarks of diverse categories.
arXiv Detail & Related papers (2021-10-20T12:36:55Z) - Regularizing Transformers With Deep Probabilistic Layers [62.997667081978825]
In this work, we demonstrate how the inclusion of deep generative models within BERT can bring more versatile models.
We prove its effectiveness not only in Transformers but also in the most relevant encoder-decoder-based LM, seq2seq, with and without attention.
arXiv Detail & Related papers (2021-08-23T10:17:02Z) - Generating and Blending Game Levels via Quality-Diversity in the Latent Space of a Variational Autoencoder [7.919213739992465]
We present a level generation and game blending approach that combines the use of VAEs and QD algorithms.
Specifically, we train VAEs on game levels and then run the MAP-Elites QD algorithm using the learned latent space of the VAE as the search space.
arXiv Detail & Related papers (2021-02-24T18:44:23Z) - Reducing the Annotation Effort for Video Object Segmentation Datasets [50.893073670389164]
Densely labeling every frame with pixel masks does not scale to large datasets.
We use a deep convolutional network to automatically create pseudo-labels on a pixel level from much cheaper bounding box annotations.
We obtain the new TAO-VOS benchmark, which we make publicly available at www.vision.rwth-aachen.de/page/taovos.
arXiv Detail & Related papers (2020-11-02T17:34:45Z) - Conditional Level Generation and Game Blending [6.217860411034386]
We build on prior research by exploring the level design affordances and applications enabled by conditional VAEs (CVAEs).
CVAEs augment VAEs by allowing them to be trained using labeled data, thus enabling outputs to be generated conditioned on some input.
We show that such models can assist in level design by generating levels with desired level elements and patterns as well as producing blended levels with desired combinations of games.
arXiv Detail & Related papers (2020-10-13T00:28:20Z) - Game Level Clustering and Generation using Gaussian Mixture VAEs [6.217860411034386]
Variational autoencoders (VAEs) have been shown to be able to generate game levels but require manual exploration of the learned latent space to generate outputs with desired attributes.
In this paper, we apply a variant of the VAE which imposes a mixture of Gaussians (GM) on the latent space, unlike regular VAEs which impose a unimodal Gaussian.
This allows GMVAEs to cluster levels in an unsupervised manner using the components of the GM and then generate new levels using the learned components.
arXiv Detail & Related papers (2020-08-22T15:07:30Z) - Controllable Level Blending between Games using Variational Autoencoders [6.217860411034386]
We train a VAE on level data from Super Mario Bros. and Kid Icarus, enabling it to capture the latent space spanning both games.
We then use this space to generate level segments that combine properties of levels from both games.
We argue that these affordances make the VAE-based approach especially suitable for co-creative level design.
arXiv Detail & Related papers (2020-02-27T01:38:35Z)
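Several of the papers above blend games by operating in a VAE's learned latent space. A minimal sketch of that blending step, with hypothetical latent codes standing in for the outputs of a trained encoder:

```python
import numpy as np

# Hypothetical latent codes for a level segment from each game; in the
# papers above these would come from a VAE encoder trained on both games.
z_mario = np.array([0.9, -0.3, 0.1, 1.2])
z_icarus = np.array([-0.5, 0.8, -1.0, 0.2])

def blend(z_a, z_b, alpha):
    """Linear interpolation in latent space: alpha=0 recovers game A,
    alpha=1 recovers game B, and values in between blend the styles."""
    return (1.0 - alpha) * z_a + alpha * z_b

z_half = blend(z_mario, z_icarus, 0.5)
# Decoding z_half with the trained VAE decoder would yield a segment
# mixing properties of both games.
print(z_half)
```

Quality-diversity approaches such as MAP-Elites search this same latent space instead of interpolating directly, but the blended output in either case is produced by decoding an intermediate latent point.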
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.