Conditional Level Generation and Game Blending
- URL: http://arxiv.org/abs/2010.07735v1
- Date: Tue, 13 Oct 2020 00:28:20 GMT
- Title: Conditional Level Generation and Game Blending
- Authors: Anurag Sarkar, Zhihan Yang, Seth Cooper
- Abstract summary: We build on prior research by exploring the level design affordances and applications enabled by conditional VAEs (CVAEs).
CVAEs augment VAEs by allowing them to be trained using labeled data, thus enabling outputs to be generated conditioned on some input.
We show that such models can assist in level design by generating levels with desired level elements and patterns as well as producing blended levels with desired combinations of games.
- Score: 6.217860411034386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior research has shown variational autoencoders (VAEs) to be useful for
generating and blending game levels by learning latent representations of
existing level data. We build on such models by exploring the level design
affordances and applications enabled by conditional VAEs (CVAEs). CVAEs augment
VAEs by allowing them to be trained using labeled data, thus enabling outputs
to be generated conditioned on some input. We studied how increased control in
the level generation process and the ability to produce desired outputs via
training on labeled game level data could build on prior PCGML methods. Through
our results of training CVAEs on levels from Super Mario Bros., Kid Icarus and
Mega Man, we show that such models can assist in level design by generating
levels with desired level elements and patterns as well as producing blended
levels with desired combinations of games.
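The conditioning mechanism the abstract describes can be illustrated with a minimal sketch: a label vector (e.g. which game a level segment belongs to) is concatenated to the encoder input and to the latent sample before decoding, so generation can be steered by choosing the label. This is a generic CVAE forward pass with untrained random weights, not the authors' model; the dimensions, the `TinyCVAE` name, and the 3-game one-hot label are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(label, num_classes):
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

class TinyCVAE:
    """Minimal forward-pass sketch of a conditional VAE.

    The key difference from a plain VAE: the label vector c is
    concatenated to the encoder input and to the latent sample
    before decoding, so outputs are conditioned on c. Weights are
    random; no training loop is shown.
    """
    def __init__(self, x_dim, c_dim, z_dim):
        self.z_dim = z_dim
        # encoder: (x, c) -> mean and log-variance of q(z | x, c)
        self.W_mu = rng.normal(size=(x_dim + c_dim, z_dim)) * 0.1
        self.W_lv = rng.normal(size=(x_dim + c_dim, z_dim)) * 0.1
        # decoder: (z, c) -> reconstruction of x
        self.W_dec = rng.normal(size=(z_dim + c_dim, x_dim)) * 0.1

    def encode(self, x, c):
        h = np.concatenate([x, c])
        return h @ self.W_mu, h @ self.W_lv

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, the standard VAE reparameterization
        eps = rng.normal(size=mu.shape)
        return mu + np.exp(0.5 * logvar) * eps

    def decode(self, z, c):
        return np.concatenate([z, c]) @ self.W_dec

    def generate(self, c):
        # sample z from the prior, then condition the decoder on c
        z = rng.normal(size=self.z_dim)
        return self.decode(z, c)

# a flattened 4x4 "level segment" with a 3-game label
# (hypothetically: Super Mario Bros. / Kid Icarus / Mega Man)
model = TinyCVAE(x_dim=16, c_dim=3, z_dim=4)
x = rng.normal(size=16)
c = one_hot(0, 3)                        # condition on "game 0"
mu, logvar = model.encode(x, c)
recon = model.decode(model.reparameterize(mu, logvar), c)
sample = model.generate(one_hot(1, 3))   # generate as "game 1"
print(recon.shape, sample.shape)         # (16,) (16,)
```

In a trained model, swapping the label passed to `generate` is what produces levels with desired element patterns or blended game combinations.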
Related papers
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z)
- Octopus: Embodied Vision-Language Programmer from Environmental Feedback [58.04529328728999]
Embodied vision-language models (VLMs) have achieved substantial progress in multimodal perception and reasoning.
To bridge this gap, we introduce Octopus, an embodied vision-language programmer that uses executable code generation as a medium to connect planning and manipulation.
Octopus is designed to 1) proficiently comprehend an agent's visual and textual task objectives, 2) formulate intricate action sequences, and 3) generate executable code.
arXiv Detail & Related papers (2023-10-12T17:59:58Z)
- Level Generation Through Large Language Models [3.620115940532283]
Large Language Models (LLMs) are powerful tools capable of leveraging their training on natural language to write stories, generate code, and answer questions.
But can they generate functional video game levels?
Game levels, with their complex functional constraints and spatial relationships in more than one dimension, are very different from the kinds of data an LLM typically sees during training.
We investigate the use of LLMs to generate levels for the game Sokoban, finding that LLMs are indeed capable of doing so, and that their performance scales dramatically with dataset size.
arXiv Detail & Related papers (2023-02-11T23:34:42Z)
- MLP Architectures for Vision-and-Language Modeling: An Empirical Study [91.6393550858739]
We initiate the first empirical study on the use of MLP architectures for vision-and-language (VL) fusion.
We find that without pre-training, using MLPs for multimodal fusion has a noticeable performance gap compared to transformers.
Instead of heavy multi-head attention, adding tiny one-head attention to encoders is sufficient to achieve comparable performance to transformers.
arXiv Detail & Related papers (2021-12-08T18:26:19Z)
- Controllable Data Augmentation Through Deep Relighting [75.96144853354362]
We explore how to augment a varied set of image datasets through relighting so as to improve the ability of existing models to be invariant to illumination changes.
We develop a tool, based on an encoder-decoder network, that is able to quickly generate multiple variations of the illumination of various input scenes.
We demonstrate that by training models on datasets that have been augmented with our pipeline, it is possible to achieve higher performance on localization benchmarks.
arXiv Detail & Related papers (2021-10-26T20:02:51Z)
- Regularizing Transformers With Deep Probabilistic Layers [62.997667081978825]
In this work, we demonstrate how the inclusion of deep generative models within BERT can bring more versatile models.
We prove its effectiveness not only in Transformers but also in the most relevant encoder-decoder based LM, seq2seq with and without attention.
arXiv Detail & Related papers (2021-08-23T10:17:02Z)
- Dungeon and Platformer Level Blending and Generation using Conditional VAEs [7.919213739992465]
Conditional VAEs (CVAEs) were recently shown capable of generating output that can be modified using labels specifying desired content.
We expand these works by using CVAEs for generating whole platformer and dungeon levels, and blending levels across these genres.
arXiv Detail & Related papers (2021-06-17T05:46:03Z)
- Level Generation for Angry Birds with Sequential VAE and Latent Variable Evolution [25.262831218008202]
We develop a deep-generative-model-based level generation for the game domain of Angry Birds.
Experiments show that the proposed level generator drastically improves the stability and diversity of generated levels.
arXiv Detail & Related papers (2021-04-13T11:23:39Z)
- Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks [75.69896269357005]
Mixup is the latest data augmentation technique that linearly interpolates input examples and the corresponding labels.
In this paper, we explore how to apply mixup to natural language processing tasks.
We incorporate mixup to transformer-based pre-trained architecture, named "mixup-transformer", for a wide range of NLP tasks.
arXiv Detail & Related papers (2020-10-05T23:37:30Z)
- Game Level Clustering and Generation using Gaussian Mixture VAEs [6.217860411034386]
Variational autoencoders (VAEs) have been shown to be able to generate game levels but require manual exploration of the learned latent space to generate outputs with desired attributes.
In this paper, we apply a variant of the VAE which imposes a mixture of Gaussians (GM) on the latent space, unlike regular VAEs which impose a unimodal Gaussian.
This allows GMVAEs to cluster levels in an unsupervised manner using the components of the GM and then generate new levels using the learned components.
arXiv Detail & Related papers (2020-08-22T15:07:30Z)
- Controllable Level Blending between Games using Variational Autoencoders [6.217860411034386]
We train a VAE on level data from Super Mario Bros. and Kid Icarus, enabling it to capture the latent space spanning both games.
We then use this space to generate level segments that combine properties of levels from both games.
We argue that these affordances make the VAE-based approach especially suitable for co-creative level design.
arXiv Detail & Related papers (2020-02-27T01:38:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.