Level Generation with Constrained Expressive Range
- URL: http://arxiv.org/abs/2504.05334v1
- Date: Fri, 04 Apr 2025 20:55:30 GMT
- Title: Level Generation with Constrained Expressive Range
- Authors: Mahsa Bazzaz, Seth Cooper
- Abstract summary: Expressive range analysis is a visualization-based technique used to evaluate the performance of generative models. In this work, we use the expressive range of a generator as the conceptual space of possible creations. To do so, we use a constraint-based generator that systematically traverses and generates levels in this space.
- Score: 3.2228025627337864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Expressive range analysis is a visualization-based technique used to evaluate the performance of generative models, particularly in game level generation. It typically employs two quantifiable metrics to position generated artifacts on a 2D plot, offering insight into how content is distributed within a defined metric space. In this work, we use the expressive range of a generator as the conceptual space of possible creations. Inspired by the quality diversity paradigm, we explore this space to generate levels. To do so, we use a constraint-based generator that systematically traverses and generates levels in this space. To train the constraint-based generator, we use different tile patterns learned from the initial example levels. We analyze how different patterns influence the exploration of the expressive range. Specifically, we compare the exploration process based on time, the number of successful and failed sample generations, and the overall interestingness of the generated levels. Unlike typical quality diversity approaches that rely on random generation and hope to achieve good coverage of the expressive range, this approach systematically traverses the grid, ensuring more complete coverage. This helps create unique and interesting game levels while also improving our understanding of the generator's strengths and limitations.
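The approach described in the abstract can be sketched concretely. Below is a minimal, hypothetical Python sketch, not the authors' code: two illustrative metrics place each tile-based level in a cell of a 2D expressive-range grid, and a traversal loop systematically queries a constraint-based generator for every cell rather than sampling randomly. The metric names (`density`, `leniency`), the `'X'`/`'.'` tile encoding, and the `generate(x, y)` interface are assumptions for illustration only.

```python
# Hypothetical sketch of expressive range analysis with systematic
# grid traversal; the metrics and tile encoding are illustrative,
# not the paper's actual choices.

def density(level):
    """Fraction of solid tiles ('X') in a level given as rows of strings."""
    tiles = "".join(level)
    return sum(c == "X" for c in tiles) / len(tiles)

def leniency(level):
    """Fraction of rows containing at least one solid tile (a toy metric)."""
    return sum("X" in row for row in level) / len(level)

def cell(level, bins=10):
    """Map a level to its (x, y) cell in a bins x bins expressive-range grid."""
    x = min(int(density(level) * bins), bins - 1)
    y = min(int(leniency(level) * bins), bins - 1)
    return (x, y)

def coverage(levels, bins=10):
    """Fraction of grid cells occupied by at least one level."""
    return len({cell(lvl, bins) for lvl in levels}) / (bins * bins)

def traverse(generate, bins=10):
    """Visit every grid cell in order, asking a constraint-based generator
    for a level whose metrics fall in that cell.  `generate(x, y)` is
    assumed to return a level, or None if the constraints are unsatisfiable.
    Returns the generated levels and the number of failed cells."""
    levels, failures = [], 0
    for x in range(bins):
        for y in range(bins):
            lvl = generate(x, y)
            if lvl is None:
                failures += 1
            else:
                levels.append(lvl)
    return levels, failures
```

Unlike random quality-diversity sampling, `traverse` touches each cell exactly once, so the success/failure counts it returns directly expose which regions of the expressive range the generator can and cannot reach, which is the comparison the paper performs across tile patterns.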
Related papers
- WeGen: A Unified Model for Interactive Multimodal Generation as We Chat [51.78489661490396]
We introduce WeGen, a model that unifies multimodal generation and understanding. It can generate diverse results with high creativity for less detailed instructions. We show it achieves state-of-the-art performance across various visual generation benchmarks.
arXiv Detail & Related papers (2025-03-03T02:50:07Z)
- Exploring Minecraft Settlement Generators with Generative Shift Analysis [1.591012510488751]
We introduce a novel method for evaluating the impact of individual stages in a PCG pipeline by quantifying the impact that a generative process has when it is applied to a pre-existing artifact.
We explore this technique by applying it to a very rich dataset of Minecraft game maps produced by a set of alternative settlement generators developed as part of the Generative Design in Minecraft Competition (GDMC).
While this is an early exploration of this technique, we find it to be a promising lens to apply to PCG evaluation, and we are optimistic about the potential of Generative Shift as a domain-agnostic evaluation method.
arXiv Detail & Related papers (2023-09-11T10:48:42Z)
- Contrastive Learning for Diverse Disentangled Foreground Generation [67.81298739373766]
We introduce a new method for diverse foreground generation with explicit control over various factors.
We leverage contrastive learning with latent codes to generate diverse foreground results for the same masked input.
Experiments demonstrate the superiority of our method over state-of-the-art approaches in result diversity and generation controllability.
arXiv Detail & Related papers (2022-11-04T18:51:04Z)
- Local and Global GANs with Semantic-Aware Upsampling for Image Generation [201.39323496042527]
We consider generating images using local context.
We propose a class-specific generative network using semantic maps as guidance.
Lastly, we propose a novel semantic-aware upsampling method.
arXiv Detail & Related papers (2022-02-28T19:24:25Z)
- Multi-level Latent Space Structuring for Generative Control [53.240701050423155]
We propose to leverage the StyleGAN generative architecture to devise a new truncation technique.
We do so by learning to re-generate W-space, the extended intermediate latent space of StyleGAN, using a learnable mixture of Gaussians.
The resulting truncation scheme is more faithful to the original untruncated samples and allows a better trade-off between quality and diversity.
arXiv Detail & Related papers (2022-02-11T21:26:17Z)
- Illuminating Diverse Neural Cellular Automata for Level Generation [5.294599496581041]
We present a method of generating a collection of neural cellular automata (NCA) to design video game levels.
Our approach can train diverse level generators, whose output levels vary based on aesthetic or functional criteria.
We apply our new method to generate level generators for several 2D tile-based games: a maze game, Sokoban, and Zelda.
arXiv Detail & Related papers (2021-09-12T11:17:31Z)
- Toward Spatially Unbiased Generative Models [19.269719158344508]
Recent image generation models show remarkable generation performance.
However, they mirror the strong location preference present in their training datasets, which we call spatial bias.
We argue that the generators rely on their implicit positional encoding to render spatial content.
arXiv Detail & Related papers (2021-08-03T04:13:03Z)
- Learning Controllable Content Generators [5.5805433423452895]
We train generators capable of producing controllably diverse output by making them "goal-aware".
We show that the resulting level generators are capable of exploring the space of possible levels in a targeted, controllable manner.
arXiv Detail & Related papers (2021-05-06T22:15:51Z)
- Level Generation for Angry Birds with Sequential VAE and Latent Variable Evolution [25.262831218008202]
We develop a deep-generative-model-based level generation method for the game domain of Angry Birds.
Experiments show that the proposed level generator drastically improves the stability and diversity of generated levels.
arXiv Detail & Related papers (2021-04-13T11:23:39Z)
- Slimmable Generative Adversarial Networks [54.61774365777226]
Generative adversarial networks (GANs) have achieved remarkable progress in recent years, but the continuously growing scale of models makes them challenging to deploy widely in practical applications.
In this paper, we introduce slimmable GANs, which can flexibly switch the width of the generator to accommodate various quality-efficiency trade-offs at runtime.
arXiv Detail & Related papers (2020-12-10T13:35:22Z)
- Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation [135.4660201856059]
We consider learning the scene generation in a local context, and design a local class-specific generative network with semantic maps as a guidance.
To learn more discriminative class-specific feature representations for the local generation, a novel classification module is also proposed.
Experiments on two scene image generation tasks show superior generation performance of the proposed model.
arXiv Detail & Related papers (2019-12-27T16:14:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.