Reconstructing Existing Levels through Level Inpainting
- URL: http://arxiv.org/abs/2309.09472v3
- Date: Thu, 5 Oct 2023 01:45:37 GMT
- Title: Reconstructing Existing Levels through Level Inpainting
- Authors: Johor Jara Gonzalez, Matthew Guzdial
- Abstract summary: This paper introduces Content Augmentation and focuses on the subproblem of level inpainting.
We present two approaches for level inpainting: an Autoencoder and a U-net.
- Score: 3.1788052710897707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Procedural Content Generation (PCG) and Procedural Content Generation via
Machine Learning (PCGML) have been used in prior work for generating levels in
various games. This paper introduces Content Augmentation and focuses on the
subproblem of level inpainting, which involves reconstructing and extending
video game levels. Drawing inspiration from image inpainting, we adapt two
techniques from this domain to address our specific use case. We present two
approaches for level inpainting: an Autoencoder and a U-net. Through a
comprehensive case study, we demonstrate their superior performance compared to
a baseline method and discuss their relative merits. Furthermore, we provide a
practical demonstration of both approaches for the level inpainting task and
offer insights into potential directions for future research.
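To make the level-inpainting setup concrete: a level is typically encoded as a 2D grid of tile indices, a region is masked out, and a model must reconstruct the hidden tiles. The snippet below is a toy sketch of that framing only — the grid, the mask, and the majority-fill stand-in are hypothetical illustrations, not the paper's data, its autoencoder/U-net models, or its actual baseline.

```python
import numpy as np
from collections import Counter

def mask_region(level, top, left, h, w, mask_val=-1):
    """Hide a rectangular window of tiles, posing an inpainting problem."""
    masked = level.copy()
    masked[top:top + h, left:left + w] = mask_val
    return masked

def majority_fill(masked, mask_val=-1):
    """Naive stand-in reconstructor: fill hidden tiles with the most
    common visible tile (a learned model would predict them instead)."""
    visible = masked[masked != mask_val]
    fill = Counter(visible.tolist()).most_common(1)[0][0]
    out = masked.copy()
    out[out == mask_val] = fill
    return out

# A tiny tile-indexed level: 0 = sky, 1 = ground, 2 = block
level = np.array([
    [0, 0, 2, 0, 0, 0],
    [0, 0, 0, 0, 2, 0],
    [1, 1, 1, 1, 1, 1],
])
masked = mask_region(level, top=0, left=2, h=2, w=2)
repaired = majority_fill(masked)
```

Comparing `repaired` against `level` on the masked region gives a per-tile reconstruction error, which is the kind of quantity an autoencoder or U-net would be trained to minimize.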
Related papers
- BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion [61.90969199199739]
BrushNet is a novel plug-and-play dual-branch model engineered to embed pixel-level masked image features into any pre-trained diffusion model (DM).
Experiments show BrushNet's superior performance over existing models across seven key metrics, including image quality, mask region preservation, and textual coherence.
arXiv Detail & Related papers (2024-03-11T17:59:31Z)
- Deep Learning-based Image and Video Inpainting: A Survey [47.53641171826598]
This paper comprehensively reviews the deep learning-based methods for image and video inpainting.
We sort existing methods into different categories from the perspective of their high-level inpainting pipeline.
We present evaluation metrics for low-level pixel and high-level perceptional similarity, conduct a performance evaluation, and discuss the strengths and weaknesses of representative inpainting methods.
arXiv Detail & Related papers (2024-01-07T05:50:12Z)
- Segmentation-Based Parametric Painting [22.967620358813214]
We introduce a novel image-to-painting method that facilitates the creation of large-scale, high-fidelity paintings with human-like quality and stylistic variation.
We introduce a segmentation-based painting process and a dynamic attention map approach inspired by human painting strategies.
Our optimized batch processing and patch-based loss framework enable efficient handling of large canvases.
arXiv Detail & Related papers (2023-11-24T04:15:10Z)
- Interactive Neural Painting [66.9376011879115]
This paper proposes the first approach for Interactive Neural Painting (NP).
We propose I-Paint, a novel method based on a conditional transformer Variational AutoEncoder (VAE) architecture with a two-stage decoder.
Our experiments show that our approach provides good stroke suggestions and compares favorably to the state of the art.
arXiv Detail & Related papers (2023-07-31T07:02:00Z)
- Deep Image Matting: A Comprehensive Survey [85.77905619102802]
This paper presents a review of recent advancements in image matting in the era of deep learning.
We focus on two fundamental sub-tasks: auxiliary input-based image matting and automatic image matting.
We discuss relevant applications of image matting and highlight existing challenges and potential opportunities for future research.
arXiv Detail & Related papers (2023-04-10T15:48:55Z)
- Modeling Image Composition for Complex Scene Generation [77.10533862854706]
We present a method that achieves state-of-the-art results on layout-to-image generation tasks.
After compressing RGB images into patch tokens, we propose the Transformer with Focal Attention (TwFA) to model object-to-object, object-to-patch, and patch-to-patch dependencies.
arXiv Detail & Related papers (2022-06-02T08:34:25Z)
- Combining Semantic Guidance and Deep Reinforcement Learning for Generating Human Level Paintings [22.889059874754242]
Generation of stroke-based non-photorealistic imagery is an important problem in the computer vision community.
Previous methods have been limited to datasets with little variation in position, scale and saliency of the foreground object.
We propose a Semantic Guidance pipeline whose components include a bi-level painting procedure for learning the distinction between foreground and background brush strokes at training time.
arXiv Detail & Related papers (2020-11-25T09:00:04Z)
- SketchyCOCO: Image Generation from Freehand Scene Sketches [71.85577739612579]
We introduce the first method for automatic image generation from scene-level freehand sketches.
Its key contribution is an attribute vector bridged Generative Adversarial Network called EdgeGAN.
We have built a large-scale composite dataset called SketchyCOCO to support and evaluate the solution.
arXiv Detail & Related papers (2020-03-05T14:54:10Z)
- PCGRL: Procedural Content Generation via Reinforcement Learning [6.32656340734423]
We investigate how reinforcement learning can be used to train level-designing agents in games.
By framing the design problem as a sequential task, we can use reinforcement learning to learn how to take the next action.
This approach can be used when few or no examples exist to train from, and the trained generator is very fast.
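The sequential framing above — state is the current level, an action edits one tile, and reward measures design improvement — can be sketched as a tiny environment. Everything here is a hypothetical illustration (the tile types, the ground-ratio reward, the greedy edit loop), not PCGRL's actual environments or reward functions.

```python
import numpy as np

class TileFlipEnv:
    """Toy level-design environment in the spirit of the sequential
    framing: the agent edits one tile per step and is rewarded for
    moving the level's ground-tile ratio toward a target (hypothetical
    metric, chosen only for illustration)."""

    def __init__(self, h=4, w=6, target_ratio=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.level = rng.integers(0, 2, size=(h, w))  # 0 = empty, 1 = ground
        self.target = target_ratio

    def score(self):
        # Negative distance from the target ground ratio; 0.0 is perfect.
        return -abs(self.level.mean() - self.target)

    def step(self, y, x, tile):
        before = self.score()
        self.level[y, x] = tile
        return self.score() - before  # reward = improvement in the metric

env = TileFlipEnv()
```

An RL agent (or even a greedy loop calling `env.step` on whichever edit yields positive reward) would iteratively push the level toward the target metric, which also shows why the approach needs few or no training examples: the reward function, not a dataset, defines good levels.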
arXiv Detail & Related papers (2020-01-24T22:09:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.