Level generation and style enhancement -- deep learning for game
development overview
- URL: http://arxiv.org/abs/2107.07397v1
- Date: Thu, 15 Jul 2021 15:24:43 GMT
- Title: Level generation and style enhancement -- deep learning for game
development overview
- Authors: Piotr Migdał, Bartłomiej Olechno, Błażej Podgórski
- Abstract summary: We present seven approaches to create level maps, each using statistical methods, machine learning, or deep learning.
We aim to present new possibilities for game developers and level artists.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present practical approaches to using deep learning to create and enhance
level maps and textures for video games -- desktop, mobile, and web. We aim to
present new possibilities for game developers and level artists. The task of
designing levels and filling them with details is challenging. It is both
time-consuming and takes effort to make levels rich, complex, and with a
feeling of being natural. Fortunately, recent progress in deep learning
provides new tools to accompany level designers and visual artists. Moreover,
they offer a way to generate infinite worlds for game replayability and adjust
educational games to players' needs. We present seven approaches to create
level maps, each using statistical methods, machine learning, or deep learning.
In particular, we include:
- Generative Adversarial Networks for creating new images from existing
examples (e.g. ProGAN).
- Super-resolution techniques for upscaling images while preserving crisp
detail (e.g. ESRGAN).
- Neural style transfer for changing visual themes.
- Image translation - turning semantic maps into images (e.g. GauGAN).
- Semantic segmentation for turning images into semantic masks (e.g. U-Net).
- Unsupervised semantic segmentation for extracting semantic features (e.g.
Tile2Vec).
- Texture synthesis - creating large patterns based on a smaller sample (e.g.
InGAN).
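As a concrete illustration of the "statistical methods" end of this spectrum, below is a minimal toy sketch of a Markov-chain tile generator (our own illustration, not code from the paper): transition counts are estimated from a small hand-made example row, and new rows are sampled tile by tile.

```python
import random

EXAMPLE = "..##..#...##..#..##."   # training sample: '.' = floor, '#' = wall

def train(sample):
    """Count left-neighbour -> next-tile transitions."""
    counts = {}
    for a, b in zip(sample, sample[1:]):
        counts.setdefault(a, []).append(b)
    return counts

def generate(counts, length):
    tile = random.choice(list(counts))
    row = [tile]
    for _ in range(length - 1):
        tile = random.choice(counts[tile])   # sample next tile given current
        row.append(tile)
    return "".join(row)

transitions = train(EXAMPLE)
for _ in range(4):                           # print four generated map rows
    print(generate(transitions, 40))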
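For the deep-learning end, the neural style transfer item above can be sketched with the classic Gatys-style optimization against pretrained VGG19 features from torchvision. This is a generic sketch, not the paper's code; the image file names are placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
prep = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def load(path):
    return prep(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
vgg = vgg.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE, CONTENT = [0, 5, 10, 19, 28], 21   # VGG19 conv layer indices

def features(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE or i == CONTENT:
            feats[i] = x
    return feats

def gram(f):
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

# Placeholder inputs: a level texture to keep, a theme image to imitate.
content, style = load("level_tileset.png"), load("theme_reference.png")
target = content.clone().requires_grad_(True)
style_grams = {i: gram(f) for i, f in features(style).items() if i in STYLE}
content_feat = features(content)[CONTENT]

optimizer = torch.optim.Adam([target], lr=0.02)
for step in range(300):
    feats = features(target)
    loss = F.mse_loss(feats[CONTENT], content_feat)
    loss = loss + 1e4 * sum(F.mse_loss(gram(feats[i]), style_grams[i])
                            for i in STYLE)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# `target` is in normalized space; undo the Normalize transform to view it.
```

The 1e4 style weight is a typical starting point; in practice it is tuned per content/style pair.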
Related papers
- PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions [66.92809850624118]
PixWizard is an image-to-image visual assistant designed for image generation, manipulation, and translation based on free-form language instructions.
We unify a variety of vision tasks within a single image-text-to-image generation framework and curate an Omni Pixel-to-Pixel Instruction-Tuning dataset.
Our experiments demonstrate that PixWizard not only shows impressive generative and understanding abilities for images of diverse resolutions but also exhibits promising generalization to unseen tasks and human instructions.
arXiv Detail & Related papers (2024-09-23T17:59:46Z) - TextureDreamer: Image-guided Texture Synthesis through Geometry-aware
Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z) - Stroke-based Rendering: From Heuristics to Deep Learning [0.17188280334580194]
Recent developments in deep learning methods help to bridge the gap between stroke-based paintings and pixel photo generation.
We aim to provide a structured introduction and understanding of common challenges and approaches in stroke-based rendering algorithms.
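The heuristic baseline that these learned methods improve on is easy to make concrete: a greedy loop proposes random strokes and keeps only those that reduce pixel error against the target photo. A toy sketch (our illustration, with "photo.png" as a placeholder input):

```python
import numpy as np
from PIL import Image, ImageDraw

# Greedy heuristic stroke-based rendering: propose a random stroke and
# keep it only if it lowers the mean squared error against the target.
target = np.asarray(Image.open("photo.png").convert("RGB"), dtype=np.float64)
h, w, _ = target.shape
canvas = Image.new("RGB", (w, h), "white")

def error(img):
    return np.mean((np.asarray(img, dtype=np.float64) - target) ** 2)

best = error(canvas)
rng = np.random.default_rng(0)
for _ in range(5000):
    trial = canvas.copy()
    draw = ImageDraw.Draw(trial)
    x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
    r = int(rng.integers(2, max(3, w // 8)))
    color = tuple(int(c) for c in target[y, x])   # sample color from the photo
    draw.ellipse([x - r, y - r, x + r, y + r], fill=color)
    e = error(trial)
    if e < best:
        canvas, best = trial, e

canvas.save("painted.png")
```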
arXiv Detail & Related papers (2022-12-30T05:34:54Z) - Looks Like Magic: Transfer Learning in GANs to Generate New Card
Illustrations [5.006086647446482]
We introduce a novel dataset, named MTG, with thousands of illustrations from diverse card types and rich in metadata.
We show that simpler models, such as DCGANs, are not able to learn to generate proper illustrations in any setting.
We perform experiments to understand how well pre-trained features from StyleGAN2 can be transferred to the target domain.
arXiv Detail & Related papers (2022-05-28T14:02:09Z) - A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation and content reconstruction, along with coarse-to-fine adversarial reasoning.
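A hedged sketch of the general idea, assuming a PatchGAN-style trunk (our illustration, not the paper's architecture): one shared convolutional backbone feeds both an adversarial real/fake head and a coarse segmentation head, so the discriminator's latent code is forced to encode semantics.

```python
import torch
import torch.nn as nn

class MultiTaskDiscriminator(nn.Module):
    """Shared backbone with adversarial and segmentation heads (sketch)."""
    def __init__(self, num_classes: int = 19):   # 19 = placeholder class count
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.adv_head = nn.Conv2d(256, 1, 3, padding=1)            # patch real/fake
        self.seg_head = nn.Conv2d(256, num_classes, 3, padding=1)  # coarse masks

    def forward(self, x):
        z = self.backbone(x)           # shared latent representation
        return self.adv_head(z), self.seg_head(z)

d = MultiTaskDiscriminator()
logits, seg = d(torch.randn(1, 3, 256, 256))
print(logits.shape, seg.shape)         # (1, 1, 32, 32), (1, 19, 32, 32)
```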
arXiv Detail & Related papers (2021-12-09T18:59:21Z) - Fully Context-Aware Image Inpainting with a Learned Semantic Pyramid [102.24539566851809]
Restoring reasonable and realistic content for arbitrary missing regions in images is an important yet challenging task.
Recent image inpainting models have made significant progress in generating vivid visual details, but they can still lead to texture blurring or structural distortions.
We propose the Semantic Pyramid Network (SPN) motivated by the idea that learning multi-scale semantic priors can greatly benefit the recovery of locally missing content in images.
arXiv Detail & Related papers (2021-12-08T04:33:33Z) - Tile Embedding: A General Representation for Procedural Level Generation
via Machine Learning [1.590611306750623]
We present tile embeddings, a unified, affordance-rich representation for tile-based 2D games.
We employ autoencoders trained on the visual and semantic information of tiles from a set of existing, human-annotated games.
We evaluate this representation on its ability to predict affordances for unseen tiles, and to serve as a PLGML representation for annotated and unannotated games.
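A minimal sketch of the idea under assumed sizes (16x16 RGB tiles and a 13-dimensional binary affordance vector, both placeholders rather than the paper's exact settings): a single autoencoder embeds pixels and affordances into one latent tile code.

```python
import torch
import torch.nn as nn

class TileAutoencoder(nn.Module):
    """Joint visual + affordance tile embedding (illustrative sketch)."""
    def __init__(self, n_affordances: int = 13, latent: int = 64):
        super().__init__()
        self.enc_img = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 16 * 3, 256), nn.ReLU())
        self.enc_aff = nn.Sequential(nn.Linear(n_affordances, 32), nn.ReLU())
        self.to_latent = nn.Linear(256 + 32, latent)
        self.dec_img = nn.Sequential(nn.Linear(latent, 16 * 16 * 3), nn.Sigmoid())
        self.dec_aff = nn.Linear(latent, n_affordances)   # affordance logits

    def forward(self, tile, affordances):
        z = self.to_latent(torch.cat(
            [self.enc_img(tile), self.enc_aff(affordances)], dim=-1))
        return self.dec_img(z).view(-1, 3, 16, 16), self.dec_aff(z), z

model = TileAutoencoder()
tile = torch.rand(8, 3, 16, 16)                       # batch of 16x16 tiles
aff = torch.randint(0, 2, (8, 13)).float()            # binary affordances
recon, aff_logits, embedding = model(tile, aff)
```

Training would minimize an image-reconstruction loss plus an affordance-prediction loss (e.g. MSE plus BCEWithLogitsLoss), so that the latent code `embedding` carries both kinds of information.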
arXiv Detail & Related papers (2021-10-07T04:48:48Z) - Context-Aware Image Inpainting with Learned Semantic Priors [100.99543516733341]
We introduce pretext tasks that are semantically meaningful for estimating the missing contents.
We propose a context-aware image inpainting model, which adaptively integrates global semantics and local features.
arXiv Detail & Related papers (2021-06-14T08:09:43Z) - MarioNette: Self-Supervised Sprite Learning [67.51317291061115]
We propose a deep learning approach for obtaining a graphically disentangled representation of recurring elements.
By jointly learning a dictionary of texture patches and training a network that places them onto a canvas, we effectively deconstruct sprite-based content into a sparse, consistent, and interpretable representation.
arXiv Detail & Related papers (2021-04-29T17:59:01Z) - Learning to Generate Levels From Nothing [5.2508303190856624]
We propose Generative Playing Networks, a framework that designs levels for itself to play.
The algorithm is built in two parts: an agent that learns to play game levels, and a generator that learns the distribution of playable levels.
We demonstrate the capability of this framework by training an agent and level generator for a 2D dungeon crawler game.
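To make the two-part loop concrete, here is a deliberately toy, self-contained sketch (not the authors' algorithm): a scripted "agent" judges playability of 1D tile rows, and a Bernoulli generator is nudged, REINFORCE-style, toward rows the agent can complete.

```python
import numpy as np

rng = np.random.default_rng(0)
LENGTH = 12
theta = np.full(LENGTH, 0.5)          # generator: per-tile wall probability

def agent_plays(level):
    # Scripted stand-in for a learned agent: the runner fails whenever
    # it faces two consecutive wall tiles.
    return not any(level[i] and level[i + 1] for i in range(LENGTH - 1))

for step in range(2000):
    level = (rng.random(LENGTH) < theta).astype(int)   # sample a level
    reward = 1.0 if agent_plays(level) else 0.0
    # Crude policy-gradient-style update with a fixed 0.5 baseline:
    # move tile probabilities toward the sample when it was playable,
    # away from it otherwise.
    theta += 0.01 * (reward - 0.5) * (level - theta)
    theta = np.clip(theta, 0.01, 0.99)

print("learned wall probabilities:", np.round(theta, 2))
```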
arXiv Detail & Related papers (2020-02-12T22:07:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.