World-GAN: a Generative Model for Minecraft Worlds
- URL: http://arxiv.org/abs/2106.10155v1
- Date: Fri, 18 Jun 2021 14:45:39 GMT
- Title: World-GAN: a Generative Model for Minecraft Worlds
- Authors: Maren Awiszus, Frederik Schubert, Bodo Rosenhahn
- Abstract summary: This work introduces World-GAN, the first method to perform data-driven Procedural Content Generation via Machine Learning in Minecraft.
Based on a 3D Generative Adversarial Network (GAN) architecture, we are able to create arbitrarily sized world snippets from a given sample.
- Score: 27.221938979891384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work introduces World-GAN, the first method to perform data-driven
Procedural Content Generation via Machine Learning in Minecraft from a single
example. Based on a 3D Generative Adversarial Network (GAN) architecture, we
are able to create arbitrarily sized world snippets from a given sample. We
evaluate our approach on creations from the community as well as structures
generated with the Minecraft World Generator. Our method is motivated by the
dense representations used in Natural Language Processing (NLP) introduced with
word2vec [1]. The proposed block2vec representations make World-GAN independent
from the number of different blocks, which can vary a lot in Minecraft, and
enable the generation of larger levels. Finally, we demonstrate that changing
this new representation space allows us to change the generated style of an
already trained generator. World-GAN enables its users to generate Minecraft
worlds based on parts of their creations.
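To make the block2vec idea concrete, the following minimal sketch (PyTorch, not the authors' code) trains dense block embeddings in the word2vec style: each block ID predicts the IDs of its axis-aligned 3D neighbors, so blocks that occur in similar contexts end up with similar vectors. The vocabulary size, embedding dimension, and training loop are illustrative assumptions.

```python
# Hypothetical sketch of a block2vec-style embedding (skip-gram over 3D neighbors).
# Not the authors' code; shapes, names and hyperparameters are assumptions.
import torch
import torch.nn as nn

NUM_BLOCK_TYPES = 64   # vocabulary size: distinct block IDs in the sample
EMBED_DIM = 32         # dense representation size used instead of one-hot channels

class Block2Vec(nn.Module):
    def __init__(self, num_blocks: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_blocks, dim)   # "input" block vectors
        self.out = nn.Linear(dim, num_blocks)        # predicts neighboring block IDs

    def forward(self, center_ids: torch.Tensor) -> torch.Tensor:
        return self.out(self.embed(center_ids))      # logits over neighbor block types

def neighbor_pairs(world: torch.Tensor):
    """Yield (center, neighbor) block-ID pairs from a 3D grid of block IDs."""
    X, Y, Z = world.shape
    for dx, dy, dz in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
        a = world[: X - dx, : Y - dy, : Z - dz].reshape(-1)
        b = world[dx:, dy:, dz:].reshape(-1)
        yield a, b
        yield b, a  # symmetric context

world = torch.randint(0, NUM_BLOCK_TYPES, (32, 32, 32))  # stand-in for a level snippet
model = Block2Vec(NUM_BLOCK_TYPES, EMBED_DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    for center, neighbor in neighbor_pairs(world):
        loss = nn.functional.cross_entropy(model(center), neighbor)
        opt.zero_grad()
        loss.backward()
        opt.step()

# model.embed.weight now holds one dense vector per block type; a generator can
# operate on these vectors instead of a one-hot channel per block ID.
```

Because a generator that consumes such fixed-size vectors no longer needs one one-hot channel per block type, the architecture stays independent of the block vocabulary; swapping in a different embedding table is one way to picture the style change described in the abstract.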
Related papers
- SynCity: Training-Free Generation of 3D Worlds [107.69875149880679]
We propose SynCity, a training- and optimization-free approach to generating 3D worlds from textual descriptions.
We show how 3D and 2D generators can be combined to generate ever-expanding scenes.
arXiv Detail & Related papers (2025-03-20T17:59:40Z)
- Word2Minecraft: Generating 3D Game Levels through Large Language Models [6.037493811943889]
We present Word2Minecraft, a system that generates playable game levels in Minecraft based on structured stories.
We introduce a flexible framework that allows for the customization of story complexity, enabling dynamic level generation.
We show that GPT-4-Turbo outperforms GPT-4o-Mini in most areas, including story coherence and objective enjoyment.
arXiv Detail & Related papers (2025-03-18T18:38:38Z)
- 3D Building Generation in Minecraft via Large Language Models [1.9670700129679104]
This paper explores how large language models (LLMs) contribute to the generation of 3D buildings in a sandbox game, Minecraft.
We propose a Text to Building in Minecraft (T2BM) model, which involves refining prompts, decoding an interlayer representation, and repairing the output.
arXiv Detail & Related papers (2024-06-13T02:21:07Z)
- DreamCraft: Text-Guided Generation of Functional 3D Environments in Minecraft [19.9639990460142]
We present a method for generating functional 3D artifacts from free-form text prompts in the open-world game Minecraft.
Our method, DreamCraft, trains quantized Neural Radiance Fields (NeRFs) to represent artifacts that, when viewed in-game, match given text descriptions.
We show how this can be leveraged to generate 3D structures that match a target distribution or obey certain adjacency rules over the block types.
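The summary above only names the ingredients; the sketch below is a loose, hypothetical illustration of a quantized block-type field, not DreamCraft's implementation: a small MLP maps 3D coordinates to block-type logits that are discretized into a block grid, while the text-matching objective, which in the real method comes from rendering and a pretrained text-image model, is replaced by a placeholder loss.

```python
# Hypothetical sketch: a field over 3D coordinates that outputs block-type logits,
# discretized into a Minecraft-style block grid. The text-guidance loss is a stub.
import torch
import torch.nn as nn

NUM_BLOCK_TYPES = 16
GRID = 16  # side length of the generated structure

field = nn.Sequential(               # tiny MLP "field": (x, y, z) -> block-type logits
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, NUM_BLOCK_TYPES),
)

coords = torch.stack(torch.meshgrid(
    *(torch.linspace(-1, 1, GRID),) * 3, indexing="ij"), dim=-1).reshape(-1, 3)

def text_guidance_loss(block_probs: torch.Tensor) -> torch.Tensor:
    """Stub for a loss that scores rendered blocks against a text prompt.
    Here: a placeholder that merely encourages confident (near one-hot) predictions."""
    return -(block_probs * block_probs.clamp_min(1e-8).log()).sum(dim=-1).mean()

opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for step in range(100):
    logits = field(coords)                          # (GRID**3, NUM_BLOCK_TYPES)
    probs = torch.softmax(logits, dim=-1)
    loss = text_guidance_loss(probs)
    opt.zero_grad(); loss.backward(); opt.step()

blocks = field(coords).argmax(dim=-1).reshape(GRID, GRID, GRID)  # discrete block IDs
```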
arXiv Detail & Related papers (2024-04-23T21:57:14Z)
- T-Pixel2Mesh: Combining Global and Local Transformer for 3D Mesh Generation from a Single Image [84.08705684778666]
We propose a novel Transformer-boosted architecture, named T-Pixel2Mesh, inspired by the coarse-to-fine approach of P2M.
Specifically, we use a global Transformer to control the holistic shape and a local Transformer to refine the local geometry details.
Our experiments on ShapeNet demonstrate state-of-the-art performance, while results on real-world data show the generalization capability.
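As a rough, hypothetical sketch of the coarse-to-fine idea, not the T-Pixel2Mesh architecture itself: a global Transformer attends over all mesh-vertex tokens for holistic shape, and a second Transformer applied to small vertex chunks stands in for local neighborhood refinement. Vertex count, feature size, and chunking are assumptions.

```python
# Hypothetical sketch of global-then-local attention over mesh vertices.
# Chunking stands in for true k-NN local neighborhoods; not the paper's architecture.
import torch
import torch.nn as nn

DIM, N_VERTS, CHUNK = 128, 2560, 64  # feature size, vertex count, local window size

class GlobalLocalRefiner(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.global_enc = nn.TransformerEncoder(layer(), num_layers=2)
        self.local_enc = nn.TransformerEncoder(layer(), num_layers=2)
        self.to_offset = nn.Linear(dim, 3)  # per-vertex 3D coordinate offset

    def forward(self, vert_feats: torch.Tensor) -> torch.Tensor:
        # vert_feats: (batch, N_VERTS, DIM) per-vertex features (e.g. sampled from an image)
        x = self.global_enc(vert_feats)                       # holistic shape context
        b, n, d = x.shape
        local = x.reshape(b * (n // CHUNK), CHUNK, d)         # split into local windows
        local = self.local_enc(local).reshape(b, n, d)        # refine local geometry
        return self.to_offset(local)                          # (batch, N_VERTS, 3) offsets

feats = torch.randn(1, N_VERTS, DIM)
offsets = GlobalLocalRefiner(DIM)(feats)   # added to coarse vertex positions downstream
```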
arXiv Detail & Related papers (2024-03-20T15:14:22Z)
- Minecraft-ify: Minecraft Style Image Generation with Text-guided Image Editing for In-Game Application [5.431779602239565]
Our method can generate face-focused images for texture mapping, tailored to 3D virtual characters with a cube-manifold geometry.
It can be manipulated with text-guidance using StyleGAN and StyleCLIP.
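For readers unfamiliar with this kind of editing, the sketch below shows generic StyleCLIP-style latent manipulation, not the paper's pipeline: a latent offset is optimized so a stubbed image encoder's output moves toward a stubbed text embedding while staying close to the original code. The generator and both encoders are random placeholders for StyleGAN and CLIP.

```python
# Hypothetical sketch of text-guided latent editing in the StyleCLIP spirit.
# Generator and encoders are random stubs standing in for StyleGAN / CLIP.
import torch
import torch.nn as nn

LATENT, IMG, EMBED = 512, 3 * 64 * 64, 256

generator = nn.Linear(LATENT, IMG)       # stub for a pretrained StyleGAN generator
image_encoder = nn.Linear(IMG, EMBED)    # stub for a CLIP-like image encoder
text_embedding = torch.randn(EMBED)      # stub for an encoded prompt

w0 = torch.randn(LATENT)                 # latent code of the image being edited
delta = torch.zeros(LATENT, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)

for step in range(200):
    img = generator(w0 + delta)
    img_emb = image_encoder(img)
    sim = torch.cosine_similarity(img_emb, text_embedding, dim=0)
    loss = -sim + 0.1 * delta.norm()     # move toward the text, stay close to the original
    opt.zero_grad(); loss.backward(); opt.step()

edited_image = generator(w0 + delta).detach().reshape(3, 64, 64)
```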
arXiv Detail & Related papers (2024-02-08T07:01:00Z)
- GETAvatar: Generative Textured Meshes for Animatable Human Avatars [69.56959932421057]
We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality geometries and textures.
We propose GETAvatar, a Generative model that directly generates Explicit Textured 3D meshes for animatable human Avatars.
arXiv Detail & Related papers (2023-10-04T10:30:24Z)
- MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge [70.47759528596711]
We introduce MineDojo, a new framework built on the popular Minecraft game.
We propose a novel agent learning algorithm that leverages large pre-trained video-language models as a learned reward function.
Our agent is able to solve a variety of open-ended tasks specified in free-form language without any manually designed dense shaping reward.
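A hedged sketch of the general idea of a video-language reward, not MineDojo's actual reward model: embed the agent's recent frames and the free-form task description, and use their similarity as the reward. Both encoders below are stubs for a large pretrained video-language model.

```python
# Hypothetical sketch: a learned reward from video-text similarity.
# Encoders are stubs standing in for a large pretrained video-language model.
import torch
import torch.nn as nn

EMBED = 512

class VideoEncoderStub(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(3 * 64 * 64, EMBED)
    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, 3, 64, 64) recent gameplay clip -> one clip embedding
        return self.proj(frames.flatten(1)).mean(dim=0)

def text_encoder_stub(task: str) -> torch.Tensor:
    # Deterministic stand-in for a language encoder.
    gen = torch.Generator().manual_seed(abs(hash(task)) % (2**31))
    return torch.randn(EMBED, generator=gen)

video_encoder = VideoEncoderStub()

def language_reward(frames: torch.Tensor, task: str) -> float:
    """Reward = similarity between the clip and the free-form task description."""
    with torch.no_grad():
        v = video_encoder(frames)
        t = text_encoder_stub(task)
        return torch.cosine_similarity(v, t, dim=0).item()

clip = torch.randn(16, 3, 64, 64)  # last 16 observed frames
r = language_reward(clip, "shear a sheep with shears")
```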
arXiv Detail & Related papers (2022-06-17T15:53:05Z)
- Local and Global GANs with Semantic-Aware Upsampling for Image Generation [201.39323496042527]
We consider generating images using local context.
We propose a class-specific generative network using semantic maps as guidance.
Lastly, we propose a novel semantic-aware upsampling method.
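As a generic, hypothetical illustration of class-specific generation guided by a semantic map, not the paper's architecture or its semantic-aware upsampling, the sketch below runs one small generator per semantic class, masks each output by that class's region, and composes the results.

```python
# Hypothetical sketch of class-specific generation guided by a semantic map.
# One tiny generator per class; outputs are composed with the class masks.
import torch
import torch.nn as nn

NUM_CLASSES, H, W = 5, 64, 64

class ClassSpecificGenerator(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.generators = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(num_classes, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )
            for _ in range(num_classes)
        )

    def forward(self, semantic_map: torch.Tensor) -> torch.Tensor:
        # semantic_map: (B, NUM_CLASSES, H, W) one-hot layout used as guidance
        out = torch.zeros(semantic_map.size(0), 3, H, W)
        for cls, gen in enumerate(self.generators):
            mask = semantic_map[:, cls : cls + 1]     # (B, 1, H, W) region of this class
            out = out + gen(semantic_map) * mask      # keep this generator's output inside its region
        return torch.tanh(out)

layout = torch.zeros(1, NUM_CLASSES, H, W)
layout[:, 0, :32], layout[:, 1, 32:] = 1.0, 1.0       # toy two-region semantic map
image = ClassSpecificGenerator(NUM_CLASSES)(layout)   # (1, 3, 64, 64)
```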
arXiv Detail & Related papers (2022-02-28T19:24:25Z)
- GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds [29.533111314655788]
We present GANcraft, an unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds.
Our method takes a semantic block world as input, where each block is assigned a semantic label such as dirt, grass, or water.
In the absence of paired ground truth real images for the block world, we devise a training technique based on pseudo-ground truth and adversarial training.
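A highly simplified, hypothetical sketch of that training signal, not GANcraft's implementation: the renderer's output is pulled toward a pseudo-ground-truth image produced by a frozen image-to-image model from the same semantic view, and simultaneously trained adversarially against unpaired real photos. Every module below is a stub.

```python
# Hypothetical sketch: reconstruction against pseudo-ground truth + adversarial loss.
# renderer / pseudo_gt_model / discriminator are stubs, not GANcraft's networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

C, H, W = 8, 64, 64  # semantic classes, image size

renderer = nn.Conv2d(C, 3, 3, padding=1)         # stub neural renderer: semantics -> RGB
pseudo_gt_model = nn.Conv2d(C, 3, 3, padding=1)  # stub pretrained image-to-image model (frozen)
discriminator = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                              nn.Flatten(), nn.Linear(16 * 32 * 32, 1))

opt_g = torch.optim.Adam(renderer.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(3):
    semantics = torch.randn(2, C, H, W)          # projected semantic labels of a block-world view
    real_photo = torch.rand(2, 3, H, W)          # unpaired real landscape photos

    with torch.no_grad():
        pseudo_gt = pseudo_gt_model(semantics)   # pseudo-ground-truth image for this view

    # Generator step: match the pseudo-GT and fool the discriminator.
    fake = renderer(semantics)
    g_loss = F.l1_loss(fake, pseudo_gt) + bce(discriminator(fake), torch.ones(2, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Discriminator step: real photos vs. rendered images.
    d_loss = bce(discriminator(real_photo), torch.ones(2, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(2, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
```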
arXiv Detail & Related papers (2021-04-15T17:59:38Z)
- EvoCraft: A New Challenge for Open-Endedness [7.927206441149002]
EvoCraft is a framework for Minecraft designed to study open-ended algorithms.
EvoCraft offers a challenging new environment for automated search methods (such as evolution) to find complex artifacts.
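The sketch below shows a generic evolutionary loop over small block grids in the spirit of such automated search; the `place_structure` function and the toy fitness are hypothetical placeholders, not EvoCraft's real API or objectives.

```python
# Hypothetical sketch of an evolutionary search over small block structures.
# `place_structure` and `fitness` are placeholders, not EvoCraft's real API.
import random

AIR, STONE, TORCH = 0, 1, 2          # toy block vocabulary
SIZE, POP, GENERATIONS = 5, 20, 50   # structure side length, population size, iterations

def random_structure():
    return [[[random.choice([AIR, STONE, TORCH]) for _ in range(SIZE)]
             for _ in range(SIZE)] for _ in range(SIZE)]

def mutate(structure, rate=0.05):
    return [[[random.choice([AIR, STONE, TORCH]) if random.random() < rate else b
              for b in row] for row in layer] for layer in structure]

def fitness(structure) -> float:
    """Toy objective: reward torches placed on top of stone."""
    score = 0
    for x in range(SIZE):
        for y in range(1, SIZE):
            for z in range(SIZE):
                if structure[x][y][z] == TORCH and structure[x][y - 1][z] == STONE:
                    score += 1
    return score

def place_structure(structure, origin=(0, 64, 0)):
    """Placeholder for sending the blocks to a running Minecraft server."""
    pass

population = [random_structure() for _ in range(POP)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 4]
    population = parents + [mutate(random.choice(parents)) for _ in range(POP - len(parents))]

place_structure(max(population, key=fitness))   # build the best artifact found
```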
arXiv Detail & Related papers (2020-12-08T21:36:18Z)
- Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation [135.4660201856059]
We consider learning the scene generation in a local context, and design a local class-specific generative network with semantic maps as guidance.
To learn more discriminative class-specific feature representations for the local generation, a novel classification module is also proposed.
Experiments on two scene image generation tasks show superior generation performance of the proposed model.
arXiv Detail & Related papers (2019-12-27T16:14:53Z)