Minecraft-ify: Minecraft Style Image Generation with Text-guided Image
Editing for In-Game Application
- URL: http://arxiv.org/abs/2402.05448v2
- Date: Sun, 3 Mar 2024 10:02:54 GMT
- Title: Minecraft-ify: Minecraft Style Image Generation with Text-guided Image
Editing for In-Game Application
- Authors: Bumsoo Kim, Sanghyun Byun, Yonghoon Jung, Wonseop Shin, Sareer Ul
Amin, Sanghyun Seo
- Abstract summary: The system generates face-focused images for texture mapping, tailored to 3D virtual characters with a cube-manifold surface.
Results can be manipulated with text guidance using StyleGAN and StyleCLIP.
- Score: 5.431779602239565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present \textit{Minecraft-ify}, a character texture
generation system tailored to the Minecraft video game for in-game application.
The system generates face-focused images for texture mapping onto 3D virtual
characters with a cube-manifold surface. While existing projects and prior work
only generate textures, the proposed system can also invert a user-provided
real image, or generate an average or random appearance from the learned
distribution. Moreover, results can be manipulated with text guidance using
StyleGAN and StyleCLIP. These features provide an extended user experience with
greater freedom, as a user-friendly AI tool. The project page can be found at
https://gh-bumsookim.github.io/Minecraft-ify/
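As a rough illustration of the pipeline the abstract describes, below is a minimal sketch of CLIP-guided latent editing in the spirit of StyleCLIP, assuming a pretrained StyleGAN2 generator G and OpenAI's public clip package; the generator interface, the prompt, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of CLIP-guided latent optimization in the spirit of StyleCLIP.
# `G` stands in for a pretrained StyleGAN2 generator mapping a latent to an
# image in [-1, 1]; everything here is an illustrative assumption.
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()  # fp32 so autograd flows cleanly

def edit_latent(G, w_init, prompt="a Minecraft-style face",
                steps=200, lr=0.05, lambda_l2=0.005):
    """Optimize a latent so G(w) matches the text prompt under CLIP."""
    tokens = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        t = clip_model.encode_text(tokens)
        t = t / t.norm(dim=-1, keepdim=True)
    w = w_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img = G(w)                                    # (1, 3, H, W) in [-1, 1]
        img = F.interpolate((img + 1) / 2, size=224,  # CLIP input resolution
                            mode="bilinear", align_corners=False)
        # NOTE: CLIP's mean/std normalization is omitted for brevity.
        i = clip_model.encode_image(img)
        i = i / i.norm(dim=-1, keepdim=True)
        loss = 1 - (i * t).sum() + lambda_l2 * ((w - w_init) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach()
```

Since Minecraft characters use a fixed cube-map skin layout, a generated face ultimately has to be written into the skin texture. A companion sketch of that mapping, assuming the standard 64x64 skin format in which the head's front face is the 8x8 pixel block at (8, 8); the file names are hypothetical.

```python
# Companion sketch: write a generated face into a standard 64x64 Minecraft
# skin, whose head-front region is the 8x8 pixel block at (8, 8).
from PIL import Image

def apply_face_to_skin(face_path, skin_path, out_path):
    """Downsample a generated face and paste it into the head-front region."""
    face = Image.open(face_path).convert("RGBA")
    skin = Image.open(skin_path).convert("RGBA")
    skin.paste(face.resize((8, 8), Image.LANCZOS), (8, 8))
    skin.save(out_path)

apply_face_to_skin("generated_face.png", "base_skin.png", "minecraftify_skin.png")
```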
Related papers
- Word2Minecraft: Generating 3D Game Levels through Large Language Models [6.037493811943889]
We present Word2Minecraft, a system that generates playable game levels in Minecraft based on structured stories.
We introduce a flexible framework that allows for the customization of story complexity, enabling dynamic level generation.
We show that GPT-4-Turbo outperforms GPT-4o-Mini in most areas, including story coherence and objective enjoyment.
arXiv Detail & Related papers (2025-03-18T18:38:38Z)
- SceneCraft: Layout-Guided 3D Scene Generation [29.713491313796084]
SceneCraft is a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences.
Our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality.
arXiv Detail & Related papers (2024-10-11T17:59:58Z)
- DreamMesh: Jointly Manipulating and Texturing Triangle Meshes for Text-to-3D Generation [149.77077125310805]
We present DreamMesh, a novel text-to-3D architecture that pivots on well-defined surfaces (triangle meshes) to generate high-fidelity explicit 3D models.
In the coarse stage, the mesh is first deformed by text-guided Jacobians and then DreamMesh textures the mesh with an interlaced use of 2D diffusion models.
In the fine stage, DreamMesh jointly manipulates the mesh and refines the texture map, leading to high-quality triangle meshes with high-fidelity textured materials.
arXiv Detail & Related papers (2024-09-11T17:59:02Z)
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- DreamCraft: Text-Guided Generation of Functional 3D Environments in Minecraft [19.9639990460142]
We present a method for generating functional 3D artifacts from free-form text prompts in the open-world game Minecraft.
Our method, DreamCraft, trains quantized Neural Radiance Fields (NeRFs) to represent artifacts that, when viewed in-game, match given text descriptions.
We show how this can be leveraged to generate 3D structures that match a target distribution or obey certain adjacency rules over the block types.
arXiv Detail & Related papers (2024-04-23T21:57:14Z)
- DragTex: Generative Point-Based Texture Editing on 3D Mesh [11.163205302136625]
We propose a generative point-based 3D mesh texture editing method called DragTex.
This method utilizes a diffusion model to blend locally inconsistent textures in the region near the deformed silhouette between different views.
We train LoRA using multi-view images instead of training each view individually, which significantly shortens the training time.
arXiv Detail & Related papers (2024-03-04T17:05:01Z)
- Towards High-Fidelity Text-Guided 3D Face Generation and Manipulation Using only Images [105.92311979305065]
TG-3DFace creates more realistic and aesthetically pleasing 3D faces, boosting multi-view consistency (MVIC) by 9% over Latent3D.
The face images rendered by TG-3DFace achieve higher FID and CLIP scores than text-to-2D face/image generation models.
arXiv Detail & Related papers (2023-08-31T14:26:33Z)
- TADA! Text to Animatable Digital Avatars [57.52707683788961]
TADA takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures.
We derive an optimizable high-resolution body model from SMPL-X with 3D displacements and a texture map.
We render normals and RGB images of the generated character and exploit their latent embeddings in the SDS training process.
arXiv Detail & Related papers (2023-08-21T17:59:10Z)
- TextMesh: Generation of Realistic 3D Meshes From Text Prompts [56.2832907275291]
We propose a novel method for generation of highly realistic-looking 3D meshes.
To this end, we extend NeRF to employ an SDF backbone, leading to improved 3D mesh extraction.
arXiv Detail & Related papers (2023-04-24T20:29:41Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- World-GAN: a Generative Model for Minecraft Worlds [27.221938979891384]
This work introduces World-GAN, the first method to perform data-driven Procedural Content Generation via Machine Learning in Minecraft.
Based on a 3D Generative Adversarial Network (GAN) architecture, we are able to create arbitrarily sized world snippets from a given sample.
arXiv Detail & Related papers (2021-06-18T14:45:39Z)
- MeInGame: Create a Game Character Face from a Single Portrait [15.432712351907012]
We propose an automatic character face creation method that predicts both facial shape and texture from a single portrait.
Experiments show that our method outperforms state-of-the-art methods used in games.
arXiv Detail & Related papers (2021-02-04T02:12:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.