Text2Tex: Text-driven Texture Synthesis via Diffusion Models
- URL: http://arxiv.org/abs/2303.11396v1
- Date: Mon, 20 Mar 2023 19:02:13 GMT
- Title: Text2Tex: Text-driven Texture Synthesis via Diffusion Models
- Authors: Dave Zhenyu Chen, Yawar Siddiqui, Hsin-Ying Lee, Sergey Tulyakov,
Matthias Nießner
- Abstract summary: We present Text2Tex, a novel method for generating high-quality textures for 3D meshes from text prompts.
Our method incorporates inpainting into a pre-trained depth-aware image diffusion model to progressively synthesize high resolution partial textures from multiple viewpoints.
- Score: 31.773823357617093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Text2Tex, a novel method for generating high-quality textures for
3D meshes from the given text prompts. Our method incorporates inpainting into
a pre-trained depth-aware image diffusion model to progressively synthesize
high resolution partial textures from multiple viewpoints. To avoid
accumulating inconsistent and stretched artifacts across views, we dynamically
segment the rendered view into a generation mask, which represents the
generation status of each visible texel. This partitioned view representation
guides the depth-aware inpainting model to generate and update partial textures
for the corresponding regions. Furthermore, we propose an automatic view
sequence generation scheme to determine the next best view for updating the
partial texture. Extensive experiments demonstrate that our method
significantly outperforms the existing text-driven approaches and GAN-based
methods.
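To make the pipeline described above concrete, the sketch below walks through the progressive, view-by-view texturing loop: render the partially textured mesh from a viewpoint, build a generation mask from the per-texel status, inpaint the masked region with a depth-aware diffusion model, and back-project the result into the texture atlas. This is a minimal sketch under stated assumptions; the helper callables (render, depth_inpaint, backproject) and the new/update/keep encoding are hypothetical placeholders, not the authors' actual implementation.

from typing import Callable, List, Tuple
import numpy as np

# Assumed per-pixel generation states for a rendered view (hypothetical encoding).
NEW, UPDATE, KEEP = 0, 1, 2

View = Tuple[float, float]  # (azimuth, elevation) in degrees

def texture_from_text(
    prompt: str,
    views: List[View],
    render: Callable[[View], Tuple[np.ndarray, np.ndarray, np.ndarray]],
    depth_inpaint: Callable[[np.ndarray, np.ndarray, np.ndarray, str], np.ndarray],
    backproject: Callable[[np.ndarray, np.ndarray, View], None],
) -> None:
    """Progressively paint the mesh texture one viewpoint at a time."""
    for view in views:
        # 1. Render the partially textured mesh: RGB image, depth map, and a
        #    per-pixel status map marking each visible texel as new/update/keep.
        rgb, depth, status = render(view)

        # 2. Generation mask: only re-synthesize pixels whose texels are not yet
        #    finalized, so content painted from earlier views is preserved.
        mask = status != KEEP

        # 3. Depth-aware inpainting fills the masked region, conditioned on the
        #    text prompt and the rendered depth.
        painted = depth_inpaint(rgb, depth, mask, prompt)

        # 4. Back-project the newly painted pixels into the UV texture atlas.
        backproject(painted, mask, view)

In the paper, the fixed view list above is followed by an automatic next-best-view scheme that keeps proposing refinement viewpoints for texels that remain unpainted; that selection step is omitted here for brevity.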
Related papers
- TexPro: Text-guided PBR Texturing with Procedural Material Modeling [23.8905505397344]
TexPro is a novel method for high-fidelity material generation for input 3D meshes given text prompts.
We first generate multi-view reference images given the input textual prompt by employing the latest text-to-image model.
We derive texture maps through a rendering-based optimization with recent differentiable procedural materials.
arXiv Detail & Related papers (2024-10-21T11:10:07Z)
- RoCoTex: A Robust Method for Consistent Texture Synthesis with Diffusion Models [3.714901836138171]
We propose a robust text-to-texture method for generating consistent and seamless textures that are well aligned with the mesh.
Our method leverages state-of-the-art 2D diffusion models, including SDXL and multiple ControlNets, to capture structural features and intricate details in the generated textures.
arXiv Detail & Related papers (2024-09-30T06:29:50Z)
- TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling [37.67373829836975]
We present TexGen, a novel multi-view sampling and resampling framework for texture generation.
Our proposed method produces significantly better texture quality for diverse 3D objects with a high degree of view consistency.
Our proposed texture generation technique can also be applied to texture editing while preserving the original identity.
arXiv Detail & Related papers (2024-08-02T14:24:40Z)
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- GenesisTex: Adapting Image Denoising Diffusion to Texture Space [15.907134430301133]
GenesisTex is a novel method for synthesizing textures for 3D geometries from text descriptions.
We maintain a latent texture map for each viewpoint, which is updated with predicted noise on the rendering of the corresponding viewpoint.
Global consistency is achieved through the integration of style consistency mechanisms within the noise prediction network.
arXiv Detail & Related papers (2024-03-26T15:15:15Z)
- TexRO: Generating Delicate Textures of 3D Models by Recursive Optimization [54.59133974444805]
TexRO is a novel method for generating delicate textures of a known 3D mesh by optimizing its UV texture.
We demonstrate the superior performance of TexRO in terms of texture quality, detail preservation, visual consistency, and, notably, runtime speed.
arXiv Detail & Related papers (2024-03-22T07:45:51Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors [49.03627933561738]
SceneTex is a novel method for generating high-quality and style-consistent textures for indoor scenes using depth-to-image diffusion priors.
SceneTex enables diverse and accurate texture synthesis for 3D-FRONT scenes, demonstrating significant improvements in visual quality and prompt fidelity over prior texture generation methods.
arXiv Detail & Related papers (2023-11-28T22:49:57Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoising model to a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning of the rendered views that guides the diffusion sampling process to generate seamless textures across viewpoints.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.