TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling
- URL: http://arxiv.org/abs/2408.01291v1
- Date: Fri, 2 Aug 2024 14:24:40 GMT
- Title: TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling
- Authors: Dong Huo, Zixin Guo, Xinxin Zuo, Zhihao Shi, Juwei Lu, Peng Dai, Songcen Xu, Li Cheng, Yee-Hong Yang
- Abstract summary: We present TexGen, a novel multi-view sampling and resampling framework for texture generation.
Our proposed method produces significantly better texture quality for diverse 3D objects with a high degree of view consistency.
Our proposed texture generation technique can also be applied to texture editing while preserving the original identity.
- Score: 37.67373829836975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a 3D mesh, we aim to synthesize 3D textures that correspond to arbitrary textual descriptions. Current methods for generating and assembling textures from sampled views often result in prominent seams or excessive smoothing. To tackle these issues, we present TexGen, a novel multi-view sampling and resampling framework for texture generation leveraging a pre-trained text-to-image diffusion model. For view-consistent sampling, we first maintain a texture map in RGB space that is parameterized by the denoising step and updated after each sampling step of the diffusion model to progressively reduce the view discrepancy. An attention-guided multi-view sampling strategy is employed to broadcast appearance information across views. To preserve texture details, we develop a noise resampling technique that aids in the estimation of noise, generating inputs for subsequent denoising steps, as directed by the text prompt and current texture map. Through extensive qualitative and quantitative evaluations, we demonstrate that our method produces significantly better texture quality for diverse 3D objects with a high degree of view consistency and rich appearance details, outperforming current state-of-the-art methods. Furthermore, our texture generation technique can also be applied to texture editing while preserving the original identity. More experimental results are available at https://dong-huo.github.io/TexGen/
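As a rough sketch of the sampling-and-resampling loop described in the abstract, the Python fragment below keeps an RGB texture map that is re-baked after every denoising step and re-rendered to produce view-consistent inputs for the next step. The hooks denoise_step, bake_to_texture, and render_views are hypothetical placeholders standing in for the pre-trained diffusion model and the mesh rasterizer, and the update rule at the end is an illustrative assumption, not the authors' exact resampling formula.

    import torch

    def texgen_sampling_loop(mesh, prompt, denoise_step, bake_to_texture,
                             render_views, num_steps=50, num_views=8):
        # Texture map maintained in RGB space, updated at every denoising
        # step to progressively reduce the discrepancy between views.
        texture = torch.rand(1, 3, 1024, 1024)
        # Per-view images start from pure noise.
        x_t = torch.randn(num_views, 3, 512, 512)
        for t in reversed(range(num_steps)):
            # Sampling: one text-conditioned denoising step per view.
            x0_pred, eps_pred = denoise_step(x_t, t, prompt)
            # Bake the per-view predictions into one shared texture so
            # appearance information is broadcast across all views.
            texture = bake_to_texture(mesh, x0_pred)
            # Resampling: re-render the now-consistent texture and
            # re-estimate the input for the next denoising step.
            views = render_views(mesh, texture, num_views)
            alpha = t / num_steps  # illustrative schedule, not the paper's
            x_t = (1 - alpha) * views + alpha * eps_pred
        return texture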
Related papers
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
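The score aggregation strategy can be pictured as denoising overlapping tiles of one large canvas and averaging the per-tile noise predictions where tiles overlap; the tiling-and-averaging sketch below is an assumption about the mechanism, not the released implementation.

    import torch

    def aggregate_tile_scores(canvas, tile_scores, tile_coords, tile=512):
        # canvas:      (C, H, W) large image/latent being denoised.
        # tile_scores: list of (C, tile, tile) per-tile model predictions.
        # tile_coords: list of (top, left) positions of each tile.
        acc = torch.zeros_like(canvas)
        weight = torch.zeros_like(canvas)
        for score, (top, left) in zip(tile_scores, tile_coords):
            acc[:, top:top + tile, left:left + tile] += score
            weight[:, top:top + tile, left:left + tile] += 1.0
        # Average where tiles overlap; clamp guards uncovered regions.
        return acc / weight.clamp(min=1.0)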
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- GenesisTex: Adapting Image Denoising Diffusion to Texture Space [15.907134430301133]
GenesisTex is a novel method for synthesizing textures for 3D geometries from text descriptions.
We maintain a latent texture map for each viewpoint, which is updated with predicted noise on the rendering of the corresponding viewpoint.
Global consistency is achieved through the integration of style consistency mechanisms within the noise prediction network.
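A minimal sketch of the per-viewpoint update described above, assuming a hypothetical predict_noise hook around the noise prediction network and an illustrative blend factor in place of the paper's actual update rule:

    import torch

    def update_latent_textures(latent_textures, renders, predict_noise, t,
                               blend=0.5):
        # latent_textures: dict view_id -> (C, H, W) latent texture map.
        # renders:         dict view_id -> (C, H, W) rendering of that view.
        # predict_noise:   assumed hook around the noise prediction network.
        for view_id, tex in latent_textures.items():
            eps = predict_noise(renders[view_id], t)
            # Update this view's latent texture with the predicted noise;
            # the blend factor stands in for the paper's actual rule.
            latent_textures[view_id] = tex - blend * eps
        return latent_textures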
arXiv Detail & Related papers (2024-03-26T15:15:15Z)
- TexRO: Generating Delicate Textures of 3D Models by Recursive Optimization [54.59133974444805]
TexRO is a novel method for generating delicate textures of a known 3D mesh by optimizing its UV texture.
We demonstrate the superior performance of TexRO in terms of texture quality, detail preservation, visual consistency, and, notably, runtime speed.
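Recursive optimization of a UV texture can be sketched as a coarse-to-fine loop that upsamples the current estimate and refines it against target renders at each level; the levels, loss, and optimizer below are illustrative assumptions rather than TexRO's actual procedure.

    import torch
    import torch.nn.functional as F

    def coarse_to_fine_uv_optimization(render_fn, target_views,
                                       levels=(256, 512, 1024), steps=100):
        # render_fn(texture) must return differentiable renders that are
        # comparable to target_views.
        texture = torch.rand(1, 3, levels[0], levels[0])
        for size in levels:
            # Upsample the previous level's result to the next resolution.
            texture = F.interpolate(texture.detach(), size=(size, size),
                                    mode="bilinear", align_corners=False)
            texture.requires_grad_(True)
            opt = torch.optim.Adam([texture], lr=1e-2)
            for _ in range(steps):
                opt.zero_grad()
                loss = F.mse_loss(render_fn(texture), target_views)
                loss.backward()
                opt.step()
        return texture.detach()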
arXiv Detail & Related papers (2024-03-22T07:45:51Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors [49.03627933561738]
SceneTex is a novel method for generating high-quality and style-consistent textures for indoor scenes using depth-to-image diffusion priors.
SceneTex enables diverse and accurate texture synthesis for 3D-FRONT scenes, demonstrating significant improvements in visual quality and prompt fidelity over prior texture generation methods.
arXiv Detail & Related papers (2023-11-28T22:49:57Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D assets using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, applying the denoiser on a set of 2D renders of the 3D object and aggregating the denoising predictions on a shared latent texture map.
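A minimal sketch of that aggregation step, assuming precomputed UV correspondences (uv_idx) that map every pixel of every render to a texel of the shared latent texture map; the averaging rule is an assumption, not the paper's exact operator:

    import torch

    def aggregate_views_to_texture(view_preds, uv_idx, tex_size=64):
        # view_preds: (V, C, H, W) denoised latents of V rendered views.
        # uv_idx:     (V, H, W) long tensor of flat texel indices, assumed
        #             precomputed from the mesh's UV parameterization.
        V, C, H, W = view_preds.shape
        tex = torch.zeros(C, tex_size * tex_size)
        count = torch.zeros(tex_size * tex_size)
        for v in range(V):
            idx = uv_idx[v].reshape(-1)
            tex.index_add_(1, idx, view_preds[v].reshape(C, -1))
            count.index_add_(0, idx, torch.ones_like(idx, dtype=torch.float32))
        # Average the overlapping per-view predictions on each texel.
        tex = tex / count.clamp(min=1.0)
        return tex.reshape(C, tex_size, tex_size)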
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- Text-guided High-definition Consistency Texture Model [0.0]
We present the High-definition Consistency Texture Model (HCTM), a novel method that can generate high-definition textures for 3D meshes according to the text prompts.
We achieve this by leveraging a pre-trained depth-to-image diffusion model to generate single viewpoint results based on the text prompt and a depth map.
Our proposed approach has demonstrated promising results in generating high-definition and consistent textures for 3D meshes.
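A single-viewpoint result of this kind can be approximated with an off-the-shelf depth-conditioned pipeline; the snippet below uses Hugging Face diffusers' StableDiffusionDepth2ImgPipeline purely as a stand-in, since the paper does not state which depth-to-image model it uses, and the file names (render.png, depth.pt) and prompt are hypothetical.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionDepth2ImgPipeline

    # Depth-conditioned Stable Diffusion, used here only as a stand-in
    # for the paper's pre-trained depth-to-image diffusion model.
    pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
    ).to("cuda")

    init_image = Image.open("render.png")  # hypothetical viewpoint render
    depth = torch.load("depth.pt")         # hypothetical depth map tensor

    result = pipe(
        prompt="a rusty medieval iron helmet",
        image=init_image,
        depth_map=depth,   # condition generation on the mesh's depth
        strength=1.0,      # fully re-noise the init image
    ).images[0]
    result.save("textured_view.png")

With strength set to 1.0 the init image's colors are fully re-noised, so the output is driven by the prompt and the depth conditioning alone, matching the single-viewpoint generation described above.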
arXiv Detail & Related papers (2023-05-10T05:09:05Z)
- Text2Tex: Text-driven Texture Synthesis via Diffusion Models [31.773823357617093]
We present Text2Tex, a novel method for generating high-quality textures for 3D meshes from text prompts.
Our method incorporates inpainting into a pre-trained depth-aware image diffusion model to progressively synthesize high resolution partial textures from multiple viewpoints.
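The progressive scheme can be sketched as inpainting, viewpoint by viewpoint, only the texels that each new view sees for the first time; views, coverage, and the inpaint_fn hook below are hypothetical names for illustration:

    import torch

    def progressive_texturing(views, coverage, inpaint_fn):
        # views:      list of dicts with a 'render' (3, H, W) tensor and a
        #             boolean 'visible' (H, W) mask of texels seen there.
        # coverage:   boolean (H, W) mask of already-textured texels.
        # inpaint_fn: assumed hook around a depth-aware inpainting
        #             diffusion model: inpaint_fn(render, mask) -> render.
        for view in views:
            # Inpaint only the texels this view sees for the first time.
            new_region = view["visible"] & ~coverage
            if not new_region.any():
                continue
            view["completed"] = inpaint_fn(view["render"], new_region)
            coverage = coverage | new_region
        return coverage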
arXiv Detail & Related papers (2023-03-20T19:02:13Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
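The trimap mentioned here partitions every rendered view into regions to generate, refine, or keep; the quality proxy and threshold below are illustrative assumptions rather than TEXTure's exact criteria:

    import torch

    def compute_trimap(painted_mask, view_quality, refine_thresh=0.5):
        # painted_mask: boolean (H, W), True where a texel was already
        #               textured from an earlier viewpoint.
        # view_quality: float (H, W) in [0, 1], e.g. how frontally the
        #               current view sees each texel (assumed proxy).
        # Returns an int8 map: 0 = generate, 1 = refine, 2 = keep.
        trimap = torch.zeros_like(view_quality, dtype=torch.int8)
        trimap[painted_mask & (view_quality > refine_thresh)] = 1
        trimap[painted_mask & (view_quality <= refine_thresh)] = 2
        return trimap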
arXiv Detail & Related papers (2023-02-03T13:18:45Z)