TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion
Models
- URL: http://arxiv.org/abs/2310.13772v1
- Date: Fri, 20 Oct 2023 19:15:29 GMT
- Title: TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion
Models
- Authors: Tianshi Cao, Karsten Kreis, Sanja Fidler, Nicholas Sharp, Kangxue Yin
- Abstract summary: We present a new method to synthesize textures for given 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the diffusion model's denoiser to a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map.
- Score: 77.85129451435704
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present TexFusion (Texture Diffusion), a new method to synthesize textures
for given 3D geometries, using large-scale text-guided image diffusion models.
In contrast to recent works that leverage 2D text-to-image diffusion models to
distill 3D objects using a slow and fragile optimization process, TexFusion
introduces a new 3D-consistent generation technique specifically designed for
texture synthesis that employs regular diffusion model sampling on different 2D
rendered views. Specifically, we leverage latent diffusion models, apply the
diffusion model's denoiser on a set of 2D renders of the 3D object, and
aggregate the different denoising predictions on a shared latent texture map.
Final output RGB textures are produced by optimizing an intermediate neural
color field on the decodings of 2D renders of the latent texture. We thoroughly
validate TexFusion and show that we can efficiently generate diverse, high
quality and globally coherent textures. We achieve state-of-the-art text-guided
texture synthesis performance using only image diffusion models, while avoiding
the pitfalls of previous distillation-based methods. The text-conditioning
offers detailed control and we also do not rely on any ground truth 3D textures
for training. This makes our method versatile and applicable to a broad range
of geometry and texture types. We hope that TexFusion will advance AI-based
texturing of 3D assets for applications in virtual reality, game design,
simulation, and more.
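
The generation loop described in the abstract (denoise each 2D render, then fuse the per-view predictions on a shared latent texture map) can be summarized in a short sketch. The following is a minimal illustration only, assuming a PyTorch-style interface; the callables `render_latent_texture`, `denoise_view`, and `view_to_uv` are hypothetical placeholders rather than the authors' actual code, and the final latent decoding plus neural color field optimization step is omitted.

```python
# Hedged sketch: one denoising step fused across several camera views on a
# shared latent texture map, following the high-level description in the abstract.
import torch

def aggregate_views_onto_texture(latent_texture, views, denoise_view,
                                 render_latent_texture, view_to_uv, t):
    """latent_texture: (C, H, W) latent UV texture at noise level t.
    views: list of camera parameters.
    denoise_view: denoiser of a latent diffusion model applied to one render.
    view_to_uv: back-projects per-pixel values of a view into UV space,
                returning (values, weights) with texture-map shape."""
    accum = torch.zeros_like(latent_texture)
    weight = torch.zeros_like(latent_texture[:1])
    for cam in views:
        rendered = render_latent_texture(latent_texture, cam)  # latent render of the object
        pred = denoise_view(rendered, t)                        # per-view denoising prediction
        vals, w = view_to_uv(pred, cam)                         # scatter prediction back to UV space
        accum += vals
        weight += w
    # Average overlapping per-view predictions; keep old values where no view sees the surface.
    seen = weight > 0
    out = latent_texture.clone()
    out[:, seen[0]] = (accum / weight.clamp(min=1e-8))[:, seen[0]]
    return out
```

Averaging overlapping predictions in the shared UV space is what keeps the different views consistent with each other at every noise level, which is the stated alternative to slow, per-asset distillation optimization.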
Related papers
- Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models [54.35214051961381]
3D meshes are widely used in computer vision and graphics for their efficiency in animation and minimal memory use in movies, games, AR, and VR.
However, creating temporally consistent and realistic textures for meshes remains labor-intensive for professional artists.
We present Tex4D, which integrates the inherent geometry of mesh sequences with video diffusion models to produce temporally consistent textures.
arXiv Detail & Related papers (2024-10-14T17:59:59Z) - Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors [17.544733016978928]
3D object generation from a single image involves estimating the full 3D geometry and texture of unseen views from an unposed RGB image captured in the wild.
Recent advancements in 3D object generation have introduced techniques that reconstruct an object's 3D shape and texture.
We propose bridging the gap between 2D and 3D diffusion models to address this limitation.
arXiv Detail & Related papers (2024-10-12T10:14:11Z) - VCD-Texture: Variance Alignment based 3D-2D Co-Denoising for Text-Guided Texturing [22.39760469467524]
We propose a variance-alignment-based 3D-2D co-denoising texture synthesis method to address the modal gap between 2D and 3D diffusion models.
We also present an inpainting module to refine details in conflicting regions.
arXiv Detail & Related papers (2024-07-05T12:11:33Z) - Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z) - EucliDreamer: Fast and High-Quality Texturing for 3D Models with Depth-Conditioned Stable Diffusion [5.158983929861116]
We present EucliDreamer, a simple and effective method to generate textures for 3D models given text prompts.
The texture is parameterized as an implicit function on the 3D surface, which is optimized with the Score Distillation Sampling (SDS) process and differentiable rendering.
arXiv Detail & Related papers (2024-04-16T04:44:16Z) - ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image
Collections [71.46546520120162]
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
arXiv Detail & Related papers (2023-06-07T17:47:50Z) - TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z) - DreamFusion: Text-to-3D using 2D Diffusion [52.52529213936283]
Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs.
In this work, we circumvent these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis.
Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.
arXiv Detail & Related papers (2022-09-29T17:50:40Z)
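
For contrast with TexFusion's sampling-based approach, the distillation pipeline used by DreamFusion (and by depth-conditioned variants such as EucliDreamer above) optimizes 3D parameters through Score Distillation Sampling (SDS). Below is a minimal, hedged sketch of one SDS update under common assumptions; `renderer`, `unet_eps`, and the weighting `w` are illustrative placeholders, not any specific implementation.

```python
# Hedged sketch of one Score Distillation Sampling (SDS) step with a frozen
# 2D text-to-image diffusion prior; all names are hypothetical placeholders.
import torch

def sds_grad_step(params, renderer, unet_eps, text_emb, alphas_cumprod, optimizer):
    """One SDS update of 3D parameters using a frozen 2D diffusion prior."""
    image = renderer(params)                               # differentiable render, (B, C, H, W) in [0, 1]
    x0 = image * 2.0 - 1.0                                  # map to the diffusion model's [-1, 1] range
    t = torch.randint(20, 980, (x0.shape[0],), device=x0.device)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_t.sqrt() * x0 + (1.0 - a_t).sqrt() * noise      # forward-diffuse the render to level t
    with torch.no_grad():
        eps_pred = unet_eps(x_t, t, text_emb)               # frozen denoiser's noise prediction
    w = 1.0 - a_t                                           # a common weighting choice (an assumption)
    grad = w * (eps_pred - noise)                           # SDS gradient w.r.t. the rendered image
    optimizer.zero_grad()
    # Inject the gradient through the differentiable renderer without
    # backpropagating through the diffusion U-Net itself.
    x0.backward(gradient=grad)
    optimizer.step()
```

Because this loop must be run to convergence for every asset and the gradient is only an approximation of the image prior, such distillation can be slow and fragile, which is the limitation TexFusion's direct multi-view sampling is designed to avoid.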