TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion
- URL: http://arxiv.org/abs/2401.09416v1
- Date: Wed, 17 Jan 2024 18:55:49 GMT
- Title: TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion
- Authors: Yu-Ying Yeh, Jia-Bin Huang, Changil Kim, Lei Xiao, Thu Nguyen-Phuoc,
Numair Khan, Cheng Zhang, Manmohan Chandraker, Carl S Marshall, Zhao Dong,
Zhengqin Li
- Abstract summary: TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
- Score: 64.49276500129092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present TextureDreamer, a novel image-guided texture synthesis method to
transfer relightable textures from a small number of input images (3 to 5) to
target 3D shapes across arbitrary categories. Texture creation is a pivotal
challenge in vision and graphics. Industrial companies hire experienced artists
to manually craft textures for 3D assets. Classical methods require densely
sampled views and accurately aligned geometry, while learning-based methods are
confined to category-specific shapes within the dataset. In contrast,
TextureDreamer can transfer highly detailed, intricate textures from real-world
environments to arbitrary objects with only a few casually captured images,
potentially significantly democratizing texture creation. Our core idea,
personalized geometry-aware score distillation (PGSD), draws inspiration from
recent advancements in diffusion models, including personalized modeling for
texture information extraction, variational score distillation for detailed
appearance synthesis, and explicit geometry guidance with ControlNet. Our
integration and several essential modifications substantially improve the
texture quality. Experiments on real images spanning different categories show
that TextureDreamer can successfully transfer highly realistic, semantically
meaningful textures to arbitrary objects, surpassing the visual quality of
previous state-of-the-art.
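The abstract does not spell out the optimization loop, but the ingredients it names (a personalized diffusion prior, variational score distillation, and ControlNet geometry conditioning) suggest a structure like the minimal PyTorch sketch below. All components are hypothetical stand-ins, not the authors' implementation: `render` replaces the differentiable renderer of the textured shape, `score_model` the personalized geometry-conditioned UNet, and `lora_model` the LoRA network that variational score distillation fine-tunes.

```python
import torch

# Toy texture parameters standing in for the relightable texture
# representation that TextureDreamer actually optimizes.
texture = torch.randn(1, 3, 256, 256, requires_grad=True)
opt = torch.optim.Adam([texture], lr=1e-2)

def render(tex):
    # Placeholder for a differentiable render of the textured mesh from a
    # random viewpoint; identity here just to keep the sketch runnable.
    return tex * 1.0

def score_model(x_t, t, geometry):
    # Placeholder for the personalized UNet with ControlNet geometry
    # conditioning (e.g. normal/depth renders of the target shape).
    return 0.1 * x_t + 0.0 * geometry

def lora_model(x_t, t, geometry):
    # Placeholder for the LoRA network that variational score distillation
    # trains to track the score of the current rendering distribution.
    return 0.05 * x_t + 0.0 * geometry

geometry = torch.zeros(1, 3, 256, 256)  # e.g. a rendered normal map

for step in range(100):
    img = render(texture)
    t = torch.randint(1, 1000, (1,))
    noise = torch.randn_like(img)
    alpha = 1.0 - t.float() / 1000.0                     # toy noise schedule
    x_t = alpha.sqrt() * img + (1.0 - alpha).sqrt() * noise
    with torch.no_grad():
        # VSD-style update: personalized pretrained score minus the score of
        # the LoRA model; the difference is injected as the image gradient.
        grad = score_model(x_t, t, geometry) - lora_model(x_t, t, geometry)
    opt.zero_grad()
    img.backward(gradient=grad)
    opt.step()
```

In a real system the LoRA model would also be trained online on the current renders, and the gradient would carry the usual timestep-dependent weighting; both are omitted here for brevity.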
Related papers
- Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects [54.80813150893719]
We introduce Meta 3D TextureGen: a new feedforward method comprised of two sequential networks aimed at generating high-quality textures in less than 20 seconds.
Our method achieves state-of-the-art results in quality and speed by conditioning a text-to-image model on 3D semantics in 2D space and fusing them into a complete and high-resolution UV texture map.
In addition, we introduce a texture enhancement network that is capable of up-scaling any texture by an arbitrary ratio, producing 4k pixel resolution textures.
arXiv Detail & Related papers (2024-07-02T17:04:34Z)
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
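The blurb above leaves the score aggregation strategy unspecified; one common realization (MultiDiffusion-style averaging) lets a model trained at a fixed tile size denoise an arbitrarily large canvas by averaging the noise predictions of overlapping tiles. A minimal sketch under that assumption, with `denoise_tile` as a hypothetical stand-in for the fine-tuned texture diffusion model:

```python
import torch

def aggregate_scores(x_t, denoise_tile, tile=64, stride=32):
    """Average overlapping tile-wise noise predictions over a large canvas."""
    _, _, H, W = x_t.shape
    eps = torch.zeros_like(x_t)
    count = torch.zeros_like(x_t)
    for top in range(0, H - tile + 1, stride):
        for left in range(0, W - tile + 1, stride):
            patch = x_t[:, :, top:top + tile, left:left + tile]
            eps[:, :, top:top + tile, left:left + tile] += denoise_tile(patch)
            count[:, :, top:top + tile, left:left + tile] += 1.0
    return eps / count.clamp(min=1.0)  # per-pixel averaged noise estimate

# Toy usage: a 256x256 canvas denoised by a model that only sees 64x64 tiles.
x = torch.randn(1, 3, 256, 256)
eps_hat = aggregate_scores(x, denoise_tile=lambda p: 0.1 * p)
```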
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present TexFusion, a new method to synthesize textures for given 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoising model on a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
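A rough sketch of the aggregation idea the TexFusion summary describes: run the denoiser on several 2D latent renders and average the per-view predictions back into one shared latent texture map. The UV correspondences (`uv_indices`) and `denoise` function here are hypothetical stand-ins for rasterized texel lookups and the latent diffusion model:

```python
import torch

def aggregate_on_texture(views, uv_indices, denoise, tex_hw=(128, 128), channels=4):
    # views: list of (1, C, h, w) noisy latent renders of the object
    # uv_indices: per view, a LongTensor of h*w flattened texel indices
    tex = torch.zeros(channels, tex_hw[0] * tex_hw[1])
    weight = torch.zeros(1, tex_hw[0] * tex_hw[1])
    for x_t, idx in zip(views, uv_indices):
        eps = denoise(x_t)                    # per-view noise prediction
        flat = eps.reshape(channels, -1)      # (C, h*w)
        tex.index_add_(1, idx, flat)          # scatter-add onto texels
        weight.index_add_(1, idx, torch.ones(1, idx.numel()))
    return (tex / weight.clamp(min=1.0)).reshape(channels, *tex_hw)

# Toy usage: two 32x32 views hitting random texels of a 128x128 latent texture.
views = [torch.randn(1, 4, 32, 32) for _ in range(2)]
uvs = [torch.randint(0, 128 * 128, (32 * 32,)) for _ in range(2)]
tex = aggregate_on_texture(views, uvs, denoise=lambda v: 0.1 * v)
```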
- TwinTex: Geometry-aware Texture Generation for Abstracted 3D Architectural Models [13.248386665044087]
We present TwinTex, the first automatic texture mapping framework to generate a photo-realistic texture for a piece-wise planar proxy.
Our approach surpasses state-of-the-art texture mapping methods in terms of high-fidelity quality and reaches a human-expert production level with much less effort.
arXiv Detail & Related papers (2023-09-20T12:33:53Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Mesh2Tex: Generating Mesh Textures from Image Queries [45.32242590651395]
We present Mesh2Tex, which learns a realistic object texture manifold from uncorrelated collections of 3D object geometry and photorealistic RGB images.
In particular, Mesh2Tex can generate textures for an object mesh that match real image observations.
arXiv Detail & Related papers (2023-04-12T13:58:25Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless textures from different views.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- Projective Urban Texturing [8.349665441428925]
We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
arXiv Detail & Related papers (2022-01-25T14:56:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.