TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion
- URL: http://arxiv.org/abs/2401.09416v1
- Date: Wed, 17 Jan 2024 18:55:49 GMT
- Title: TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion
- Authors: Yu-Ying Yeh, Jia-Bin Huang, Changil Kim, Lei Xiao, Thu Nguyen-Phuoc,
Numair Khan, Cheng Zhang, Manmohan Chandraker, Carl S Marshall, Zhao Dong,
Zhengqin Li
- Abstract summary: TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
- Score: 64.49276500129092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present TextureDreamer, a novel image-guided texture synthesis method to
transfer relightable textures from a small number of input images (3 to 5) to
target 3D shapes across arbitrary categories. Texture creation is a pivotal
challenge in vision and graphics. Industrial companies hire experienced artists
to manually craft textures for 3D assets. Classical methods require densely
sampled views and accurately aligned geometry, while learning-based methods are
confined to category-specific shapes within the dataset. In contrast,
TextureDreamer can transfer highly detailed, intricate textures from real-world
environments to arbitrary objects with only a few casually captured images,
potentially significantly democratizing texture creation. Our core idea,
personalized geometry-aware score distillation (PGSD), draws inspiration from
recent advancements in diffusion models, including personalized modeling for
texture information extraction, variational score distillation for detailed
appearance synthesis, and explicit geometry guidance with ControlNet. Our
integration and several essential modifications substantially improve the
texture quality. Experiments on real images spanning different categories show
that TextureDreamer can successfully transfer highly realistic, semantically
meaningful texture to arbitrary objects, surpassing the visual quality of
previous state-of-the-art.
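The abstract describes PGSD as combining personalized modeling, variational score distillation (VSD), and ControlNet geometry guidance. As a loose illustration of the VSD-style update at the heart of such methods, the toy NumPy sketch below optimizes a "texture" by following the difference between two score predictions; every function name, formula, and conditioning signal here is a simplifying assumption for illustration, not the paper's actual implementation:

```python
import numpy as np

def personalized_score(x_noisy, geometry_cond):
    """Stand-in for a personalized, geometry-conditioned denoiser.

    Predicts noise pointing away from a geometry-modulated 'target
    texture' value of 1.0 per texel (a toy assumption).
    """
    target = 1.0 * geometry_cond
    return x_noisy - target

def render_distribution_score(x_noisy, current_texture):
    """Stand-in for the VSD network modeling the current render
    distribution (assumed here to track the texture perfectly)."""
    return x_noisy - current_texture

def pgsd_step(texture, geometry_cond, rng, lr=0.1):
    t = rng.uniform(0.1, 0.9)                    # random diffusion time
    noise = rng.standard_normal(texture.shape)
    x_noisy = texture + np.sqrt(t) * noise       # simplified forward process
    # VSD-style gradient: difference between the two score predictions.
    grad = (personalized_score(x_noisy, geometry_cond)
            - render_distribution_score(x_noisy, texture))
    return texture - lr * grad

texture = np.zeros(16)   # toy "texture" parameters
geometry = np.ones(16)   # toy per-texel geometry conditioning
rng = np.random.default_rng(0)
for _ in range(200):
    texture = pgsd_step(texture, geometry, rng)
print(round(float(texture.mean()), 3))  # → 1.0 (converged to the target)
```

Note how the injected noise cancels in the gradient here because the second network tracks the current texture exactly; in a real VSD setup that network is trained alongside the texture, which is what keeps the distillation gradient low-variance.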
Related papers
- Textured Mesh Saliency: Bridging Geometry and Texture for Human Perception in 3D Graphics [50.23625950905638]
We present a new dataset for textured mesh saliency, created through an innovative eye-tracking experiment in a six degrees of freedom (6-DOF) VR environment.
Our proposed model predicts saliency maps for textured mesh surfaces by treating each triangular face as an individual unit and assigning a saliency density value to reflect the importance of each local surface region.
arXiv Detail & Related papers (2024-12-11T08:27:33Z)
- Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation [39.702921832009466]
We introduce a new method that incorporates touch as an additional modality to improve the geometric details of generated 3D assets.
We design a lightweight 3D texture field to synthesize visual and tactile textures, guided by 2D diffusion model priors.
We are the first to leverage high-resolution tactile sensing to enhance geometric details for 3D generation tasks.
arXiv Detail & Related papers (2024-12-09T18:59:45Z)
- FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images [56.63824638417697]
FabricDiffusion is a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes.
We show that FabricDiffusion can transfer various features from a single clothing image including texture patterns, material properties, and detailed prints and logos.
arXiv Detail & Related papers (2024-10-02T17:57:12Z)
- Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects [54.80813150893719]
We introduce Meta 3D TextureGen: a new feedforward method comprised of two sequential networks aimed at generating high-quality textures in less than 20 seconds.
Our method achieves state-of-the-art results in quality and speed by conditioning a text-to-image model on 3D semantics in 2D space and fusing them into a complete and high-resolution UV texture map.
In addition, we introduce a texture enhancement network that is capable of up-scaling any texture by an arbitrary ratio, producing 4k pixel resolution textures.
arXiv Detail & Related papers (2024-07-02T17:04:34Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoiser on a set of 2D renders of the object, and aggregate the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- TwinTex: Geometry-aware Texture Generation for Abstracted 3D Architectural Models [13.248386665044087]
We present TwinTex, the first automatic texture mapping framework to generate a photo-realistic texture for a piece-wise planar proxy.
Our approach surpasses state-of-the-art texture mapping methods in terms of high-fidelity quality and reaches a human-expert production level with much less effort.
arXiv Detail & Related papers (2023-09-20T12:33:53Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- Projective Urban Texturing [8.349665441428925]
We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
arXiv Detail & Related papers (2022-01-25T14:56:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.