TEXTure: Text-Guided Texturing of 3D Shapes
- URL: http://arxiv.org/abs/2302.01721v1
- Date: Fri, 3 Feb 2023 13:18:45 GMT
- Title: TEXTure: Text-Guided Texturing of 3D Shapes
- Authors: Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, Daniel
Cohen-Or
- Abstract summary: We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless textures across views, and transfer them to new geometries without requiring explicit surface-to-surface mapping.
- Score: 71.13116133846084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present TEXTure, a novel method for text-guided generation,
editing, and transfer of textures for 3D shapes. Leveraging a pretrained
depth-to-image diffusion model, TEXTure applies an iterative scheme that paints
a 3D model from different viewpoints. Yet, while depth-to-image models can
create plausible textures from a single viewpoint, the stochastic nature of the
generation process can cause many inconsistencies when texturing an entire 3D
object. To tackle these problems, we dynamically define a trimap partitioning
of the rendered image into three progression states, and present a novel
elaborated diffusion sampling process that uses this trimap representation to
generate seamless textures from different views. We then show that one can
transfer the generated texture maps to new 3D geometries without requiring
explicit surface-to-surface mapping, as well as extract semantic textures from
a set of images without requiring any explicit reconstruction. Finally, we show
that TEXTure can be used to not only generate new textures but also edit and
refine existing textures using either a text prompt or user-provided scribbles.
We demonstrate that our TEXTuring method excels at generating, transferring,
and editing textures through extensive evaluation, and further close the gap
between 2D image generation and 3D texturing.
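As a rough illustration of the iterative scheme described above, the following sketch paints a texture view by view: each render is partitioned into a trimap of "keep", "refine", and "generate" regions that controls how the depth-to-image sampler treats each pixel before the result is projected back onto the texture. This is a minimal toy sketch, not the authors' implementation; all function names (render_depth_and_coverage, trimap_from_coverage, depth2img_sample, project_to_texture) and thresholds are hypothetical placeholders.
```python
import numpy as np

H = W = 64          # toy render resolution
TEX = 128           # toy texture resolution
rng = np.random.default_rng(0)

def render_depth_and_coverage(texture, view):
    """Hypothetical renderer stub: returns a depth map and a map of how well
    each pixel is already covered by previously painted texture."""
    depth = rng.random((H, W))
    coverage = rng.random((H, W))        # 0 = never painted, 1 = well painted
    return depth, coverage

def trimap_from_coverage(coverage, lo=0.2, hi=0.8):
    """Partition the view into the three progression states described in the
    abstract: 'generate' (new area), 'refine' (poorly seen), 'keep' (already good)."""
    trimap = np.full(coverage.shape, "refine", dtype=object)
    trimap[coverage < lo] = "generate"
    trimap[coverage > hi] = "keep"
    return trimap

def depth2img_sample(prompt, depth, trimap):
    """Stand-in for a depth-conditioned diffusion sampler that respects the
    trimap: 'keep' pixels are frozen, 'generate'/'refine' pixels are (re)painted."""
    painted = rng.random((H, W, 3))
    painted[trimap == "keep"] = 0.5      # placeholder: treat kept pixels as fixed
    return painted

def project_to_texture(texture, image, view):
    """Hypothetical inverse-rendering step that splats the view back onto the UV map."""
    return 0.9 * texture + 0.1 * image.mean() * np.ones_like(texture)

texture = np.zeros((TEX, TEX, 3))
for view in np.linspace(0.0, 360.0, 8, endpoint=False):   # orbit the object
    depth, coverage = render_depth_and_coverage(texture, view)
    trimap = trimap_from_coverage(coverage)
    image = depth2img_sample("a wooden owl", depth, trimap)
    texture = project_to_texture(texture, image, view)
```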
Related papers
- Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects [54.80813150893719]
We introduce Meta 3D TextureGen: a new feedforward method comprised of two sequential networks aimed at generating high-quality textures in less than 20 seconds.
Our method achieves state-of-the-art results in quality and speed by conditioning a text-to-image model on 3D semantics in 2D space and fusing them into a complete and high-resolution UV texture map.
In addition, we introduce a texture enhancement network that is capable of up-scaling any texture by an arbitrary ratio, producing 4k pixel resolution textures.
arXiv Detail & Related papers (2024-07-02T17:04:34Z)
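The Meta 3D TextureGen summary above describes two sequential networks (per-view generation conditioned on 3D semantics rendered in 2D, then fusion into a UV map) plus an arbitrary-ratio texture enhancer. The sketch below only mirrors that three-step flow; the stage functions are hypothetical stand-ins, not the released model.
```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_text_to_views(prompt, geometry_maps):
    """Stand-in for the first network: generate per-view images conditioned on
    2D renderings of 3D semantics (e.g. normal / position maps)."""
    return [rng.random((256, 256, 3)) for _ in geometry_maps]

def stage2_fuse_to_uv(view_images):
    """Stand-in for the second network: fuse the views into one UV texture map."""
    return np.mean(np.stack(view_images), axis=0)          # toy fusion

def enhance(texture, ratio=4):
    """Stand-in for the enhancement network: upscale by an arbitrary ratio
    (nearest-neighbour here; a learned up-scaler producing up to 4k in the paper)."""
    return texture.repeat(ratio, axis=0).repeat(ratio, axis=1)

geometry_maps = [rng.random((256, 256, 3)) for _ in range(4)]  # toy conditioning
views = stage1_text_to_views("rusty robot", geometry_maps)
uv_texture = stage2_fuse_to_uv(views)
uv_hi = enhance(uv_texture, ratio=4)                           # 256 -> 1024 px here
print(uv_hi.shape)
```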
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
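Infinite Texture's blurb above mentions a score aggregation strategy that lets a texture-specific diffusion model paint canvases far larger than its training crop. A common way to realize this is to denoise overlapping tiles and average their predictions in the overlaps; the toy loop below sketches that idea under the assumption of such tiled aggregation, with tile_score as a placeholder for the fine-tuned model.
```python
import numpy as np

rng = np.random.default_rng(0)
TILE, STRIDE, SIZE = 64, 32, 256     # toy sizes; the paper targets far larger canvases

def tile_score(tile, t):
    """Stand-in for the fine-tuned diffusion model's noise prediction on one tile."""
    return 0.1 * tile + 0.01 * t      # placeholder arithmetic, not a real denoiser

def aggregated_score(canvas, t):
    """Average per-tile predictions wherever tiles overlap, so the full canvas
    is denoised consistently even though the model only sees TILE-sized crops."""
    score = np.zeros_like(canvas)
    weight = np.zeros_like(canvas)
    for y in range(0, SIZE - TILE + 1, STRIDE):
        for x in range(0, SIZE - TILE + 1, STRIDE):
            score[y:y + TILE, x:x + TILE] += tile_score(canvas[y:y + TILE, x:x + TILE], t)
            weight[y:y + TILE, x:x + TILE] += 1.0
    return score / weight

canvas = rng.standard_normal((SIZE, SIZE))       # start from noise
for t in range(10, 0, -1):                       # toy reverse-diffusion loop
    canvas = canvas - 0.1 * aggregated_score(canvas, t)
```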
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for given 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoiser on a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
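The TexFusion summary above amounts to running the latent diffusion denoiser on several renders of the object at each step and aggregating the per-view predictions on one shared latent texture map. The loop below is a toy sketch of that interleaving; the visibility masks and denoise_view are placeholders rather than the TexFusion renderer or model.
```python
import numpy as np

rng = np.random.default_rng(0)
TEX, N_VIEWS = 96, 6                  # toy latent-texture size and view count

# Hypothetical visibility: which texels each view sees (random masks as a stand-in
# for a real rasterizer's UV-to-screen correspondence).
visibility = [rng.random((TEX, TEX)) > 0.5 for _ in range(N_VIEWS)]

def denoise_view(latent_view, t):
    """Stand-in for the latent diffusion denoiser applied to one rendered view."""
    return latent_view * (1.0 - 0.05 * t)

latent_texture = rng.standard_normal((TEX, TEX, 4))   # shared latent texture map
for t in range(10, 0, -1):                            # toy reverse-diffusion schedule
    acc = np.zeros_like(latent_texture)
    hits = np.zeros((TEX, TEX, 1))
    for mask in visibility:
        view_latent = latent_texture * mask[..., None]      # "render" this view
        pred = denoise_view(view_latent, t)
        acc += pred * mask[..., None]                       # back-project prediction
        hits += mask[..., None]
    latent_texture = acc / np.maximum(hits, 1.0)            # aggregate on the texture
```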
- Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models [21.622420436349245]
We present Text2Room, a method for generating room-scale textured 3D meshes from a given text prompt as input.
We leverage pre-trained 2D text-to-image models to synthesize a sequence of images from different poses.
In order to lift these outputs into a consistent 3D scene representation, we combine monocular depth estimation with a text-conditioned inpainting model.
arXiv Detail & Related papers (2023-03-21T16:21:02Z)
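Text2Room, as summarized above, lifts a sequence of text-to-image outputs into one scene by alternating rendering, text-conditioned inpainting of unobserved regions, monocular depth estimation, and back-projection. The sketch below reproduces only that loop structure; every helper in it (render_from_mesh, text_inpaint, estimate_depth, backproject) is a hypothetical stand-in.
```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64                                    # toy image size

def render_from_mesh(points, pose):
    """Hypothetical renderer: returns an image and a mask of unobserved pixels."""
    image = rng.random((H, W, 3))
    holes = rng.random((H, W)) > 0.6          # pixels the current scene cannot explain
    return image, holes

def text_inpaint(image, holes, prompt):
    """Stand-in for a text-conditioned inpainting model filling the holes."""
    filled = image.copy()
    filled[holes] = rng.random((holes.sum(), 3))
    return filled

def estimate_depth(image):
    """Stand-in for monocular depth estimation."""
    return 1.0 + rng.random((H, W))

def backproject(image, depth, pose):
    """Hypothetical lifting step: turn newly painted pixels into 3D points."""
    n = H * W
    return np.concatenate([rng.random((n, 3)) * depth.reshape(-1, 1),
                           image.reshape(-1, 3)], axis=1)       # xyz + rgb

scene_points = np.empty((0, 6))
for pose in range(8):                                           # walk through poses
    partial, holes = render_from_mesh(scene_points, pose)
    image = text_inpaint(partial, holes, "a cozy living room")
    depth = estimate_depth(image)
    scene_points = np.vstack([scene_points, backproject(image, depth, pose)])
```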
- Projective Urban Texturing [8.349665441428925]
We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
arXiv Detail & Related papers (2022-01-25T14:56:52Z)
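Projective Urban Texturing is described above as combining adversarial and contrastive objectives for unpaired image-to-texture translation. The snippet below sketches a generic version of that loss recipe (a non-saturating GAN term plus an InfoNCE-style patch term); it is an assumption-level illustration, not PUT's actual architecture, weighting, or training code.
```python
import numpy as np

rng = np.random.default_rng(0)

def infonce(query, positive, negatives, tau=0.07):
    """InfoNCE-style patch contrast: pull a translated patch toward its
    corresponding input patch, push it away from other patches."""
    q = query / np.linalg.norm(query)
    pos = np.dot(q, positive / np.linalg.norm(positive)) / tau
    negs = [np.dot(q, n / np.linalg.norm(n)) / tau for n in negatives]
    logits = np.array([pos] + negs)
    return -pos + np.log(np.exp(logits).sum())     # -log softmax of the positive

def adversarial_loss(d_fake):
    """Non-saturating generator loss given a discriminator score in (0, 1)."""
    return -np.log(d_fake + 1e-8)

# Toy features standing in for encoder outputs of generated / input patches.
q = rng.standard_normal(128)
pos = q + 0.1 * rng.standard_normal(128)           # matching input patch
negs = [rng.standard_normal(128) for _ in range(16)]
d_fake = 0.3                                       # hypothetical discriminator output

total = adversarial_loss(d_fake) + 1.0 * infonce(q, pos, negs)
print(float(total))
```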
- NeuTex: Neural Texture Mapping for Volumetric Neural Rendering [48.83181790635772]
We present an approach that explicitly disentangles geometry--represented as a continuous 3D volume--from appearance--represented as a continuous 2D texture map.
We demonstrate that this representation can be reconstructed using only multi-view image supervision and generates high-quality rendering results.
arXiv Detail & Related papers (2021-03-01T05:34:51Z)
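NeuTex's summary above hinges on keeping geometry in a continuous 3D volume while appearance lives in a continuous 2D texture map, linked by a surface-to-UV mapping. The toy ray march below uses analytic stand-ins (a soft sphere, spherical UVs, a checkerboard) in place of the paper's learned networks, to show how color is always fetched through the 2D texture and can therefore be edited independently of geometry.
```python
import numpy as np

def density(xyz):
    """Stand-in for the geometry network: density of a soft unit sphere."""
    return np.exp(-4.0 * (np.linalg.norm(xyz, axis=-1) - 1.0) ** 2)

def surface_to_uv(xyz):
    """Stand-in for the learned 3D->2D mapping: spherical coordinates as UV."""
    x, y, z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    r = np.linalg.norm(xyz, axis=-1) + 1e-8
    u = (np.arctan2(y, x) / (2 * np.pi)) % 1.0
    v = np.arccos(np.clip(z / r, -1.0, 1.0)) / np.pi
    return np.stack([u, v], axis=-1)

def texture(uv):
    """Stand-in for the 2D texture network: a simple checkerboard over UV."""
    checker = ((uv[..., 0] * 8).astype(int) + (uv[..., 1] * 8).astype(int)) % 2
    return np.stack([checker, 1 - checker, np.zeros_like(checker)], axis=-1).astype(float)

# Volume-render one toy ray: colors come only from the 2D texture map, so the
# appearance can be swapped or edited without touching the geometry volume.
origin, direction = np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])
ts = np.linspace(1.0, 5.0, 64)
pts = origin + ts[:, None] * direction
sigma = density(pts)
rgb = texture(surface_to_uv(pts))
weights = sigma * np.exp(-np.cumsum(sigma, axis=0) * (ts[1] - ts[0]))
color = (weights[:, None] * rgb).sum(axis=0) / (weights.sum() + 1e-8)
print(color)
```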