Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering
- URL: http://arxiv.org/abs/2312.11360v2
- Date: Tue, 7 May 2024 13:15:47 GMT
- Title: Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering
- Authors: Kim Youwang, Tae-Hyun Oh, Gerard Pons-Moll
- Abstract summary: Paint-it is a text-driven, high-fidelity texture map synthesis method for 3D meshes.
Paint-it synthesizes texture maps from a text description by synthesis-through-optimization, exploiting Score-Distillation Sampling (SDS).
We show that DC-PBR inherently schedules the optimization curriculum according to texture frequency and naturally filters out the noisy signals from SDS.
- Score: 47.78392889256976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Paint-it, a text-driven high-fidelity texture map synthesis method for 3D meshes via neural re-parameterized texture optimization. Paint-it synthesizes texture maps from a text description by synthesis-through-optimization, exploiting the Score-Distillation Sampling (SDS). We observe that directly applying SDS yields undesirable texture quality due to its noisy gradients. We reveal the importance of texture parameterization when using SDS. Specifically, we propose Deep Convolutional Physically-Based Rendering (DC-PBR) parameterization, which re-parameterizes the physically-based rendering (PBR) texture maps with randomly initialized convolution-based neural kernels, instead of a standard pixel-based parameterization. We show that DC-PBR inherently schedules the optimization curriculum according to texture frequency and naturally filters out the noisy signals from SDS. In experiments, Paint-it obtains remarkable quality PBR texture maps within 15 min., given only a text description. We demonstrate the generalizability and practicality of Paint-it by synthesizing high-quality texture maps for large-scale mesh datasets and showing test-time applications such as relighting and material control using a popular graphics engine. Project page: https://kim-youwang.github.io/paint-it
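The core idea can be illustrated with a short, hedged sketch written against the abstract above: instead of optimizing PBR texture pixels directly, a randomly initialized convolutional network (a stand-in for the paper's DC-PBR parameterization; the architecture below is an assumption, not the authors' implementation) maps a fixed latent to albedo/roughness/metallic maps, and only the network weights are updated by an SDS-style gradient computed on renderings of the textured mesh. The helpers `render_pbr` and `sds_gradient`, the layer sizes, and the step count are illustrative placeholders.

```python
# Minimal sketch of neural re-parameterized PBR texture optimization
# (NOT the official Paint-it code). A randomly initialized CNN maps a fixed
# latent to PBR texture maps; only the CNN weights are optimized, driven by
# an SDS-style gradient on differentiable renderings of the textured mesh.
import torch
import torch.nn as nn

class DCTextureNet(nn.Module):
    """Convolutional re-parameterization of PBR texture maps (illustrative)."""
    def __init__(self, latent_ch=8, tex_res=512):
        super().__init__()
        # Fixed random input; all expressiveness lives in the conv weights.
        self.register_buffer("latent", torch.randn(1, latent_ch, tex_res, tex_res))
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 5, 3, padding=1),  # 3 albedo + 1 roughness + 1 metallic
        )

    def forward(self):
        out = torch.sigmoid(self.net(self.latent))
        albedo, roughness, metallic = out[:, :3], out[:, 3:4], out[:, 4:5]
        return albedo, roughness, metallic

def optimize_texture(mesh, prompt, render_pbr, sds_gradient, steps=3000, lr=1e-3):
    """Synthesis-through-optimization loop (schematic).

    `render_pbr` (a differentiable PBR renderer) and `sds_gradient` (a
    score-distillation gradient from a text-to-image diffusion model) are
    caller-supplied placeholders, not components defined in the paper text.
    """
    model = DCTextureNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        albedo, roughness, metallic = model()
        image = render_pbr(mesh, albedo, roughness, metallic)  # random view/light
        grad = sds_gradient(image, prompt)                     # noisy SDS signal
        opt.zero_grad()
        image.backward(gradient=grad)  # push the SDS gradient through the CNN
        opt.step()
    return model()
```

Because the texture maps are the output of a convolutional network rather than free pixels, low-frequency structure tends to be fitted first and high-frequency noise in the SDS gradient is suppressed, which matches the curriculum and filtering behavior the abstract attributes to DC-PBR.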
Related papers
- TEXGen: a Generative Diffusion Model for Mesh Textures [63.43159148394021]
We focus on the fundamental problem of learning in the UV texture space itself.
We propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds (a rough sketch of this idea appears after this list).
We train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images.
arXiv Detail & Related papers (2024-11-22T05:22:11Z)
- TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling [37.67373829836975]
We present TexGen, a novel multi-view sampling and resampling framework for texture generation.
Our proposed method produces significantly better texture quality for diverse 3D objects with a high degree of view consistency.
Our proposed texture generation technique can also be applied to texture editing while preserving the original identity.
arXiv Detail & Related papers (2024-08-02T14:24:40Z)
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- GenesisTex: Adapting Image Denoising Diffusion to Texture Space [15.907134430301133]
GenesisTex is a novel method for synthesizing textures for 3D geometries from text descriptions.
We maintain a latent texture map for each viewpoint, which is updated with predicted noise on the rendering of the corresponding viewpoint.
Global consistency is achieved through the integration of style consistency mechanisms within the noise prediction network.
arXiv Detail & Related papers (2024-03-26T15:15:15Z)
- FlashTex: Fast Relightable Mesh Texturing with LightControlNet [105.4683880648901]
We introduce LightControlNet, a new text-to-image model based on the ControlNet architecture.
We apply our approach to disentangle material/reflectance in the resulting texture so that the mesh can be properly relit and rendered in any lighting environment.
arXiv Detail & Related papers (2024-02-20T18:59:00Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors [49.03627933561738]
SceneTex is a novel method for generating high-quality and style-consistent textures for indoor scenes using depth-to-image diffusion priors.
SceneTex enables diverse and accurate texture synthesis for 3D-FRONT scenes, demonstrating significant improvements in visual quality and prompt fidelity over prior texture generation methods.
arXiv Detail & Related papers (2023-11-28T22:49:57Z)
- On Demand Solid Texture Synthesis Using Deep 3D Networks [3.1542695050861544]
This paper describes a novel approach for on-demand texture synthesis based on a deep learning framework.
A generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes.
The synthesized volumes achieve visual quality at least on par with state-of-the-art patch-based approaches.
arXiv Detail & Related papers (2020-01-13T20:59:14Z)
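As a rough illustration of the architecture described in the TEXGen entry above (a sketch of the stated idea, not the published 700-million-parameter model), the block below interleaves a convolution in UV texture space with an attention layer over features gathered at mesh surface points; the nearest-texel gather/scatter, the class name `UVPointBlock`, and all dimensions are assumptions made for this sketch.

```python
# Illustrative hybrid block: a UV-space convolution interleaved with attention
# over features gathered at mesh surface points (schematic only).
import torch
import torch.nn as nn

class UVPointBlock(nn.Module):
    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.GroupNorm(8, ch), nn.SiLU(),
        )
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, uv_feat, uv_coords):
        # uv_feat:   (B, C, H, W) feature map living in UV texture space
        # uv_coords: (B, N, 2) per-point UV coordinates in [0, 1]
        B, C, H, W = uv_feat.shape
        uv_feat = self.conv(uv_feat)  # local reasoning in UV space

        # Nearest-texel indices of each surface point in the UV map.
        u = (uv_coords[..., 0] * (W - 1)).long().clamp(0, W - 1)  # (B, N)
        v = (uv_coords[..., 1] * (H - 1)).long().clamp(0, H - 1)  # (B, N)
        b = torch.arange(B, device=uv_feat.device).unsqueeze(1).expand_as(u)

        # Gather point features and refine them with attention over the point
        # cloud (global reasoning across the surface, unconstrained by UV seams).
        pts = uv_feat[b, :, v, u]              # (B, N, C)
        refined, _ = self.attn(pts, pts, pts)  # (B, N, C)

        # Write refined features back into the UV map; points sharing a texel
        # simply overwrite each other in this nearest-texel sketch.
        out = uv_feat.clone()
        out[b, :, v, u] = out[b, :, v, u] + refined
        return out
```

Stacking such blocks lets the convolution handle local texel neighborhoods while the point attention propagates information across UV chart boundaries.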