CasTex: Cascaded Text-to-Texture Synthesis via Explicit Texture Maps and Physically-Based Shading
- URL: http://arxiv.org/abs/2504.06856v1
- Date: Wed, 09 Apr 2025 13:08:30 GMT
- Title: CasTex: Cascaded Text-to-Texture Synthesis via Explicit Texture Maps and Physically-Based Shading
- Authors: Mishan Aliev, Dmitry Baranchuk, Kirill Struminsky
- Abstract summary: We aim to achieve realistic model appearances under varying lighting conditions. In our setup, score distillation sampling yields high-quality textures out of the box.
- Score: 7.851991808404223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work investigates text-to-texture synthesis using diffusion models to generate physically-based texture maps. We aim to achieve realistic model appearances under varying lighting conditions. A prominent solution for the task is score distillation sampling. It allows recovering a complex texture via gradient guidance, given a differentiable rasterization and shading pipeline. However, in practice, this solution in conjunction with widespread latent diffusion models produces severe visual artifacts and requires additional regularization such as implicit texture parameterization. As a more direct alternative, we propose an approach using cascaded diffusion models for texture synthesis (CasTex). In our setup, score distillation sampling yields high-quality textures out of the box. In particular, we were able to omit implicit texture parameterization in favor of an explicit parameterization to improve the procedure. In experiments, we show that our approach significantly outperforms state-of-the-art optimization-based solutions on public texture synthesis benchmarks.
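To make the optimization concrete, below is a minimal sketch of score distillation sampling driving explicit physically-based texture maps through a differentiable renderer. The `render_pbr` and `denoiser` functions are reduced to trivial stubs, and all names and hyperparameters are illustrative assumptions rather than the paper's actual pipeline.

```python
import torch

# Stand-ins for the real pipeline (illustrative stubs, not the paper's API):
# a differentiable rasterize-and-shade step and a text-conditioned pixel-space
# diffusion denoiser from the cascaded model.
def render_pbr(albedo, roughness, metallic):
    # A real renderer would rasterize the mesh under random cameras and lights;
    # this stub only blends the maps so gradients reach all three textures.
    return albedo * (1.0 - roughness) * (0.5 + 0.5 * metallic)

def denoiser(x, t, prompt):
    return torch.randn_like(x)  # stub for the diffusion model's noise prediction

# Explicit PBR texture maps, optimized directly (no implicit MLP/CNN prior).
albedo = torch.rand(1, 3, 512, 512, requires_grad=True)
roughness = torch.rand(1, 1, 512, 512, requires_grad=True)
metallic = torch.rand(1, 1, 512, 512, requires_grad=True)
opt = torch.optim.Adam([albedo, roughness, metallic], lr=1e-2)
alphas = torch.linspace(0.999, 0.001, 1000)  # toy noise schedule

for step in range(1000):
    img = render_pbr(albedo, roughness, metallic)      # (1, 3, H, W) render
    t = torch.randint(20, 980, (1,))
    noise = torch.randn_like(img)
    a_t = alphas[t].view(1, 1, 1, 1)
    noisy = a_t.sqrt() * img + (1.0 - a_t).sqrt() * noise
    with torch.no_grad():
        eps_pred = denoiser(noisy, t, prompt="a rusty metal barrel")
    # SDS update: the (eps_pred - noise) residual is injected as the gradient
    # of the rendered image, bypassing the denoiser's own Jacobian.
    img.backward(gradient=eps_pred - noise)
    opt.step()
    opt.zero_grad()
```

With a latent diffusion model in place of the pixel-space denoiser, the abstract notes this vanilla loop needs extra regularization; the paper's claim is that cascaded pixel-space models make it work as-is.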
Related papers
- NeRF-Texture: Synthesizing Neural Radiance Field Textures [77.24205024987414]
We propose a novel texture synthesis method with Neural Radiance Fields (NeRF) to capture and synthesize textures from given multi-view images.
In the proposed NeRF texture representation, a scene with fine geometric details is disentangled into the meso-structure textures and the underlying base shape.
We can synthesize NeRF-based textures through patch matching of latent features.
arXiv Detail & Related papers (2024-12-13T09:41:48Z)
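As a rough illustration of patch matching over latent features (a toy sketch under simplifying assumptions, not NeRF-Texture's implementation), one can fill a larger latent grid with the nearest exemplar patches and overlap-average them:

```python
import torch
import torch.nn.functional as F

def synthesize_by_patch_matching(exemplar, out_hw, patch=7, iters=4):
    """Toy latent-patch synthesis: repeatedly replace every output patch with
    its nearest exemplar patch (L2 on latent features), overlap-averaged."""
    C, H, W = exemplar.shape
    oh, ow = out_hw
    out = torch.randn(C, oh, ow)                      # random latent init
    ex = F.unfold(exemplar[None], patch)[0].T         # (n_ex_patches, C*p*p)
    for _ in range(iters):
        op = F.unfold(out[None], patch)[0].T          # (n_out_patches, C*p*p)
        idx = torch.cdist(op, ex).argmin(dim=1)       # nearest exemplar patch
        matched = ex[idx].T[None]                     # (1, C*p*p, n_out_patches)
        ones = torch.ones_like(matched)
        # Fold back into the grid, averaging overlapping patch contributions.
        out = (F.fold(matched, (oh, ow), patch) /
               F.fold(ones, (oh, ow), patch))[0]
    return out

# Usage on random stand-in "latent features":
tex = synthesize_by_patch_matching(torch.randn(8, 32, 32), (48, 48))
```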
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
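One plausible form of such score aggregation (a sketch in the spirit of window-based averaging, with an assumed fixed-resolution `denoiser`; not necessarily the paper's exact strategy) averages per-window noise predictions over a large canvas:

```python
import torch

def aggregate_scores(x, t, denoiser, win=64, stride=32):
    """Average overlapping per-window noise predictions from a fixed-size
    diffusion model over an arbitrarily large canvas x of shape (N, C, H, W)."""
    eps_sum = torch.zeros_like(x)
    count = torch.zeros_like(x)
    _, _, H, W = x.shape
    for top in range(0, H - win + 1, stride):
        for left in range(0, W - win + 1, stride):
            tile = x[:, :, top:top + win, left:left + win]
            eps_sum[:, :, top:top + win, left:left + win] += denoiser(tile, t)
            count[:, :, top:top + win, left:left + win] += 1
    # clamp guards border pixels left uncovered when (H - win) % stride != 0
    return eps_sum / count.clamp(min=1)

# Usage with a stub denoiser on a 256x512 canvas:
eps = aggregate_scores(torch.randn(1, 4, 256, 512), 500,
                       lambda tile, t: torch.randn_like(tile))
```

Overlapping strides keep neighboring windows consistent, which is what lets a fixed-size model produce arbitrarily large textures.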
- GenesisTex: Adapting Image Denoising Diffusion to Texture Space [15.907134430301133]
GenesisTex is a novel method for synthesizing textures for 3D geometries from text descriptions.
We maintain a latent texture map for each viewpoint, which is updated with predicted noise on the rendering of the corresponding viewpoint.
Global consistency is achieved through the integration of style consistency mechanisms within the noise prediction network.
arXiv Detail & Related papers (2024-03-26T15:15:15Z)
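A toy version of the per-viewpoint update (with a hypothetical per-view `uv` index mapping screen pixels to texels and a stub `denoiser`; not GenesisTex's actual code) might look like this:

```python
import torch

def denoise_views(latent_tex, uv_maps, denoiser, t, step=0.1):
    # latent_tex: (C, n_texels) latent texture map, flattened over texels.
    # uv_maps[v]: LongTensor of texel indices seen by viewpoint v's pixels.
    for uv in uv_maps:
        screen = latent_tex[:, uv]        # "render": gather this view's texels
        eps = denoiser(screen, t)         # predicted noise on the rendering
        # Write the update back into texture space (a real implementation
        # would average texels touched by several pixels or views).
        latent_tex.index_copy_(1, uv, screen - step * eps)
    return latent_tex

# Usage with stubs: 6 views, each seeing 1024 of 4096 texels.
tex = torch.randn(4, 4096)
views = [torch.randperm(4096)[:1024] for _ in range(6)]
denoise_views(tex, views, lambda x, t: torch.randn_like(x), t=500)
```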
- Generating Non-Stationary Textures using Self-Rectification [70.91414475376698]
This paper addresses the challenge of example-based non-stationary texture synthesis.
We introduce a novel two-step approach wherein users first modify a reference texture using standard image editing tools.
Our proposed method, termed "self-rectification", automatically refines this target into a coherent, seamless texture.
arXiv Detail & Related papers (2024-01-05T15:07:05Z)
- Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering [47.78392889256976]
Paint-it is a text-driven high-fidelity texture map synthesis method for 3D rendering.
Paint-it synthesizes texture maps from a text description by synthesis-through-optimization, exploiting Score-Distillation Sampling (SDS).
We show that DC-PBR inherently schedules the optimization curriculum according to texture frequency and naturally filters out the noisy signals from SDS.
arXiv Detail & Related papers (2023-12-18T17:17:08Z)
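DC-PBR denotes a deep convolutional parameterization of the PBR maps; a minimal deep-image-prior-style sketch (an assumed architecture, not Paint-it's actual network) is shown below, contrasting with the explicit texel optimization sketched after the abstract above:

```python
import torch
import torch.nn as nn

class DCTextureMaps(nn.Module):
    """Implicit texture parameterization: PBR maps are the output of a CNN
    over a fixed random input, so SDS gradients update the CNN's weights
    instead of raw texels, biasing optimization toward low frequencies."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(8, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 5, 3, padding=1), nn.Sigmoid(),  # 3+1+1 channels
        )
        self.register_buffer("z", torch.randn(1, 8, 512, 512))  # fixed input

    def forward(self):
        maps = self.net(self.z)
        return maps[:, :3], maps[:, 3:4], maps[:, 4:5]  # albedo, rough, metal
```

CasTex's point of contrast is that, with cascaded pixel-space diffusion, this implicit prior can be dropped in favor of directly optimized maps.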
- SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors [49.03627933561738]
SceneTex is a novel method for generating high-quality and style-consistent textures for indoor scenes using depth-to-image diffusion priors.
SceneTex enables various and accurate texture synthesis for 3D-FRONT scenes, demonstrating significant improvements in visual quality and prompt fidelity over the prior texture generation methods.
arXiv Detail & Related papers (2023-11-28T22:49:57Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoiser on a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
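The aggregation step admits a compact sketch (toy code with a stub `denoiser` and hypothetical per-view texel indices; not TexFusion's implementation): accumulate each view's noise prediction into texture space and normalize by view coverage.

```python
import torch

def aggregate_denoising(latent_tex, uv_maps, denoiser, t):
    """Accumulate per-view noise predictions into one shared latent texture
    map (C, n_texels), normalizing by how many views observed each texel."""
    eps_sum = torch.zeros_like(latent_tex)
    count = torch.zeros(latent_tex.shape[1])
    for uv in uv_maps:                  # uv: flat texel index per screen pixel
        eps = denoiser(latent_tex[:, uv], t)
        eps_sum.index_add_(1, uv, eps)  # scatter-add into texture space
        count.index_add_(0, uv, torch.ones(uv.numel()))
    return eps_sum / count.clamp(min=1)

# Usage with stubs: 6 views over a 4096-texel latent texture.
tex = torch.randn(4, 4096)
views = [torch.randint(0, 4096, (1024,)) for _ in range(6)]
eps = aggregate_denoising(tex, views, lambda x, t: torch.randn_like(x), t=500)
```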
- Diffusion-based Holistic Texture Rectification and Synthesis [26.144666226217062]
Traditional texture synthesis approaches focus on generating textures from pristine samples.
We propose a framework that synthesizes holistic textures from degraded samples in natural images.
arXiv Detail & Related papers (2023-09-26T08:44:46Z)
- SeamlessGAN: Self-Supervised Synthesis of Tileable Texture Maps [3.504542161036043]
We present SeamlessGAN, a method capable of automatically generating tileable texture maps from a single input exemplar.
In contrast to most existing methods, which focus solely on the synthesis problem, our work tackles synthesis and tileability simultaneously.
arXiv Detail & Related papers (2022-01-13T18:24:26Z)
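A generic way to probe tileability (an illustration only, not SeamlessGAN's actual loss) is to roll the texture so its wrap-around seams land mid-image and measure the discontinuity there:

```python
import torch

def seam_energy(tex):
    """Tileability probe: after rolling by half the image size, the original
    borders meet in the middle; the color jump across those seams is near
    zero for a tileable texture. tex has shape (C, H, W)."""
    C, H, W = tex.shape
    rolled = torch.roll(tex, shifts=(H // 2, W // 2), dims=(1, 2))
    horiz = (rolled[:, H // 2 - 1, :] - rolled[:, H // 2, :]).abs().mean()
    vert = (rolled[:, :, W // 2 - 1] - rolled[:, :, W // 2]).abs().mean()
    return horiz + vert

# Usage: a random texture scores high; a constant one scores zero.
print(seam_energy(torch.rand(3, 128, 128)))
```

Roughly speaking, SeamlessGAN instead trains adversarially so that crops spanning such seams become indistinguishable from interior crops.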