PBR-SR: Mesh PBR Texture Super Resolution from 2D Image Priors
- URL: http://arxiv.org/abs/2506.02846v1
- Date: Tue, 03 Jun 2025 13:15:34 GMT
- Title: PBR-SR: Mesh PBR Texture Super Resolution from 2D Image Priors
- Authors: Yujin Chen, Yinyu Nie, Benjamin Ummenhofer, Reiner Birkl, Michael Paulitsch, Matthias Nießner,
- Abstract summary: PBR-SR is a novel method for physically based rendering (PBR) texture super resolution (SR). It outputs high-resolution, high-quality PBR textures from low-resolution (LR) PBR input in a zero-shot manner.
- Score: 52.28858915766172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present PBR-SR, a novel method for physically based rendering (PBR) texture super resolution (SR). It outputs high-resolution, high-quality PBR textures from low-resolution (LR) PBR input in a zero-shot manner. PBR-SR leverages an off-the-shelf super-resolution model trained on natural images, and iteratively minimizes the deviations between super-resolution priors and differentiable renderings. These enhancements are then back-projected into the PBR map space in a differentiable manner to produce refined, high-resolution textures. To mitigate view inconsistencies and lighting sensitivity, which are common in view-based super-resolution, our method applies 2D prior constraints across multi-view renderings, iteratively refining the shared, upscaled textures. In parallel, we incorporate identity constraints directly in the PBR texture domain to ensure the upscaled textures remain faithful to the LR input. PBR-SR operates without any additional training or data requirements, relying entirely on pretrained image priors. We demonstrate that our approach produces high-fidelity PBR textures for both artist-designed and AI-generated meshes, outperforming both direct application of SR models and prior texture-optimization methods. Our results show high-quality outputs in both PBR and rendering evaluations, supporting advanced applications such as relighting.
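The abstract's optimization scheme can be sketched as a toy loop, with heavy simplifications: the differentiable multi-view renderer is replaced by the identity map, the 2D SR prior is a fixed target array standing in for the super-resolved rendering, and the identity constraint is an L2 loss between the average-pooled HR texture and the LR input. All names and numbers here are hypothetical illustrations, not the paper's implementation.

```python
# Hypothetical 1-D sketch of PBR-SR-style optimization: refine an HR
# texture so it (a) matches an SR prior ("prior loss") and (b) stays
# faithful to the LR input after downsampling ("identity loss").

def downsample(hr):
    """Average-pool by 2: stand-in for projecting the HR texture back to LR."""
    return [(hr[2 * i] + hr[2 * i + 1]) / 2.0 for i in range(len(hr) // 2)]

def optimize(lr_tex, sr_prior, steps=500, step_size=0.1, w_id=1.0):
    # Initialize the HR texture by nearest-neighbour upscaling of the LR input.
    hr = [v for v in lr_tex for _ in range(2)]
    for _ in range(steps):
        # Gradient of the prior loss 0.5 * ||hr - sr_prior||^2.
        g_prior = [h - p for h, p in zip(hr, sr_prior)]
        # Gradient of the identity loss 0.5 * ||downsample(hr) - lr||^2,
        # chained through the average-pooling (factor 0.5 per parent pixel).
        d = downsample(hr)
        g_id = [(d[i // 2] - lr_tex[i // 2]) * 0.5 for i in range(len(hr))]
        hr = [h - step_size * (gp + w_id * gi)
              for h, gp, gi in zip(hr, g_prior, g_id)]
    return hr

lr = [0.2, 0.8]                 # low-resolution input texture (toy)
prior = [0.1, 0.3, 0.7, 0.9]    # hypothetical SR-model prediction
hr = optimize(lr, prior)
```

Because this prior happens to be consistent with the LR input (each pixel pair averages back to its LR value), both losses share a minimizer and the loop converges to the prior; in the real multi-view setting the two constraints trade off against each other.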
Related papers
- PacTure: Efficient PBR Texture Generation on Packed Views with Visual Autoregressive Models [73.4445896872942]
PacTure is a framework for generating physically-based rendering (PBR) material textures from an untextured 3D mesh. We introduce view packing, a novel technique that increases the effective resolution for each view.
arXiv Detail & Related papers (2025-05-28T14:23:30Z)
- IntrinsiX: High-Quality PBR Generation using Image Priors [49.90007540430264]
We introduce IntrinsiX, a novel method that generates high-quality intrinsic images from a text description. In contrast to existing text-to-image models, whose outputs contain baked-in scene lighting, our approach predicts physically-based rendering (PBR) maps.
arXiv Detail & Related papers (2025-04-01T17:47:48Z)
- PBR3DGen: A VLM-guided Mesh Generation with High-quality PBR Texture [9.265778497001843]
We present PBR3DGen, a two-stage mesh generation method with high-quality PBR materials. We leverage vision-language models (VLMs) to guide multi-view diffusion, precisely capturing the spatial distribution and inherent attributes of reflective and metalness material properties. Our reconstruction model then reconstructs high-quality meshes with PBR materials.
arXiv Detail & Related papers (2025-03-14T13:11:19Z)
- Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering [47.78392889256976]
Paint-it is a text-driven high-fidelity texture map synthesis method for 3D rendering.
Paint-it synthesizes texture maps from a text description by synthesis-through-optimization, exploiting Score Distillation Sampling (SDS).
We show that DC-PBR inherently schedules the optimization curriculum according to texture frequency and naturally filters out the noisy signals from SDS.
arXiv Detail & Related papers (2023-12-18T17:17:08Z)
- Blind Image Super-resolution with Rich Texture-Aware Codebooks [12.608418657067947]
Blind super-resolution (BSR) methods based on high-resolution (HR) reconstruction codebooks have achieved promising results in recent years.
We find that a codebook based on HR reconstruction may not effectively capture the complex correlations between low-resolution (LR) and HR images.
We propose the Rich Texture-aware Codebook-based Network (RTCNet), which consists of the Degradation-robust Texture Prior Module (DTPM) and the Patch-aware Texture Prior Module (PTPM).
arXiv Detail & Related papers (2023-10-26T07:00:18Z)
- Learning Texture Transformer Network for Image Super-Resolution [47.86443447491344]
We propose a Texture Transformer Network for Image Super-Resolution (TTSR).
TTSR consists of four closely-related modules optimized for image generation tasks.
TTSR achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2020-06-07T12:55:34Z)
- PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
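PULSE's core idea, searching a generator's latent space for a code whose output downscales to the LR input, can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: the "generator" is a fixed linear map on a 2-D latent code, and the search is plain gradient descent with numerical gradients, rather than the paper's GAN-based, sphere-constrained exploration.

```python
# Toy PULSE-style latent search: find z such that downscale(generate(z))
# matches the low-resolution input, then read off the HR result generate(z).

def generate(z):
    """Stand-in generator: maps a 2-D latent code to a 4-pixel 'image'."""
    return [z[0], 0.5 * z[0] + 0.5 * z[1], 0.5 * z[0] + 0.5 * z[1], z[1]]

def downscale(img):
    """Average-pool the 4-pixel image down to 2 pixels."""
    return [(img[0] + img[1]) / 2.0, (img[2] + img[3]) / 2.0]

def loss(z, lr):
    d = downscale(generate(z))
    return sum((a - b) ** 2 for a, b in zip(d, lr))

def pulse_search(lr, steps=2000, step=0.2, eps=1e-4):
    z = [0.0, 0.0]
    for _ in range(steps):
        # Central-difference numerical gradient of the downscaling loss.
        grads = []
        for i in range(len(z)):
            zp, zm = list(z), list(z)
            zp[i] += eps
            zm[i] -= eps
            grads.append((loss(zp, lr) - loss(zm, lr)) / (2 * eps))
        z = [zi - step * g for zi, g in zip(z, grads)]
    return z

lr = [0.25, 0.75]        # toy low-resolution input
z = pulse_search(lr)
hr = generate(z)         # high-resolution output consistent with lr
```

The design point this mirrors is PULSE's self-supervision: no paired HR/LR training data is needed, only the consistency requirement that the generated image, once downscaled, reproduces the input.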
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.