MatE: Material Extraction from Single-Image via Geometric Prior
- URL: http://arxiv.org/abs/2512.18312v1
- Date: Sat, 20 Dec 2025 10:53:49 GMT
- Title: MatE: Material Extraction from Single-Image via Geometric Prior
- Authors: Zeyu Zhang, Wei Zhai, Jian Yang, Yang Cao
- Abstract summary: MatE is a novel method for generating tileable PBR materials from a single image taken under unconstrained, real-world conditions. We demonstrate the efficacy and robustness of our approach, enabling users to create realistic materials from real-world images.
- Score: 36.8533172704247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The creation of high-fidelity, physically-based rendering (PBR) materials remains a bottleneck in many graphics pipelines, typically requiring specialized equipment and expert-driven post-processing. To democratize this process, we present MatE, a novel method for generating tileable PBR materials from a single image taken under unconstrained, real-world conditions. Given an image and a user-provided mask, MatE first performs coarse rectification using an estimated depth map as a geometric prior, and then employs a dual-branch diffusion model. Leveraging a learned consistency from rotation-aligned and scale-aligned training data, this model further rectifies residual distortions in the coarse result and translates it into a complete set of material maps, including albedo, normal, roughness, and height. Our framework achieves invariance to the unknown illumination and perspective of the input image, allowing for the recovery of intrinsic material properties from casual captures. Through comprehensive experiments on both synthetic and real-world data, we demonstrate the efficacy and robustness of our approach, enabling users to create realistic materials from real-world images.
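The abstract outlines a two-stage pipeline: depth-guided coarse rectification of the masked region, followed by a dual-branch diffusion model that removes residual distortion and emits the PBR maps. The sketch below illustrates only that structure; the camera intrinsics, the `estimate_depth` stub, and the `dual_branch_diffusion` stub are hypothetical stand-ins, not the paper's actual components.

```python
# Minimal, hypothetical sketch of a MatE-style pipeline:
# (1) coarse rectification using an estimated depth map as a geometric prior,
# (2) a dual-branch model that would refine the result and emit PBR maps.
import numpy as np
import cv2  # OpenCV, used for Rodrigues and perspective warping


def estimate_depth(image: np.ndarray) -> np.ndarray:
    """Placeholder monocular depth estimator (any off-the-shelf network could be used).
    Returns a gently tilted plane so the rectification below has something to align."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    return 2.0 + 0.002 * ys + 0.001 * xs


def coarse_rectify(image: np.ndarray, mask: np.ndarray, depth: np.ndarray,
                   fx: float = 600.0, fy: float = 600.0) -> np.ndarray:
    """Fit a plane to the masked depth and warp the image so that plane is fronto-parallel."""
    h, w = image.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])

    # Back-project masked pixels with the estimated depth, then fit z = a*X + b*Y + c.
    ys, xs = np.nonzero(mask)
    z = depth[ys, xs]
    X, Y = (xs - cx) / fx * z, (ys - cy) / fy * z
    a, b, _ = np.linalg.lstsq(np.stack([X, Y, np.ones_like(z)], axis=1), z, rcond=None)[0]

    # Plane normal, oriented towards the camera (negative z in camera coordinates).
    n = np.array([a, b, -1.0])
    n /= np.linalg.norm(n)
    if n[2] > 0:
        n = -n

    # Rotation aligning the normal with the optical axis; H = K R K^-1 is the induced
    # image-to-image homography for a pure (virtual) camera rotation.
    target = np.array([0.0, 0.0, -1.0])
    axis = np.cross(n, target)
    angle = np.arccos(np.clip(np.dot(n, target), -1.0, 1.0))
    if np.linalg.norm(axis) < 1e-8:
        R = np.eye(3)
    else:
        R, _ = cv2.Rodrigues((axis / np.linalg.norm(axis) * angle).reshape(3, 1))
    H = K @ R @ np.linalg.inv(K)

    # Warp the photo and the mask, then crop to the warped mask's bounding box.
    warped = cv2.warpPerspective(image, H, (w, h))
    warped_mask = cv2.warpPerspective(mask, H, (w, h))
    ys2, xs2 = np.nonzero(warped_mask)
    return warped[ys2.min():ys2.max() + 1, xs2.min():xs2.max() + 1]


def dual_branch_diffusion(rectified: np.ndarray) -> dict:
    """Stub for the learned dual-branch model: one branch would remove residual
    distortion, the other translate the result into PBR maps. Dummy outputs only."""
    h, w = rectified.shape[:2]
    flat_normal = np.dstack([np.zeros((h, w)), np.zeros((h, w)), np.ones((h, w))])
    return {
        "albedo": rectified.astype(np.float32) / 255.0,
        "normal": flat_normal.astype(np.float32),
        "roughness": np.full((h, w), 0.5, dtype=np.float32),
        "height": np.zeros((h, w), dtype=np.float32),
    }


if __name__ == "__main__":
    image = np.full((480, 640, 3), 180, dtype=np.uint8)  # stand-in for a casual photo
    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[100:400, 150:500] = 1                            # user-provided material region
    maps = dual_branch_diffusion(coarse_rectify(image, mask, estimate_depth(image)))
    print({k: v.shape for k, v in maps.items()})
```

The homography H = K R K^-1 only undoes the estimated plane tilt; the remaining rotation and scale ambiguity is what the abstract says the learned dual-branch model handles, which the stub above merely mimics.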
Related papers
- Image2Garment: Simulation-ready Garment Generation from a Single Image [52.37273643091814]
We propose a vision-language model to infer material composition and fabric attributes from real images. We then train a lightweight predictor that maps these attributes to the corresponding physical fabric parameters. Experiments show that our estimator achieves superior accuracy in material composition estimation and fabric attribute prediction.
arXiv Detail & Related papers (2026-01-14T17:47:33Z) - MaterialRefGS: Reflective Gaussian Splatting with Multi-view Consistent Material Inference [83.38607296779423]
We show that multi-view consistent material inference with more physically-based environment modeling is key to learning accurate reflections with Gaussian Splatting. Our method faithfully recovers both illumination and geometry, achieving state-of-the-art rendering quality in novel view synthesis.
arXiv Detail & Related papers (2025-10-13T13:29:20Z) - Chord: Chain of Rendering Decomposition for PBR Material Estimation from Generated Texture Images [10.46170854352924]
We propose a novel two-stage generate-and-estimate framework for PBR material generation. In the generation stage, a fine-tuned diffusion model synthesizes shaded, tileable texture images aligned with user input. In the estimation stage, we introduce a chained decomposition scheme that sequentially predicts SVBRDF channels by passing previously extracted representations as input into a single-step image-conditional diffusion model (a minimal sketch of this chained conditioning appears after this list).
arXiv Detail & Related papers (2025-09-12T04:03:07Z) - MatDecompSDF: High-Fidelity 3D Shape and PBR Material Decomposition from Multi-View Images [20.219010684946888]
MatDecompSDF is a framework for recovering high-fidelity 3D shapes and decomposing their physically-based material properties from multi-view images. Our method produces editable and relightable assets that can be seamlessly integrated into standard graphics pipelines.
arXiv Detail & Related papers (2025-07-07T08:22:32Z) - MatCLIP: Light- and Shape-Insensitive Assignment of PBR Material Models [42.42328559042189]
MatCLIP is a novel method that extracts shape- and lighting-insensitive descriptors of PBR materials to assign plausible textures to 3D objects based on images. By extending an Alpha-CLIP-based model on material renderings across diverse shapes and lighting, our approach generates descriptors that bridge the domains of PBR representations with photographs or renderings. MatCLIP achieves a top-1 classification accuracy of 76.6%, outperforming state-of-the-art methods such as PhotoShape and MatAtlas.
arXiv Detail & Related papers (2025-01-27T12:08:52Z) - Materialist: Physically Based Editing Using Single-Image Inverse Rendering [47.85234717907478]
Materialist is a method combining a learning-based approach with physically based progressive differentiable rendering. Our approach enables a range of applications, including material editing, object insertion, and relighting. Experiments demonstrate strong performance across synthetic and real-world datasets.
arXiv Detail & Related papers (2025-01-07T11:52:01Z) - IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations [64.07859467542664]
Capturing geometric and material information from images remains a fundamental challenge in computer vision and graphics. Traditional optimization-based methods often require hours of computational time to reconstruct geometry, material properties, and environmental lighting from dense multi-view inputs. We introduce IDArb, a diffusion-based model designed to perform intrinsic decomposition on an arbitrary number of images under varying illuminations.
arXiv Detail & Related papers (2024-12-16T18:52:56Z) - IntrinsicAnything: Learning Diffusion Priors for Inverse Rendering Under Unknown Illumination [37.96484120807323]
This paper aims to recover object materials from posed images captured under an unknown static lighting condition.
We learn the material prior with a generative model for regularizing the optimization process.
Experiments on real-world and synthetic datasets demonstrate that our approach achieves state-of-the-art performance on material recovery.
arXiv Detail & Related papers (2024-04-17T17:45:08Z) - Intrinsic Image Diffusion for Indoor Single-view Material Estimation [55.276815106443976]
We present Intrinsic Image Diffusion, a generative model for appearance decomposition of indoor scenes.
Given a single input view, we sample multiple possible material explanations represented as albedo, roughness, and metallic maps.
Our method produces significantly sharper, more consistent, and more detailed materials, outperforming state-of-the-art methods by 1.5 dB in PSNR and by a 45% better FID score on albedo prediction.
arXiv Detail & Related papers (2023-12-19T15:56:19Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports photorealistic effects such as specular reflections by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - MaterialGAN: Reflectance Capture using a Generative SVBRDF Model [33.578080406338266]
We present MaterialGAN, a deep generative convolutional network based on StyleGAN2.
We show that MaterialGAN can be used as a powerful material prior in an inverse rendering framework.
We demonstrate this framework on the task of reconstructing SVBRDFs from images captured under flash illumination using a hand-held mobile phone.
arXiv Detail & Related papers (2020-09-30T21:33:00Z)
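The Chord entry above describes a chained decomposition in which SVBRDF channels are predicted one after another by a single-step, image-conditional diffusion model that also receives the channels extracted so far. Below is a minimal sketch of that conditioning pattern only; the channel order and the stub predictor are hypothetical, not the Chord authors' code.

```python
# Hypothetical sketch of chained SVBRDF decomposition: each channel is predicted
# by a single-step, image-conditional model that also sees earlier predictions.
import numpy as np


def single_step_predictor(texture: np.ndarray, prior_channels: list, name: str) -> np.ndarray:
    """Stub for a single-step, image-conditional diffusion model. A real model would
    denoise one channel in a single step, conditioned on the texture and the maps
    predicted so far; here the output only depends trivially on both."""
    base = texture.mean(axis=-1) / 255.0
    bias = 0.1 * len(prior_channels)          # crude stand-in for the chained conditioning
    return np.clip(base + bias, 0.0, 1.0).astype(np.float32)


def chained_decomposition(texture: np.ndarray,
                          order=("albedo", "normal", "roughness", "metallic")) -> dict:
    """Predict SVBRDF channels sequentially, feeding earlier predictions into later steps."""
    predicted = {}
    for name in order:
        predicted[name] = single_step_predictor(texture, list(predicted.values()), name)
    return predicted


if __name__ == "__main__":
    texture = np.random.default_rng(0).integers(0, 256, (256, 256, 3), dtype=np.uint8)
    maps = chained_decomposition(texture)
    print({k: (v.shape, float(v.mean())) for k, v in maps.items()})
```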