Intrinsic Image Fusion for Multi-View 3D Material Reconstruction
- URL: http://arxiv.org/abs/2512.13157v1
- Date: Mon, 15 Dec 2025 10:05:59 GMT
- Title: Intrinsic Image Fusion for Multi-View 3D Material Reconstruction
- Authors: Peter Kocsis, Lukas Höllein, Matthias Nießner
- Abstract summary: We introduce Intrinsic Image Fusion, a method that reconstructs high-quality physically based materials from multi-view images. Our results outperform state-of-the-art methods in material disentanglement on both synthetic and real scenes.
- Score: 49.43509537480623
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce Intrinsic Image Fusion, a method that reconstructs high-quality physically based materials from multi-view images. Material reconstruction is highly underconstrained and typically relies on analysis-by-synthesis, which requires expensive and noisy path tracing. To better constrain the optimization, we incorporate single-view priors into the reconstruction process. We leverage a diffusion-based material estimator that produces multiple, but often inconsistent, candidate decompositions per view. To reduce the inconsistency, we fit an explicit low-dimensional parametric function to the predictions. We then propose a robust optimization framework that uses soft per-view prediction selection together with a confidence-based soft multi-view inlier set to fuse the most consistent predictions of the most confident views into a consistent parametric material space. Finally, we use inverse path tracing to optimize the low-dimensional parameters. Our results outperform state-of-the-art methods in material disentanglement on both synthetic and real scenes, producing sharp and clean reconstructions suitable for high-quality relighting.
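The fusion step lends itself to a short illustration. Below is a minimal numpy sketch of the general idea behind soft per-view prediction selection combined with a confidence-based soft inlier set over views. The array shapes, temperature `tau`, and residual definition are assumptions made for illustration; the actual method fits a low-dimensional parametric material function and refines it with inverse path tracing rather than averaging raw predictions.

```python
import numpy as np

def fuse_candidates(candidates, residuals, tau=0.1):
    """Fuse K candidate material decompositions from each of V views.

    candidates: (V, K, D) per-view candidate parameters
                (a hypothetical flattened material parameter space).
    residuals:  (V, K) disagreement of each candidate with the current
                multi-view consensus (lower means more consistent).
    Returns a (D,) fused parameter vector.
    """
    # Soft per-view prediction selection: a softmax over the K
    # candidates of each view, favouring consistent candidates.
    w_pred = np.exp(-residuals / tau)
    w_pred /= w_pred.sum(axis=1, keepdims=True)               # (V, K)
    per_view = (w_pred[..., None] * candidates).sum(axis=1)   # (V, D)

    # Confidence-based soft inlier set: views whose best candidate
    # still disagrees strongly are down-weighted, not hard-rejected.
    conf = np.exp(-residuals.min(axis=1) / tau)               # (V,)
    conf /= conf.sum()
    return (conf[:, None] * per_view).sum(axis=0)             # (D,)
```

Soft weighting rather than hard selection keeps the objective smooth, so the residuals and the fused estimate can be updated alternately without the optimization getting stuck on early outlier choices.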
Related papers
- MatMart: Material Reconstruction of 3D Objects via Diffusion [36.79338202811421]
MatMart achieves superior performance in material reconstruction compared to existing methods. It achieves both material prediction and generation capabilities through end-to-end optimization of a single diffusion model.
arXiv Detail & Related papers (2025-11-24T08:58:14Z)
- MaterialRefGS: Reflective Gaussian Splatting with Multi-view Consistent Material Inference [83.38607296779423]
We show that multi-view consistent material inference with more physically based environment modeling is key to learning accurate reflections with Gaussian Splatting. Our method faithfully recovers both illumination and geometry, achieving state-of-the-art rendering quality in novel view synthesis.
arXiv Detail & Related papers (2025-10-13T13:29:20Z)
- GS-2M: Gaussian Splatting for Joint Mesh Reconstruction and Material Decomposition [0.0]
We propose a unified solution for mesh reconstruction and material decomposition from multi-view images based on 3D Gaussian Splatting. Previous works handle these tasks separately and struggle to reconstruct highly reflective surfaces. Our method addresses these two problems by jointly optimizing attributes relevant to the quality of rendered depth and normals.
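To make the joint objective concrete, here is a minimal PyTorch sketch of a loss that couples appearance with rendered depth and normal quality. The specific terms and weights are illustrative assumptions, since the summary does not state exactly which attributes GS-2M optimizes:

```python
import torch

def joint_loss(render, target, w_depth=0.1, w_normal=0.05):
    """Hypothetical joint objective coupling appearance, depth, and
    normals so geometry and material attributes improve together.

    render/target: dicts with 'rgb' (H, W, 3), 'depth' (H, W), and
    unit-length 'normal' (H, W, 3) tensors.
    """
    l_rgb = (render['rgb'] - target['rgb']).abs().mean()
    l_depth = (render['depth'] - target['depth']).abs().mean()
    # Penalize angular deviation between rendered and target normals.
    cos = (render['normal'] * target['normal']).sum(dim=-1)
    l_normal = (1.0 - cos).mean()
    return l_rgb + w_depth * l_depth + w_normal * l_normal
```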
arXiv Detail & Related papers (2025-09-26T12:43:33Z)
- RelitLRM: Generative Relightable Radiance for Large Reconstruction Models [52.672706620003765]
We propose RelitLRM for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations.
Unlike prior inverse rendering methods requiring dense captures and slow optimization, RelitLRM adopts a feed-forward transformer-based model.
We show that our sparse-view, feed-forward RelitLRM offers relighting results competitive with state-of-the-art dense-view optimization-based baselines.
arXiv Detail & Related papers (2024-10-08T17:40:01Z)
- Inverse Rendering using Multi-Bounce Path Tracing and Reservoir Sampling [17.435649250309904]
We present MIRReS, a novel two-stage inverse rendering framework. Our method extracts explicit geometry (a triangular mesh) in stage one and introduces a more realistic physically based inverse rendering model, effectively estimating indirect illumination, including self-shadowing and internal reflections.
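Reservoir sampling, named in the title, is the building block that lets such renderers reuse light samples cheaply. Below is a generic single-sample weighted reservoir in Python, the standard resampled-importance-sampling update rather than MIRReS's exact estimator:

```python
import random

class Reservoir:
    """Single-sample weighted reservoir: after streaming N candidates,
    each one is retained with probability proportional to its weight."""
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0

    def update(self, candidate, weight):
        self.w_sum += weight
        # Replace the kept sample with probability weight / w_sum.
        if self.w_sum > 0.0 and random.random() < weight / self.w_sum:
            self.sample = candidate
```

Streaming `(sample, weight)` pairs of candidate light paths through `update` leaves behind a sample drawn proportionally to its weight, without ever storing the full candidate set.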
arXiv Detail & Related papers (2024-06-24T07:00:57Z)
- IntrinsicAnything: Learning Diffusion Priors for Inverse Rendering Under Unknown Illumination [37.96484120807323]
This paper aims to recover object materials from posed images captured under an unknown static lighting condition.
We learn a material prior with a generative model to regularize the optimization process.
Experiments on real-world and synthetic datasets demonstrate that our approach achieves state-of-the-art performance on material recovery.
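The regularization pattern can be sketched in a few lines: fit the rendering to the observations while a learned generative prior keeps the material estimate plausible. Everything here is hypothetical scaffolding (`render_fn` and `prior_logp` stand in for the renderer and for any differentiable log-likelihood or score supplied by the generative model):

```python
import torch

def regularized_loss(material, render_fn, target, prior_logp, lam=0.01):
    """Hypothetical objective: photometric fit plus a generative-prior
    term that penalizes implausible material estimates."""
    photometric = (render_fn(material) - target).abs().mean()
    return photometric - lam * prior_logp(material)
```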
arXiv Detail & Related papers (2024-04-17T17:45:08Z)
- FOUND: Foot Optimization with Uncertain Normals for Surface Deformation Using Synthetic Data [27.53648027412686]
We seek to develop a method for few-view reconstruction of the human foot.
To solve this task, we must extract rich geometric cues from RGB images, before carefully fusing them into a final 3D object.
We show that our normal predictor significantly outperforms all off-the-shelf equivalents on real images.
arXiv Detail & Related papers (2023-10-27T17:11:07Z)
- Paired Image-to-Image Translation Quality Assessment Using Multi-Method Fusion [0.0]
This paper proposes a novel approach that combines image-quality signals between a paired source image and its transformation to predict the latter's similarity to a hypothetical ground truth.
We trained a Multi-Method Fusion (MMF) model via an ensemble of gradient-boosted regressors to predict Deep Image Structure and Texture Similarity (DISTS).
Analysis revealed the task to be feature-constrained, introducing a trade-off at inference between metric computation time and prediction accuracy.
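A minimal sketch of the regression setup, assuming scikit-learn and placeholder data: each row of the feature matrix would hold full-reference quality metrics computed between a source image and its translation, and the regressor is trained to predict the DISTS score. Feature count and hyperparameters here are illustrative, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X_train = rng.random((200, 5))   # placeholder: 5 quality metrics per image pair
y_train = rng.random(200)        # placeholder: DISTS targets

# One gradient-boosted regressor; the paper fuses an ensemble of them.
model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
model.fit(X_train, y_train)
dists_pred = model.predict(rng.random((10, 5)))
```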
arXiv Detail & Related papers (2022-05-09T11:05:15Z)
- IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images [52.021529273866896]
We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content.
Our method adopts neural representations for geometry, as signed distance fields (SDFs), and for materials during optimization, exploiting their flexibility and compactness.
We show that IRON achieves significantly better inverse rendering quality than prior works.
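The geometry half of such a pipeline is an MLP mapping 3D points to signed distances, with surface normals available as the analytic gradient. A tiny PyTorch stand-in follows; layer sizes and activations are illustrative, not IRON's architecture:

```python
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """Tiny MLP mapping a 3D point to a signed distance."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):           # x: (N, 3) points
        return self.net(x)          # (N, 1) signed distances

# Surface normals come for free as the gradient of the SDF.
sdf = SDFNet()
p = torch.randn(8, 3, requires_grad=True)
d = sdf(p)
normals = torch.autograd.grad(d.sum(), p, create_graph=True)[0]
```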
arXiv Detail & Related papers (2022-04-05T14:14:18Z)
- Differentiable Rendering with Perturbed Optimizers [85.66675707599782]
Reasoning about 3D scenes from their 2D image projections is one of the core problems in computer vision.
Our work highlights the link between some well-known differentiable formulations and randomly smoothed renderings.
We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction.
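The random smoothing can be made concrete with a score-function estimator: a non-differentiable renderer f becomes differentiable in expectation once its parameters are perturbed with Gaussian noise. A generic numpy sketch of that identity, not the paper's exact perturbed-optimizer formulation:

```python
import numpy as np

def smoothed_grad(f, theta, sigma=0.1, n_samples=256, rng=None):
    """Monte Carlo gradient of the smoothed objective
    F(theta) = E_z[f(theta + sigma * z)], z ~ N(0, I),
    using grad F = E[f(theta + sigma * z) * z] / sigma.
    Works even when f itself (e.g. a hard rasterizer) has no gradient.
    """
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta, dtype=float)
    z = rng.standard_normal((n_samples,) + theta.shape)
    fz = np.array([f(theta + sigma * zi) for zi in z])
    fz = fz - fz.mean()  # baseline subtraction reduces variance
    return (fz.reshape(-1, *([1] * theta.ndim)) * z).mean(axis=0) / sigma
```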
arXiv Detail & Related papers (2021-10-18T08:56:23Z)
- Riggable 3D Face Reconstruction via In-Network Optimization [58.016067611038046]
This paper presents a method for riggable 3D face reconstruction from monocular images.
It jointly estimates a personalized face rig and per-image parameters including expressions, poses, and illuminations.
Experiments demonstrate that our method achieves state-of-the-art reconstruction accuracy, reasonable robustness, and generalization ability.
arXiv Detail & Related papers (2021-04-08T03:53:20Z)