SIR: Multi-view Inverse Rendering with Decomposable Shadow for Indoor Scenes
- URL: http://arxiv.org/abs/2402.06136v3
- Date: Tue, 9 Apr 2024 02:38:16 GMT
- Title: SIR: Multi-view Inverse Rendering with Decomposable Shadow for Indoor Scenes
- Authors: Xiaokang Wei, Zhuoman Liu, Yan Luximon
- Abstract summary: We propose SIR, an efficient method to decompose differentiable shadows for inverse rendering on indoor scenes using multi-view data.
SIR explicitly learns shadows for enhanced realism in material estimation under unknown light positions.
The significant decomposing ability of SIR enables sophisticated editing capabilities like free-view relighting, object insertion, and material replacement.
- Score: 0.88756501225368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose SIR, an efficient method to decompose differentiable shadows for inverse rendering on indoor scenes using multi-view data, addressing the challenges in accurately decomposing the materials and lighting conditions. Unlike previous methods that struggle with shadow fidelity in complex lighting environments, our approach explicitly learns shadows for enhanced realism in material estimation under unknown light positions. Utilizing posed HDR images as input, SIR employs an SDF-based neural radiance field for comprehensive scene representation. Then, SIR integrates a shadow term with a three-stage material estimation approach to improve SVBRDF quality. Specifically, SIR is designed to learn a differentiable shadow, complemented by BRDF regularization, to optimize inverse rendering accuracy. Extensive experiments on both synthetic and real-world indoor scenes demonstrate the superior performance of SIR over existing methods in both quantitative metrics and qualitative analysis. The significant decomposing ability of SIR enables sophisticated editing capabilities like free-view relighting, object insertion, and material replacement. The code and data are available at https://xiaokangwei.github.io/SIR/.
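To make the decomposition concrete, here is a minimal illustrative sketch (not the authors' implementation: the 8x8 patch, the Lambertian-only shading, the single distant light, and the per-pixel sigmoid shadow are all assumptions) of the core mechanism the abstract describes, an explicitly learned, differentiable shadow term optimized jointly with material parameters:

```python
# Minimal sketch: jointly optimize material and an explicit, differentiable
# shadow term (illustrative assumptions throughout: Lambertian shading only,
# one distant light, an 8x8 pixel patch).
import torch

H, W = 8, 8
albedo = torch.full((H, W, 3), 0.5, requires_grad=True)  # diffuse albedo
shadow_logit = torch.zeros(H, W, requires_grad=True)     # pre-sigmoid shadow

# Fixed inputs that a full pipeline would estimate (e.g., normals from an
# SDF-based radiance field, as in the abstract above).
normals = torch.nn.functional.normalize(torch.randn(H, W, 3), dim=-1)
light_dir = torch.nn.functional.normalize(torch.tensor([0.3, 0.8, 0.5]), dim=0)
light_rgb = torch.tensor([1.0, 1.0, 1.0])
target = torch.rand(H, W, 3)  # placeholder for observed HDR pixel values

opt = torch.optim.Adam([albedo, shadow_logit], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    cos = (normals * light_dir).sum(-1).clamp(min=0.0)  # n . l
    shadow = torch.sigmoid(shadow_logit)                # kept in (0, 1)
    # The shadow multiplies the direct term, so occlusion-induced darkening
    # is absorbed here instead of being baked into the albedo.
    pred = shadow[..., None] * cos[..., None] * albedo * light_rgb
    loss = torch.nn.functional.mse_loss(pred, target)
    loss.backward()
    opt.step()
```

In SIR the shadow is learned within a three-stage material estimation and constrained by BRDF regularization; the toy above only illustrates how gradients route through an explicit shadow term rather than into the recovered material.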
Related papers
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z)
- LumiGauss: High-Fidelity Outdoor Relighting with 2D Gaussian Splatting [15.11759492990967]
We introduce LumiGauss, a technique that tackles 3D reconstruction of scenes and environmental lighting through 2D Gaussian Splatting.
Our approach yields high-quality scene reconstructions and enables realistic lighting synthesis under novel environment maps.
arXiv Detail & Related papers (2024-08-06T23:41:57Z)
- SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes [51.50157919750782]
We present SIRe-IR, an implicit neural inverse rendering approach that decomposes the scene into an environment map, albedo, and roughness.
By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to remove both shadows and indirect illumination.
Even in the presence of intense illumination, our method recovers high-quality albedo and roughness with no shadow interference.
arXiv Detail & Related papers (2023-10-19T10:44:23Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
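The neural-field/explicit-mesh split described in the entry above can be illustrated with a toy secondary-ray shadow test: a (stand-in) primary-ray hit is lit only if a ray cast toward the light misses the explicit geometry, here reduced to a single triangle intersected with Möller-Trumbore. The scene setup and all names are assumptions for illustration, not the paper's code.

```python
# Toy illustration: explicit-geometry shadow rays for a primary hit point.
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-8):
    """Möller-Trumbore ray/triangle intersection; returns hit distance or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    t_vec = orig - v0
    u = (t_vec @ p) / det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(t_vec, e1)
    v = (d @ q) / det
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) / det
    return t if t > eps else None

# Stand-in primary-ray hit; a full system would get this from the neural field.
hit_point = np.array([0.0, 0.0, 0.0])
normal = np.array([0.0, 1.0, 0.0])
albedo = np.array([0.7, 0.6, 0.5])
light_pos = np.array([0.0, 3.0, 0.0])

# One occluding triangle standing in for the reconstructed explicit mesh.
v0, v1, v2 = (np.array([-1.0, 1.5, -1.0]),
              np.array([1.0, 1.5, -1.0]),
              np.array([0.0, 1.5, 1.5]))

to_light = light_pos - hit_point
dist = np.linalg.norm(to_light)
wi = to_light / dist
# Offset the ray origin along the normal to avoid self-intersection.
t = ray_triangle(hit_point + 1e-4 * normal, wi, v0, v1, v2)
shadowed = t is not None and t < dist
color = np.zeros(3) if shadowed else albedo * max(normal @ wi, 0.0)
print("shadowed:", shadowed, "color:", color)
```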
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z)
- NeILF: Neural Incident Light Field for Physically-based Material Estimation [31.230609753253713]
We present a differentiable rendering framework for material and lighting estimation from multi-view images and a reconstructed geometry.
In the framework, we represent scene lighting as the Neural Incident Light Field (NeILF) and material properties as a surface BRDF modelled by multi-layer perceptrons.
arXiv Detail & Related papers (2022-03-14T15:23:04Z)
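As a toy rendition of the incident-light-field idea in the NeILF entry above (a sketch under assumptions, not the paper's code: the MLP width, uniform hemisphere sampling, and a fixed Lambertian BRDF standing in for NeILF's learned BRDF), an MLP maps a surface point and incident direction to incident radiance, and reflected radiance is a Monte Carlo estimate over the hemisphere:

```python
# Illustrative incident-light-field sketch: an MLP L(x, w_i) -> radiance,
# integrated against a Lambertian BRDF by uniform hemisphere sampling.
import torch

class IncidentLightField(torch.nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(6, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3), torch.nn.Softplus(),  # radiance >= 0
        )

    def forward(self, x, wi):
        # x: (N, 3) surface points, wi: (N, 3) unit incident directions
        return self.net(torch.cat([x, wi], dim=-1))

def shade(light_field, x, normal, albedo, n_samples=64):
    # Uniform hemisphere sampling about the normal (a real system would
    # importance-sample; uniform keeps the estimator obvious).
    d = torch.nn.functional.normalize(torch.randn(n_samples, 3), dim=-1)
    d = torch.where((d @ normal).unsqueeze(-1) < 0, -d, d)  # flip below-horizon
    cos = (d @ normal).clamp(min=0.0)
    li = light_field(x.expand(n_samples, 3), d)
    # Lambertian BRDF = albedo / pi; uniform hemisphere pdf = 1 / (2 pi),
    # so the Monte Carlo estimate is 2 * albedo * mean(L_i * cos).
    return 2.0 * albedo * (li * cos.unsqueeze(-1)).mean(dim=0)

lf = IncidentLightField()
out = shade(lf, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]),
            torch.tensor([0.8, 0.8, 0.8]))
print(out)  # (3,) reflected RGB radiance at the point
```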
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer that supports complex lighting effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive shading model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.