Factorized Inverse Path Tracing for Efficient and Accurate
Material-Lighting Estimation
- URL: http://arxiv.org/abs/2304.05669v2
- Date: Wed, 23 Aug 2023 20:52:27 GMT
- Authors: Liwen Wu, Rui Zhu, Mustafa B. Yaldiz, Yinhao Zhu, Hong Cai, Janarbek
Matai, Fatih Porikli, Tzu-Mao Li, Manmohan Chandraker, Ravi Ramamoorthi
- Abstract summary: Inverse path tracing is expensive to compute, and ambiguities exist between reflection and emission.
Our Factorized Inverse Path Tracing (FIPT) addresses these challenges by using a factored light transport formulation.
Our algorithm enables accurate material and lighting optimization faster than previous work, and is more effective at resolving ambiguities.
- Score: 97.0195314255101
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inverse path tracing has recently been applied to joint material and lighting
estimation, given geometry and multi-view HDR observations of an indoor scene.
However, it has two major limitations: path tracing is expensive to compute,
and ambiguities exist between reflection and emission. Our Factorized Inverse
Path Tracing (FIPT) addresses these challenges by using a factored light
transport formulation and finds emitters driven by rendering errors. Our
algorithm enables accurate material and lighting optimization faster than
previous work, and is more effective at resolving ambiguities. The exhaustive
experiments on synthetic scenes show that our method (1) outperforms
state-of-the-art indoor inverse rendering and relighting methods particularly
in the presence of complex illumination effects; (2) speeds up inverse path
tracing optimization to less than an hour. We further demonstrate robustness to
noisy inputs through material and lighting estimates that allow plausible
relighting in a real scene. The source code is available at:
https://github.com/lwwu2/fipt
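The reflection-emission ambiguity described in the abstract can be illustrated with a minimal toy sketch (a hypothetical model for intuition only, not the paper's factored light transport): outgoing radiance is modeled as albedo × irradiance + emission, and plain gradient descent can recover both parameters, but only when observations under more than one lighting condition are available.

```python
def render(albedo, emission, irradiance):
    # Toy forward model: outgoing radiance = reflected + emitted light.
    return albedo * irradiance + emission

def fit(observations, irradiances, steps=2000, lr=0.05):
    # Recover albedo and emission by gradient descent on squared error.
    albedo, emission = 0.5, 0.5  # arbitrary initial guess
    n = len(observations)
    for _ in range(steps):
        g_albedo = g_emission = 0.0
        for obs, irr in zip(observations, irradiances):
            resid = render(albedo, emission, irr) - obs
            g_albedo += 2.0 * resid * irr   # d(resid^2)/d(albedo)
            g_emission += 2.0 * resid       # d(resid^2)/d(emission)
        albedo -= lr * g_albedo / n
        emission -= lr * g_emission / n
    return albedo, emission

# A single observation cannot separate reflection from emission
# (albedo * 1.0 + emission = 0.9 has infinitely many solutions);
# observing under two lighting conditions makes the fit well-posed.
irradiances = [1.0, 3.0]
observations = [render(0.7, 0.2, irr) for irr in irradiances]
albedo, emission = fit(observations, irradiances)
```

In this toy setup the optimizer converges to the true (albedo, emission) pair; FIPT's contribution is resolving this kind of ambiguity at scene scale, with full path-traced light transport rather than a scalar model.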
Related papers
- MIRReS: Multi-bounce Inverse Rendering using Reservoir Sampling [17.435649250309904]
We present MIRReS, a novel two-stage inverse rendering framework.
Our method extracts an explicit geometry (triangular mesh) in stage one, and introduces a more realistic physically-based inverse rendering model.
Our method effectively estimates indirect illumination, including self-shadowing and internal reflections.
arXiv Detail & Related papers (2024-06-24T07:00:57Z)
- SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes [51.50157919750782]
We present SIRe-IR, an inverse rendering approach based on implicit neural representations that decomposes the scene into an environment map, albedo, and roughness.
By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to remove both shadows and indirect illumination.
Even in the presence of intense illumination, our method recovers high-quality albedo and roughness with no shadow interference.
arXiv Detail & Related papers (2023-10-19T10:44:23Z)
- Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering [63.24476194987721]
Inverse rendering, the process of inferring scene properties from images, is a challenging inverse problem.
Most existing solutions incorporate priors into the inverse-rendering pipeline to encourage plausible solutions.
We propose a novel scheme that integrates a denoising probabilistic diffusion model pre-trained on natural illumination maps into an optimization framework.
arXiv Detail & Related papers (2023-09-30T12:39:28Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- Modeling Indirect Illumination for Inverse Rendering [31.734819333921642]
In this paper, we propose a novel approach to efficiently recovering spatially-varying indirect illumination.
The key insight is that indirect illumination can be conveniently derived from the neural radiance field learned from input images.
Experiments on both synthetic and real data demonstrate the superior performance of our approach compared to previous work.
arXiv Detail & Related papers (2022-04-14T09:10:55Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Accelerating Inverse Rendering By Using a GPU and Reuse of Light Paths [14.213973379473652]
Inverse rendering seeks to estimate scene characteristics from a set of data images.
Such algorithms usually rely on a forward model and an iterative gradient method that requires sampling millions of light paths per iteration.
The speedup is achieved by tailoring the iterative process of inverse rendering specifically to a GPU architecture and reusing light paths across iterations.
arXiv Detail & Related papers (2021-09-30T20:53:08Z)
- Efficient and Differentiable Shadow Computation for Inverse Problems [64.70468076488419]
Differentiable geometric computation has received increasing interest for image-based inverse problems.
We propose an efficient yet effective approach for differentiable visibility and soft shadow computation.
As our formulation is differentiable, it can be used to solve inverse problems such as texture, illumination, rigid pose, and deformation recovery from images.
arXiv Detail & Related papers (2021-04-01T09:29:05Z)
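The light-path-reuse idea from the GPU acceleration paper above can be sketched in a toy form (a hypothetical one-bounce Lambertian setting, not that paper's actual implementation): because reflected radiance is linear in albedo, the Monte Carlo path contributions can be sampled and cached once, then reused at every gradient iteration instead of re-sampling millions of paths per step.

```python
import random

def sample_paths(n, seed=0):
    # Sample n light-path contributions once (toy albedo-independent
    # weights standing in for incident radiance times cosine terms).
    rng = random.Random(seed)
    return [rng.uniform(0.0, 2.0) for _ in range(n)]

def render_with_cache(albedo, cached):
    # Radiance is linear in albedo, so cached contributions are reusable.
    return albedo * sum(cached) / len(cached)

def fit_albedo(target, cached, steps=500, lr=0.1):
    # Gradient descent on squared error, reusing the same cached paths
    # every iteration rather than re-tracing them.
    mean_irr = sum(cached) / len(cached)
    albedo = 0.0
    for _ in range(steps):
        resid = render_with_cache(albedo, cached) - target
        albedo -= lr * 2.0 * resid * mean_irr
    return albedo

cached = sample_paths(10000)            # traced once, reused every step
target = render_with_cache(0.6, cached) # synthetic observation
recovered = fit_albedo(target, cached)  # recovers albedo close to 0.6
```

The linearity that makes caching valid here is a special case; the cited paper's contribution is organizing the general iterative process around GPU-friendly path reuse.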
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.