NeILF: Neural Incident Light Field for Physically-based Material
Estimation
- URL: http://arxiv.org/abs/2203.07182v1
- Date: Mon, 14 Mar 2022 15:23:04 GMT
- Authors: Yao Yao, Jingyang Zhang, Jingbo Liu, Yihang Qu, Tian Fang, David
McKinnon, Yanghai Tsin, Long Quan
- Score: 31.230609753253713
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a differentiable rendering framework for material and lighting
estimation from multi-view images and a reconstructed geometry. In the
framework, we represent scene lightings as the Neural Incident Light Field
(NeILF) and material properties as the surface BRDF modelled by multi-layer
perceptrons. Compared with recent approaches that approximate scene lightings
as the 2D environment map, NeILF is a fully 5D light field that is capable of
modelling illuminations of any static scenes. In addition, occlusions and
indirect lights can be handled naturally by the NeILF representation without
requiring multiple bounces of ray tracing, making it possible to estimate
material properties even for scenes with complex lightings and geometries. We
also propose a smoothness regularization and a Lambertian assumption to reduce
the material-lighting ambiguity during the optimization. Our method strictly
follows the physically-based rendering equation, and jointly optimizes material
and lighting through the differentiable rendering process. We have intensively
evaluated the proposed method on our in-house synthetic dataset, the DTU MVS
dataset, and real-world BlendedMVS scenes. Our method is able to outperform
previous methods by a significant margin in terms of novel view rendering
quality, setting a new state-of-the-art for image-based material and lighting
estimation.
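The optimization sketched in the abstract repeatedly evaluates the rendering equation at surface points, querying the 5D incident light field for radiance arriving from each direction. The following minimal sketch is not the authors' code: `neilf_stub` is a hypothetical stand-in for the positionally-encoded NeILF MLP, uniform hemisphere sampling replaces whatever importance sampling the paper uses, and a plain Lambertian BRDF stands in for the full learned BRDF. It only illustrates the shape of the Monte Carlo estimate of the outgoing radiance.

```python
import numpy as np

def sample_hemisphere(normal, n_samples, rng):
    """Uniformly sample unit directions on the hemisphere around `normal`."""
    v = rng.normal(size=(n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    below = v @ normal < 0.0   # flip samples that fall below the surface
    v[below] *= -1.0
    return v

def neilf_stub(points, dirs):
    """Hypothetical stand-in for the 5D NeILF MLP L_i(x, omega_i).
    Toy light field: radiance grows toward the +z direction."""
    return np.clip(dirs[:, 2:3], 0.0, None) * np.ones((1, 3))

def lambertian_brdf(albedo):
    """Constant Lambertian BRDF; the paper learns a full spatially-varying BRDF."""
    return albedo / np.pi

def render_outgoing_radiance(x, normal, albedo, n_samples=256, seed=0):
    """Monte Carlo estimate of the rendering equation
        L_o = integral over hemisphere of f * L_i(x, w_i) * (n . w_i) dw_i
    with uniform sampling, so pdf = 1 / (2*pi)."""
    rng = np.random.default_rng(seed)
    dirs = sample_hemisphere(normal, n_samples, rng)            # (N, 3)
    li = neilf_stub(np.broadcast_to(x, dirs.shape), dirs)       # (N, 3)
    cos = np.clip(dirs @ normal, 0.0, None)[:, None]            # (N, 1)
    f = lambertian_brdf(albedo)[None, :]                        # (1, 3)
    return (f * li * cos).mean(axis=0) * 2.0 * np.pi            # (3,)

# Example query at a single surface point with an upward-facing normal
x = np.zeros(3)
n = np.array([0.0, 0.0, 1.0])
albedo = np.array([0.8, 0.6, 0.4])
lo = render_outgoing_radiance(x, n, albedo)
```

In the actual method both the light field and the BRDF are MLPs, and the estimate above is differentiable with respect to their parameters, which is what allows material and lighting to be optimized jointly against the input images.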
Related papers
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z)
- SplitNeRF: Split Sum Approximation Neural Field for Joint Geometry, Illumination, and Material Estimation [65.99344783327054]
We present a novel approach for digitizing real-world objects by estimating their geometry, material properties, and lighting.
Our method incorporates into Neural Radiance Field (NeRF) pipelines the split sum approximation used with image-based lighting for real-time physically-based rendering.
Our method is capable of attaining state-of-the-art relighting quality after only $\sim$1 hour of training on a single NVIDIA A100 GPU.
arXiv Detail & Related papers (2023-11-28T10:36:36Z)
- NePF: Neural Photon Field for Single-Stage Inverse Rendering [6.977356702921476]
We present a novel single-stage framework, Neural Photon Field (NePF), to address the ill-posed inverse rendering from multi-view images.
NePF achieves this unification by fully utilizing the physical implication behind the weight function of neural implicit surfaces.
We evaluate our method on both real and synthetic datasets.
arXiv Detail & Related papers (2023-11-20T06:15:46Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
TensoRF is a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- NeILF++: Inter-Reflectable Light Fields for Geometry and Material Estimation [36.09503501647977]
We formulate the lighting of a static scene as one neural incident light field (NeILF) and one outgoing neural radiance field (NeRF).
The proposed method is able to achieve state-of-the-art results in terms of geometry reconstruction quality, material estimation accuracy, and the fidelity of novel view rendering.
arXiv Detail & Related papers (2023-03-30T04:59:48Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- Physics-based Indirect Illumination for Inverse Rendering [70.27534648770057]
We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images.
As a side product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting.
arXiv Detail & Related papers (2022-12-09T07:33:49Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.