Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition
- URL: http://arxiv.org/abs/2110.14373v1
- Date: Wed, 27 Oct 2021 12:17:47 GMT
- Title: Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition
- Authors: Mark Boss, Varun Jampani, Raphael Braun, Ce Liu, Jonathan T. Barron,
Hendrik P.A. Lensch
- Abstract summary: Decomposing a scene into its shape, reflectance and illumination is a fundamental problem in computer vision and graphics.
We propose a novel reflectance decomposition network that can estimate shape, BRDF, and per-image illumination.
Our decompositions can result in considerably better BRDF and light estimates enabling more accurate novel view-synthesis and relighting.
- Score: 50.94535765549819
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decomposing a scene into its shape, reflectance and illumination is a
fundamental problem in computer vision and graphics. Neural approaches such as
NeRF have achieved remarkable success in view synthesis, but do not explicitly
perform decomposition and instead operate exclusively on radiance (the product
of reflectance and illumination). Extensions to NeRF, such as NeRD, can perform
decomposition but struggle to accurately recover detailed illumination, thereby
significantly limiting realism. We propose a novel reflectance decomposition
network that can estimate shape, BRDF, and per-image illumination given a set
of object images captured under varying illumination. Our key technique is a
novel illumination integration network called Neural-PIL that replaces a costly
illumination integral operation in the rendering with a simple network query.
In addition, we also learn deep low-dimensional priors on BRDF and illumination
representations using novel smooth manifold auto-encoders. Our decompositions
can result in considerably better BRDF and light estimates enabling more
accurate novel view-synthesis and relighting compared to prior art. Project
page: https://markboss.me/publication/2021-neural-pil/
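To make the abstract's key operation concrete, the following is a minimal sketch of the pre-integrated lighting idea: instead of evaluating the hemispherical integral L_o = ∫ L_i(ω) f_r(ω, ω_o) (n·ω) dω at every shading point during rendering, a small MLP conditioned on a lobe direction, a roughness value, and a per-image illumination latent is queried once per lobe. The class name, the latent code, and the simplified diffuse/specular split below are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch (PyTorch) of replacing the illumination integral with a network query.
# Names (NeuralPILSketch, illum_code, shade) are illustrative assumptions.
import torch
import torch.nn as nn


class NeuralPILSketch(nn.Module):
    """Approximates the pre-integrated incoming light for a lobe of given
    roughness around a direction, conditioned on an illumination latent."""

    def __init__(self, latent_dim: int = 128, hidden: int = 256):
        super().__init__()
        # inputs: 3 (lobe direction) + 1 (roughness) + latent_dim (per-image illumination code)
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # pre-integrated RGB radiance
        )

    def forward(self, direction, roughness, illum_code):
        # direction: (N, 3) unit vectors, roughness: (N, 1), illum_code: (N, latent_dim)
        return self.mlp(torch.cat([direction, roughness, illum_code], dim=-1))


def shade(normals, view_dirs, base_color, roughness, illum_code, pil_net):
    """One network query per lobe replaces Monte Carlo sampling of the environment.
    Fresnel and occlusion terms are omitted to keep the sketch short."""
    # mirror the view direction about the surface normal
    refl = 2.0 * (normals * view_dirs).sum(-1, keepdim=True) * normals - view_dirs
    diffuse_light = pil_net(normals, torch.ones_like(roughness), illum_code)  # fully rough lobe
    specular_light = pil_net(refl, roughness, illum_code)                     # glossy lobe
    return base_color * diffuse_light + specular_light
```
In the paper, this pre-integration network is trained jointly with the shape and BRDF networks and conditioned on a per-image illumination embedding; the sketch above only illustrates the query pattern that stands in for the integral.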
Related papers
- Neural Gaffer: Relighting Any Object via Diffusion [43.87941408722868]
We propose a novel end-to-end 2D relighting diffusion model, called Neural Gaffer.
Our model takes a single image of any object and can synthesize an accurate, high-quality relit image under any novel lighting condition.
We evaluate our model on both synthetic and in-the-wild Internet imagery and demonstrate its advantages in terms of generalization and accuracy.
arXiv Detail & Related papers (2024-06-11T17:50:15Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z)
- Modeling Indirect Illumination for Inverse Rendering [31.734819333921642]
In this paper, we propose a novel approach to efficiently recovering spatially-varying indirect illumination.
The key insight is that indirect illumination can be conveniently derived from the neural radiance field learned from input images.
Experiments on both synthetic and real data demonstrate the superior performance of our approach compared to previous work.
arXiv Detail & Related papers (2022-04-14T09:10:55Z)
- NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild [80.09093712055682]
We introduce a surface analog of implicit models, called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing watertight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such sparse in-the-wild data, outperforming volumetric neural rendering-based reconstructions.
arXiv Detail & Related papers (2021-10-14T17:59:58Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- NeRD: Neural Reflectance Decomposition from Image Collections [50.945357655498185]
NeRD is a method that achieves reflectance decomposition by introducing physically-based rendering to neural radiance fields.
Even challenging non-Lambertian reflectances, complex geometry, and unknown illumination can be decomposed into high-quality models.
arXiv Detail & Related papers (2020-12-07T18:45:57Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)