NeRFactor: Neural Factorization of Shape and Reflectance Under an
Unknown Illumination
- URL: http://arxiv.org/abs/2106.01970v1
- Date: Thu, 3 Jun 2021 16:18:01 GMT
- Title: NeRFactor: Neural Factorization of Shape and Reflectance Under an
Unknown Illumination
- Authors: Xiuming Zhang, Pratul P. Srinivasan, Boyang Deng, Paul Debevec,
William T. Freeman, Jonathan T. Barron
- Abstract summary: We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
- Score: 60.89737319987051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of recovering the shape and spatially-varying
reflectance of an object from posed multi-view images of the object illuminated
by one unknown lighting condition. This enables the rendering of novel views of
the object under arbitrary environment lighting and editing of the object's
material properties. The key to our approach, which we call Neural Radiance
Factorization (NeRFactor), is to distill the volumetric geometry of a Neural
Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object
into a surface representation and then jointly refine the geometry while
solving for the spatially-varying reflectance and the environment lighting.
Specifically, NeRFactor recovers 3D neural fields of surface normals, light
visibility, albedo, and Bidirectional Reflectance Distribution Functions
(BRDFs) without any supervision, using only a re-rendering loss, simple
smoothness priors, and a data-driven BRDF prior learned from real-world BRDF
measurements. By explicitly modeling light visibility, NeRFactor is able to
separate shadows from albedo and synthesize realistic soft or hard shadows
under arbitrary lighting conditions. NeRFactor is able to recover convincing 3D
models for free-viewpoint relighting in this challenging and underconstrained
capture setup for both synthetic and real scenes. Qualitative and quantitative
experiments show that NeRFactor outperforms classic and deep learning-based
state of the art across various tasks. Our code and data are available at
people.csail.mit.edu/xiuming/projects/nerfactor/.
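The factorization can be illustrated with a toy re-rendering step. Below is a minimal NumPy sketch under simplifying assumptions (a Lambertian-plus-specular stand-in for the learned BRDF, a flat environment light, and hand-picked values for the predicted normal, albedo, and visibility); it only shows how the re-rendering loss ties the factored quantities together, not the paper's actual implementation.

```python
import numpy as np

# Toy re-rendering of one surface point, in the spirit of NeRFactor's factorization:
#   color = sum over light directions of
#           BRDF(x, w_i, w_o) * visibility(x, w_i) * light(w_i) * max(0, n . w_i)
# Every quantity below is a placeholder standing in for a learned field.

def fibonacci_sphere(n):
    """Quasi-uniform directions on the upper hemisphere (stand-in for the
    environment-map pixel directions the lighting integral sums over)."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1.0 - i / n)            # polar angle in [0, pi/2)
    theta = np.pi * (1.0 + 5 ** 0.5) * i    # golden-angle azimuth
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=-1)

def render_point(albedo, normal, visibility, light, dirs, view_dir,
                 specular_weight=0.05, roughness=0.3):
    """One-point re-rendering with a Lambertian + Blinn-Phong-style lobe
    (a simplification of the learned BRDF)."""
    n_dot_l = np.clip(dirs @ normal, 0.0, None)                 # cosine term
    half = dirs + view_dir
    half /= np.linalg.norm(half, axis=-1, keepdims=True)
    spec = specular_weight * np.clip(half @ normal, 0.0, None) ** (1.0 / roughness)
    brdf = albedo[None, :] / np.pi + spec[:, None]              # (n_dirs, 3)
    # Sum reflected radiance over all light directions.
    return (brdf * visibility[:, None] * light * n_dot_l[:, None]).sum(0)

dirs = fibonacci_sphere(512)                   # light sample directions
light = np.full((512, 3), 2.0 * np.pi / 512)   # flat white environment light
albedo = np.array([0.7, 0.3, 0.2])             # predicted albedo at the point
normal = np.array([0.0, 0.0, 1.0])             # predicted surface normal
visibility = (dirs[:, 2] > 0.2).astype(float)  # predicted light visibility
view_dir = np.array([0.0, 0.0, 1.0])

rendered = render_point(albedo, normal, visibility, light, dirs, view_dir)
observed = np.array([0.5, 0.2, 0.15])          # pixel from an input image
rerender_loss = np.mean((rendered - observed) ** 2)  # drives the factorization
print(rendered, rerender_loss)
```

In the method itself these quantities are outputs of neural fields optimized jointly, regularized by the smoothness priors and the data-driven BRDF prior described above.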
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
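A rough sketch of that idea, with analytic placeholders standing in for the trained field (the names query_density, query_feature, and the linear decoder below are assumptions for illustration, not the paper's API): cast a reflected ray, accumulate features with volume-rendering weights, and decode them to a color.

```python
import numpy as np

# Conceptual sketch: trace a reflected ray through a radiance field and
# volume-render *features* instead of querying a big view-dependent MLP.
rng = np.random.default_rng(0)
W_feat = rng.normal(size=(3, 16))            # placeholder feature field weights
W_decode = rng.normal(size=(3, 16)) * 0.1    # tiny stand-in feature decoder

def query_density(x):                        # placeholder density field
    return np.exp(-np.linalg.norm(x - np.array([0.0, 0.0, 2.0]), axis=-1))

def query_feature(x):                        # placeholder 16-d feature field
    return np.tanh(x @ W_feat)

def reflect(d, n):
    return d - 2.0 * np.dot(d, n) * n

def trace_reflection(point, view_dir, normal, n_samples=64, far=4.0):
    """March along the reflected ray, volume-render features, decode a color."""
    r = reflect(view_dir, normal)
    t = np.linspace(1e-2, far, n_samples)
    xs = point[None, :] + t[:, None] * r[None, :]
    sigma = query_density(xs)                        # (n_samples,)
    delta = np.diff(t, prepend=0.0)
    alpha = 1.0 - np.exp(-sigma * delta)             # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                          # volume-rendering weights
    feat = (weights[:, None] * query_feature(xs)).sum(0)  # accumulated feature
    return W_decode @ feat                           # decoded reflection color

color = trace_reflection(point=np.array([0.0, 0.0, 0.0]),
                         view_dir=np.array([0.0, 0.0, 1.0]),
                         normal=np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0))
print(color)
```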
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Inverse Rendering of Glossy Objects via the Neural Plenoptic Function and Radiance Fields [45.64333510966844]
Inverse rendering aims at recovering both geometry and materials of objects.
We propose a novel 5D Neural Plenoptic Function (NeP) based on NeRFs and ray tracing.
Our method can reconstruct high-fidelity geometry/materials of challenging glossy objects with complex lighting interactions from nearby objects.
arXiv Detail & Related papers (2024-03-24T16:34:47Z)
- NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular Objects with Neural Refractive-Reflective Fields [23.099784003061618]
We introduce a refractive-reflective field into neural radiance fields (NeRF).
NeRF uses straight rays and fails to handle the complicated light-path changes caused by refraction and reflection.
We propose a virtual cone supersampling technique to achieve efficient and effective anti-aliasing.
arXiv Detail & Related papers (2023-09-22T17:59:12Z)
- Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient [73.52585139592398]
We propose a novel framework for learning the radiance transfer field via volume rendering.
We will publicly release our code and a novel light stage dataset of objects with subsurface scattering effects.
arXiv Detail & Related papers (2023-06-15T17:56:04Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
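A minimal sketch of the secondary-ray step, under illustrative assumptions (a hand-built occluder quad, a fixed sun direction, and a Moeller-Trumbore intersection test standing in for a real mesh/ray-tracing backend): a shadow ray cast from a surface point toward the light is tested against the explicit mesh to decide visibility.

```python
import numpy as np

# Toy version of "explicit mesh for secondary rays": test whether a shadow ray
# from a surface point toward the sun is blocked by any mesh triangle.

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Moeller-Trumbore ray/triangle intersection (hit with t > eps)."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False                      # ray parallel to triangle
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return False
    t = np.dot(e2, q) * inv_det
    return t > eps                        # occluder in front of the point

def shadowed(surface_point, light_dir, triangles):
    """Cast one secondary (shadow) ray against an explicit mesh."""
    origin = surface_point + 1e-4 * light_dir     # offset to avoid self-hits
    return any(ray_hits_triangle(origin, light_dir, *tri) for tri in triangles)

# One quad (two triangles) floating above the origin acts as the occluder.
quad = [(np.array([-1.0, -1.0, 1.0]), np.array([1.0, -1.0, 1.0]), np.array([1.0, 1.0, 1.0])),
        (np.array([-1.0, -1.0, 1.0]), np.array([1.0, 1.0, 1.0]), np.array([-1.0, 1.0, 1.0]))]
sun = np.array([0.0, 0.0, 1.0])                   # light direction (toward sun)
print(shadowed(np.zeros(3), sun, quad))           # True: the point is in shadow
```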
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- NeRFReN: Neural Radiance Fields with Reflections [16.28256369376256]
We introduce NeRFReN, which is built upon NeRF to model scenes with reflections.
We propose to split a scene into transmitted and reflected components, and model the two components with separate neural radiance fields.
Experiments on various self-captured scenes show that our method achieves high-quality novel view synthesis and physically sound depth estimation results.
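A hedged sketch of the two-component idea: volume-render the transmitted and reflected fields separately along the same ray, then blend them. The analytic density/color functions and the fixed blending weight below are placeholders for the two learned radiance fields and the learned reflection fraction.

```python
import numpy as np

# Toy composite of a "transmitted" and a "reflected" radiance field: render
# each field along the same camera ray, then blend. Both fields here are
# analytic placeholders for the two learned NeRFs.

def volume_render(ray_o, ray_d, density_fn, color_fn, n_samples=64, far=4.0):
    t = np.linspace(1e-2, far, n_samples)
    xs = ray_o[None, :] + t[:, None] * ray_d[None, :]
    sigma = density_fn(xs)
    delta = np.diff(t, prepend=0.0)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * color_fn(xs)).sum(0)

# Placeholder transmitted scene: a reddish blob behind the glass.
trans_density = lambda xs: np.exp(-np.linalg.norm(xs - [0, 0, 3], axis=-1) ** 2)
trans_color = lambda xs: np.tile([0.8, 0.2, 0.2], (len(xs), 1))
# Placeholder reflected scene: a bluish blob "inside the mirror".
refl_density = lambda xs: np.exp(-np.linalg.norm(xs - [0, 0, 2], axis=-1) ** 2)
refl_color = lambda xs: np.tile([0.2, 0.3, 0.9], (len(xs), 1))

ray_o, ray_d = np.zeros(3), np.array([0.0, 0.0, 1.0])
c_t = volume_render(ray_o, ray_d, trans_density, trans_color)
c_r = volume_render(ray_o, ray_d, refl_density, refl_color)
beta = 0.3          # stand-in for the learned per-pixel reflection fraction
pixel = c_t + beta * c_r
print(pixel)
```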
arXiv Detail & Related papers (2021-11-30T09:36:00Z)
- NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
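A toy illustration of the sphere-parameterized shape, assuming a hand-written radial displacement in place of the learned shape network: because only the vertices of the sphere grid move while its connectivity is kept, the deformed surface remains closed.

```python
import numpy as np

# Keep the sphere's (theta, phi) grid connectivity and only move the vertices,
# so the resulting surface stays closed ("water-tight"). The sinusoidal
# displacement is a placeholder for a learned shape network.

def sphere_grid(n_theta=32, n_phi=64):
    theta = np.linspace(1e-3, np.pi - 1e-3, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(T) * np.cos(P),
                     np.sin(T) * np.sin(P),
                     np.cos(T)], axis=-1)            # (n_theta, n_phi, 3)

def deform(sphere_pts):
    """Stand-in for the shape network: a smooth radial displacement."""
    x, y, z = sphere_pts[..., 0], sphere_pts[..., 1], sphere_pts[..., 2]
    radius = 1.0 + 0.2 * np.sin(3.0 * x) * np.cos(2.0 * y) + 0.1 * z
    return sphere_pts * radius[..., None]

surface = deform(sphere_grid())
print(surface.shape)    # same grid topology as the sphere -> closed surface
```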
arXiv Detail & Related papers (2021-10-14T17:59:58Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
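The ray-marching idea can be sketched as follows, with analytic placeholder fields and a single directional light standing in for the learned representation and an arbitrary capture light: each sample along the camera ray carries density, a normal, and reflectance, is shaded against the light, and the shaded colors are alpha-composited.

```python
import numpy as np

# Toy physically-motivated ray marcher: shade every sample with per-point
# density, normal, and albedo ("reflectance"), then alpha-composite.
# All fields are analytic placeholders for a learned neural reflectance field.

center = np.array([0.0, 0.0, 2.0])

def field(xs):
    """Return (density, normal, albedo) at points xs, for a soft spherical shell."""
    offset = xs - center
    dist = np.linalg.norm(offset, axis=-1)
    density = 5.0 * np.exp(-((dist - 0.5) / 0.1) ** 2)     # shell density
    normal = offset / np.clip(dist[:, None], 1e-8, None)   # outward normals
    albedo = np.tile([0.6, 0.5, 0.3], (len(xs), 1))
    return density, normal, albedo

def march(ray_o, ray_d, light_dir, n_samples=128, far=4.0):
    t = np.linspace(1e-2, far, n_samples)
    xs = ray_o[None, :] + t[:, None] * ray_d[None, :]
    sigma, normal, albedo = field(xs)
    shade = np.clip(normal @ light_dir, 0.0, None)[:, None]  # diffuse shading
    color = albedo * shade
    delta = np.diff(t, prepend=0.0)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return ((trans * alpha)[:, None] * color).sum(0)

light_dir = np.array([0.0, 1.0, -1.0]) / np.sqrt(2.0)   # relight from any direction
print(march(np.zeros(3), np.array([0.0, 0.0, 1.0]), light_dir))
```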
This list is automatically generated from the titles and abstracts of the papers on this site.