NeRD: Neural Reflectance Decomposition from Image Collections
- URL: http://arxiv.org/abs/2012.03918v2
- Date: Tue, 8 Dec 2020 15:48:18 GMT
- Title: NeRD: Neural Reflectance Decomposition from Image Collections
- Authors: Mark Boss, Raphael Braun, Varun Jampani, Jonathan T. Barron, Ce Liu,
Hendrik P.A. Lensch
- Abstract summary: NeRD is a method that achieves this decomposition by introducing physically-based rendering to neural radiance fields.
Even challenging non-Lambertian reflectances, complex geometry, and unknown illumination can be decomposed to high-quality models.
- Score: 50.945357655498185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decomposing a scene into its shape, reflectance, and illumination is a
challenging but essential problem in computer vision and graphics. This problem
is inherently more challenging when the illumination is not a single light
source under laboratory conditions but is instead an unconstrained
environmental illumination. Though recent work has shown that implicit
representations can be used to model the radiance field of an object, these
techniques only enable view synthesis and not relighting. Additionally,
evaluating these radiance fields is resource- and time-intensive. By decomposing
a scene into explicit representations, any rendering framework can be leveraged
to generate novel views under any illumination in real-time. NeRD is a method
that achieves this decomposition by introducing physically-based rendering to
neural radiance fields. Even challenging non-Lambertian reflectances, complex
geometry, and unknown illumination can be decomposed to high-quality models.
The datasets and code are available at the project page:
https://markboss.me/publication/2021-nerd/
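For context, the three factors above are exactly the terms of the classical rendering equation: shape supplies the surface normal $\mathbf{n}$ at a point $\mathbf{x}$, reflectance the BRDF $f_r$, and illumination the incident radiance $L_i$, giving the outgoing radiance

$$L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i.$$

This is the standard textbook formulation rather than a quote from the paper; once the factors are explicit, any conventional renderer can evaluate the integral under a novel $L_i$, which is what makes real-time relighting possible.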
Related papers
- IllumiNeRF: 3D Relighting Without Inverse Rendering [25.642960820693947]
We show how to relight each input image using an image diffusion model conditioned on target environment lighting and estimated object geometry.
We reconstruct a Neural Radiance Field (NeRF) with these relit images, from which we render novel views under the target lighting.
We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks.
arXiv Detail & Related papers (2024-06-10T17:59:59Z)
- SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections [49.3480550339732]
Inverse rendering of an object under entirely unknown capture conditions is a fundamental challenge in computer vision and graphics.
We propose a joint optimization framework to estimate the shape, BRDF, and per-image camera pose and illumination.
Our method works on in-the-wild online image collections of an object and produces relightable 3D assets for several use-cases such as AR/VR.
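As a deliberately tiny illustration of this kind of joint optimization (a hypothetical sketch, not SAMURAI's implementation, which also recovers shape and per-image camera poses), gradient descent can jointly fit a Lambertian albedo and a light direction to observed shading:

```python
# Toy joint optimization: recover an unknown albedo (BRDF stand-in) and an
# unknown light direction (illumination stand-in) from observed shading.
import torch

torch.manual_seed(0)
normals = torch.nn.functional.normalize(torch.randn(100, 3), dim=-1)
true_albedo = torch.tensor([0.6, 0.3, 0.2])
true_light = torch.nn.functional.normalize(torch.tensor([0.5, 0.8, 0.3]), dim=0)
observed = true_albedo * (normals @ true_light).clamp(min=0).unsqueeze(-1)

albedo = torch.rand(3, requires_grad=True)   # unknown reflectance parameter
light = torch.randn(3, requires_grad=True)   # unknown illumination parameter
opt = torch.optim.Adam([albedo, light], lr=0.05)
for step in range(500):
    l = torch.nn.functional.normalize(light, dim=0)
    pred = albedo * (normals @ l).clamp(min=0).unsqueeze(-1)
    loss = ((pred - observed) ** 2).mean()   # photometric loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```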
arXiv Detail & Related papers (2022-05-31T13:16:48Z)
- Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition [50.94535765549819]
Decomposing a scene into its shape, reflectance and illumination is a fundamental problem in computer vision and graphics.
We propose a novel reflectance decomposition network that can estimate shape, BRDF, and per-image illumination.
Our decompositions can result in considerably better BRDF and light estimates enabling more accurate novel view-synthesis and relighting.
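The pre-integration in the title can be illustrated with a Monte Carlo toy: convolve an environment light with the clamped-cosine lobe once, so diffuse shading becomes a single lookup per normal instead of an integral per pixel. A hypothetical numpy sketch of the concept, not the paper's learned network:

```python
# Monte Carlo pre-integration of diffuse irradiance over the sphere.
import numpy as np

rng = np.random.default_rng(0)
dirs = rng.normal(size=(4096, 3))
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)  # uniform sphere directions
env = rng.uniform(size=(4096, 3))                     # stand-in radiance L_i(w)

def irradiance(normal):
    """E(n) = integral over the sphere of L_i(w) * max(n.w, 0) dw, by Monte Carlo."""
    cos = np.clip(dirs @ normal, 0.0, None)
    return 4.0 * np.pi * (env * cos[:, None]).mean(axis=0)

# Pre-integrate once for a set of normals; shading later is a cheap lookup.
normals = rng.normal(size=(256, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
table = np.stack([irradiance(n) for n in normals])    # (256, 3) pre-integrated light
```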
arXiv Detail & Related papers (2021-10-27T12:17:47Z)
- Neural Relightable Participating Media Rendering [26.431106015677]
We learn neural representations for participating media with a complete simulation of global illumination.
Our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-10-25T14:36:15Z)
- NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
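A minimal sketch of the sphere-based idea (hypothetical toy model, not the paper's architecture): an MLP deforms points on the unit sphere into 3D, so the predicted surface is always the image of a sphere and hence closed; NeRS additionally constrains the map to stay diffeomorphic.

```python
# Toy sphere-deformation surface: closed by construction.
import torch

class SphereSurface(torch.nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3),
        )

    def forward(self, u):
        u = torch.nn.functional.normalize(u, dim=-1)  # snap inputs to the unit sphere
        return u + self.mlp(u)                        # deformed surface point

surface = SphereSurface()
sphere_pts = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
surface_pts = surface(sphere_pts)  # (1024, 3) samples of the reconstructed shape
```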
arXiv Detail & Related papers (2021-10-14T17:59:58Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
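The differentiable ray marching mentioned above accumulates per-sample opacities along each ray with the standard volume rendering quadrature. A minimal sketch; the per-sample shaded colors, which the reflectance field would produce for a given viewpoint and light, are mocked with random values here:

```python
# Standard differentiable volume compositing along one ray.
import torch

def composite(sigmas, colors, deltas):
    """sigmas: (S,) densities, colors: (S, 3) shaded radiance, deltas: (S,) step sizes."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)        # per-sample opacity
    shifted = torch.cat([torch.ones(1), 1.0 - alphas + 1e-10])[:-1]
    trans = torch.cumprod(shifted, dim=0)             # transmittance up to each sample
    weights = alphas * trans                          # contribution of each sample
    return (weights[:, None] * colors).sum(dim=0)     # final pixel color

S = 64
sigmas = torch.rand(S)
colors = torch.rand(S, 3)
deltas = torch.full((S,), 0.05)
pixel = composite(sigmas, colors, deltas)  # differentiable w.r.t. sigmas and colors
```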