Neural Reflectance Fields for Appearance Acquisition
- URL: http://arxiv.org/abs/2008.03824v2
- Date: Sun, 16 Aug 2020 08:39:07 GMT
- Title: Neural Reflectance Fields for Appearance Acquisition
- Authors: Sai Bi, Zexiang Xu, Pratul Srinivasan, Ben Mildenhall, Kalyan
Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi
Ramamoorthi
- Abstract summary: We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
- Score: 61.542001266380375
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Neural Reflectance Fields, a novel deep scene representation that
encodes volume density, normal and reflectance properties at any 3D point in a
scene using a fully-connected neural network. We combine this representation
with a physically-based differentiable ray marching framework that can render
images from a neural reflectance field under any viewpoint and light. We
demonstrate that neural reflectance fields can be estimated from images
captured with a simple collocated camera-light setup, and accurately model the
appearance of real-world scenes with complex geometry and reflectance. Once
estimated, they can be used to render photo-realistic images under novel
viewpoint and (non-collocated) lighting conditions and accurately reproduce
challenging effects like specularities, shadows and occlusions. This allows us
to perform high-quality view synthesis and relighting that is significantly
better than previous methods. We also demonstrate that we can compose the
estimated neural reflectance field of a real scene with traditional scene
models and render them using standard Monte Carlo rendering engines. Our work
thus enables a complete pipeline from high-quality and practical appearance
acquisition to 3D scene composition and rendering.
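To make the representation and renderer described in the abstract concrete, the sketch below shows how a neural reflectance field and a collocated-light ray marcher might look in PyTorch. It is a minimal illustration, not the authors' implementation: the network width, the Lambertian-only shading (the paper uses a full physically-based reflectance model), and the helper names `positional_encoding` and `render_collocated` are assumptions made for clarity.

```python
# Minimal sketch of a neural reflectance field with differentiable ray marching
# under a point light collocated with the camera. Illustrative only: network
# sizes, the simplified Lambertian shading, and helper names are assumptions.
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs=6):
    """Map 3D points to sin/cos features (NeRF-style frequency encoding)."""
    feats = [x]
    for i in range(num_freqs):
        feats += [torch.sin((2.0 ** i) * x), torch.cos((2.0 ** i) * x)]
    return torch.cat(feats, dim=-1)


class NeuralReflectanceField(nn.Module):
    """MLP mapping a 3D point to volume density, normal, and reflectance."""

    def __init__(self, num_freqs=6, hidden=256):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 + 3 * 2 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 3 + 3 + 1),  # density, normal, albedo, roughness
        )

    def forward(self, pts):
        out = self.mlp(positional_encoding(pts, self.num_freqs))
        sigma = torch.relu(out[..., 0:1])                         # density >= 0
        normal = torch.nn.functional.normalize(out[..., 1:4], dim=-1)
        albedo = torch.sigmoid(out[..., 4:7])                     # reflectance in [0, 1]
        roughness = torch.sigmoid(out[..., 7:8])
        return sigma, normal, albedo, roughness


def render_collocated(field, rays_o, rays_d, near=0.5, far=4.0, n_samples=64):
    """Differentiable ray marching with the light collocated at the camera."""
    t = torch.linspace(near, far, n_samples, device=rays_o.device)
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]   # [R, S, 3]
    sigma, normal, albedo, _ = field(pts)

    # Standard volume-rendering weights from densities (alpha compositing).
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma[..., 0] * delta)                    # [R, S]
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans                                            # [R, S]

    # Collocated light: the light direction is the negated view direction.
    light_dir = -rays_d[:, None, :]                                    # [R, 1, 3]
    cos_term = (normal * light_dir).sum(-1).clamp(min=0.0)             # [R, S]
    radiance = albedo * cos_term[..., None]                            # Lambertian-only sketch

    return (weights[..., None] * radiance).sum(dim=1)                  # per-ray RGB [R, 3]
```

In this sketch the rendered per-ray color is a differentiable function of the network outputs, so it can be fit to images from a collocated camera-light capture with a simple photometric loss; that is the sense in which the ray marching framework is differentiable end to end.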
Related papers
- Neural Fields meet Explicit Geometric Representation for Inverse
Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - ENVIDR: Implicit Differentiable Renderer with Neural Environment
Lighting [9.145875902703345]
We introduce ENVIDR, a rendering and modeling framework for high-quality rendering and reconstruction of surfaces with challenging specular reflections.
We first propose a novel neural renderer with decomposed rendering to learn the interaction between surface and environment lighting.
We then propose an SDF-based neural surface model that leverages this learned neural renderer to represent general scenes.
arXiv Detail & Related papers (2023-03-23T04:12:07Z) - Neural Point Catacaustics for Novel-View Synthesis of Reflections [3.5348690973777]
We introduce a new point-based representation to compute Neural Point Catacaustics allowing novel-view synthesis of scenes with curved reflectors.
We provide the source code and other supplemental material on https://repo-sam.inria.fr/fungraph/neural_catacaustics/.
arXiv Detail & Related papers (2023-01-03T13:28:10Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in
the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
arXiv Detail & Related papers (2021-10-14T17:59:58Z) - MVSNeRF: Fast Generalizable Radiance Field Reconstruction from
Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z) - Object-Centric Neural Scene Rendering [19.687759175741824]
We present a method for composing photorealistic scenes from captured images of objects.
Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene.
We learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network.
arXiv Detail & Related papers (2020-12-15T18:55:02Z) - Deep Reflectance Volumes: Relightable Reconstructions from Multi-View
Photometric Images [59.53382863519189]
We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting.
At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids.
We show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
arXiv Detail & Related papers (2020-07-20T05:38:11Z)
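For comparison with the fully-connected-network representation above, the last entry (Deep Reflectance Volumes) describes explicit opacity, surface-normal, and reflectance voxel grids. Below is a rough sketch of how such a voxel-grid representation might be queried with trilinear interpolation; the grid resolution, channel layout, and activations are illustrative assumptions rather than the paper's actual configuration.

```python
# Rough sketch of an explicit reflectance volume: opacity, normal, and
# reflectance stored in voxel grids and queried by trilinear interpolation.
# Resolution and channel layout are assumptions for illustration only.
import torch
import torch.nn.functional as F


class ReflectanceVolume(torch.nn.Module):
    def __init__(self, res=128):
        super().__init__()
        # Channels: 1 opacity + 3 normal + 3 albedo + 1 roughness = 8.
        self.grid = torch.nn.Parameter(torch.zeros(1, 8, res, res, res))

    def query(self, pts):
        """Sample the grids at 3D points given in grid_sample's normalized
        (x, y, z) coordinates in [-1, 1]^3; pts has shape [P, 3]."""
        coords = pts.reshape(1, -1, 1, 1, 3)                         # [1, P, 1, 1, 3]
        feats = F.grid_sample(self.grid, coords,
                              mode="bilinear", align_corners=True)   # trilinear for 5D input
        feats = feats.reshape(8, -1).t()                             # [P, 8]
        opacity = torch.sigmoid(feats[:, 0:1])
        normal = F.normalize(feats[:, 1:4], dim=-1)
        albedo = torch.sigmoid(feats[:, 4:7])
        roughness = torch.sigmoid(feats[:, 7:8])
        return opacity, normal, albedo, roughness
```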