Deep Reflectance Volumes: Relightable Reconstructions from Multi-View
Photometric Images
- URL: http://arxiv.org/abs/2007.09892v1
- Date: Mon, 20 Jul 2020 05:38:11 GMT
- Title: Deep Reflectance Volumes: Relightable Reconstructions from Multi-View
Photometric Images
- Authors: Sai Bi, Zexiang Xu, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi Ramamoorthi
- Abstract summary: We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting.
At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids.
We show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
- Score: 59.53382863519189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a deep learning approach to reconstruct scene appearance from
unstructured images captured under collocated point lighting. At the heart of
Deep Reflectance Volumes is a novel volumetric scene representation consisting
of opacity, surface normal and reflectance voxel grids. We present a novel
physically-based differentiable volume ray marching framework to render these
scene volumes under arbitrary viewpoint and lighting. This allows us to
optimize the scene volumes to minimize the error between their rendered images
and the captured images. Our method is able to reconstruct real scenes with
challenging non-Lambertian reflectance and complex geometry with occlusions and
shadowing. Moreover, it accurately generalizes to novel viewpoints and
lighting, including non-collocated lighting, rendering photorealistic images
that are significantly better than state-of-the-art mesh-based methods. We also
show that our learned reflectance volumes are editable, allowing for modifying
the materials of the captured scenes.
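To make the rendering step concrete, below is a minimal sketch of differentiable volume ray marching over opacity, surface-normal, and reflectance voxel grids, in the spirit of the abstract. It is illustrative, not the paper's implementation: nearest-neighbour grid sampling stands in for trilinear interpolation, a plain diffuse term stands in for the paper's full reflectance model, attenuation along the light path is omitted, and the grid resolution RES and all function names are assumptions.

```python
# Minimal sketch, assuming a unit-cube scene, nearest-neighbour
# sampling, and diffuse-only shading; not the paper's exact method.
import jax
import jax.numpy as jnp

RES = 64  # voxel grid resolution (hypothetical)

def sample(grid, pts):
    """Nearest-neighbour lookup of a voxel grid at points in [0,1]^3."""
    idx = jnp.clip((pts * RES).astype(jnp.int32), 0, RES - 1)
    return grid[idx[..., 0], idx[..., 1], idx[..., 2]]

def render_ray(params, origin, direction, light_pos, n_steps=128):
    """March one ray, alpha-compositing shaded samples front to back."""
    ts = jnp.linspace(0.0, 1.0, n_steps)
    pts = origin + ts[:, None] * direction              # (S, 3) samples
    alpha = jnp.clip(sample(params['opacity'], pts), 0.0, 1.0)
    n = sample(params['normal'], pts)
    n = n / (jnp.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)
    albedo = sample(params['reflectance'], pts)
    # Point light with inverse-square falloff; the paper's collocated
    # capture corresponds to light_pos == origin.
    to_light = light_pos - pts
    dist2 = jnp.sum(to_light ** 2, axis=-1, keepdims=True)
    wi = to_light / jnp.sqrt(dist2)
    cos = jnp.clip(jnp.sum(n * wi, axis=-1, keepdims=True), 0.0, 1.0)
    shaded = albedo * cos / dist2                       # diffuse only
    # Front-to-back transmittance: T_i = prod_{j<i} (1 - alpha_j).
    trans = jnp.concatenate([jnp.ones(1), jnp.cumprod(1.0 - alpha)[:-1]])
    return jnp.sum((trans * alpha)[:, None] * shaded, axis=0)

def loss(params, origin, direction, light_pos, target_rgb):
    """Photometric error between a rendered and a captured pixel."""
    pred = render_ray(params, origin, direction, light_pos)
    return jnp.sum((pred - target_rgb) ** 2)

grads = jax.grad(loss)  # gradients w.r.t. all three voxel grids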
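```

Because the whole renderer is differentiable, gradient descent on this loss over all captured rays fits the three grids to the photographs; and since reflectance lives in an explicit grid, the editability noted above amounts to modifying params['reflectance'] in a region before re-rendering.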
Related papers
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Neural Point Catacaustics for Novel-View Synthesis of Reflections [3.5348690973777]
We introduce a new point-based representation to compute Neural Point Catacaustics, allowing novel-view synthesis of scenes with curved reflectors.
We provide the source code and other supplemental material on https://repo-sam.inria.fr/fungraph/neural_catacaustics/.
arXiv Detail & Related papers (2023-01-03T13:28:10Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Free-viewpoint Indoor Neural Relighting from Multi-view Stereo [5.306819482496464]
We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation.
Our method allows illumination to be changed synthetically, while coherently rendering cast shadows and complex glossy materials.
arXiv Detail & Related papers (2021-06-24T20:09:40Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views; a generic sketch of this geometric warp appears after this list.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
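The depth-based alignment mentioned in the "Deep 3D Capture" entry can be illustrated with the classical geometric step alone: unproject one view's pixels to 3D using its depth map and intrinsics, then reproject them into a second view. The snippet below is a generic sketch under standard pinhole-camera assumptions, not that paper's pipeline; K_a, K_b (intrinsics) and R_ab, t_ab (relative pose) are assumed inputs, and the paper's learned depth and reflectance networks are not reproduced here.

```python
# Generic depth-based view alignment sketch (assumed pinhole model).
import jax.numpy as jnp

def unproject(depth, K_a):
    """Lift an (H, W) depth map to camera-space 3D points (H, W, 3)."""
    H, W = depth.shape
    u, v = jnp.meshgrid(jnp.arange(W), jnp.arange(H))
    pix = jnp.stack([u, v, jnp.ones_like(u)], axis=-1).astype(jnp.float32)
    rays = pix @ jnp.linalg.inv(K_a).T        # pixel -> camera-space ray
    return rays * depth[..., None]            # scale each ray by depth

def reproject(points_a, R_ab, t_ab, K_b):
    """Map view-A camera-space points to view-B pixel coordinates."""
    p_b = points_a @ R_ab.T + t_ab            # rigid transform A -> B
    pix = p_b @ K_b.T
    return pix[..., :2] / pix[..., 2:3]       # perspective divide
```

The returned coordinates can drive bilinear sampling of view B to warp it toward view A, giving the kind of coarse cross-view alignment the entry describes before reflectance estimation.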
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.