Free-viewpoint Indoor Neural Relighting from Multi-view Stereo
- URL: http://arxiv.org/abs/2106.13299v1
- Date: Thu, 24 Jun 2021 20:09:40 GMT
- Authors: Julien Philip, Sébastien Morgenthaler, Michaël Gharbi, and George Drettakis
- Abstract summary: We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation.
Our method allows illumination to be changed synthetically, while coherently rendering cast shadows and complex glossy materials.
- Score: 5.306819482496464
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation. Our method allows illumination to
be changed synthetically, while coherently rendering cast shadows and complex
glossy materials. We start with multiple images of the scene and a 3D mesh
obtained by multi-view stereo (MVS) reconstruction. We assume that lighting is
well-explained as the sum of a view-independent diffuse component and a
view-dependent glossy term concentrated around the mirror reflection direction.
We design a convolutional network around input feature maps that facilitate
learning of an implicit representation of scene materials and illumination,
enabling both relighting and free-viewpoint navigation. We generate these input
maps by exploiting the best elements of both image-based and physically-based
rendering. We sample the input views to estimate diffuse scene irradiance, and
compute the new illumination caused by user-specified light sources using path
tracing. To facilitate the network's understanding of materials and synthesize
plausible glossy reflections, we reproject the views and compute mirror images.
We train the network on a synthetic dataset where each scene is also
reconstructed with MVS. We show results of our algorithm relighting real indoor
scenes and performing free-viewpoint navigation with complex and realistic
glossy reflections, which so far remained out of reach for view-synthesis
techniques.
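The abstract's lighting assumption — illumination as the sum of a view-independent diffuse component and a view-dependent glossy term concentrated around the mirror reflection direction — can be illustrated with a minimal numeric sketch. This is not the paper's implementation; the shading function, its parameters (`glossy_strength`, `shininess`), and the simple power-cosine lobe are illustrative assumptions standing in for the learned network.

```python
import numpy as np

def mirror_direction(view_dir, normal):
    """Reflect the view direction about the surface normal: r = 2(n.v)n - v.
    Both inputs are unit vectors pointing away from the surface point."""
    return 2.0 * np.dot(normal, view_dir) * normal - view_dir

def shade(albedo, irradiance, glossy_strength, shininess,
          light_dir, view_dir, normal):
    """Toy two-term model: a view-independent diffuse term plus a
    view-dependent glossy lobe peaked at the mirror reflection direction.
    A power-cosine lobe is an illustrative choice, not the paper's model."""
    diffuse = albedo * irradiance  # does not depend on view_dir
    r = mirror_direction(view_dir, normal)
    lobe = max(np.dot(r, light_dir), 0.0) ** shininess
    return diffuse + glossy_strength * lobe
```

Under this split, moving the viewpoint changes only the glossy term, which is why the method can reuse sampled diffuse irradiance across views while recomputing the mirror images per view.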
Related papers
- Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing [27.96634370355241]
This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling.
The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials.
arXiv Detail & Related papers (2022-11-06T03:34:26Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- A New Dimension in Testimony: Relighting Video with Reflectance Field Exemplars [1.069384486725302]
We present a learning-based method for estimating the 4D reflectance field of a person from video footage of the same subject captured under a flat-lit environment.
We estimate the lighting environment of the input video footage and use the subject's reflectance field to create synthetic images of the subject illuminated by the input lighting environment.
We evaluate our method on video footage of real Holocaust survivors and show that it outperforms state-of-the-art methods in both realism and speed.
arXiv Detail & Related papers (2021-04-06T20:29:06Z)
- IBRNet: Learning Multi-View Image-Based Rendering [67.15887251196894]
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views.
By drawing on source views at render time, our method hearkens back to classic work on image-based rendering.
arXiv Detail & Related papers (2021-02-25T18:56:21Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images [59.53382863519189]
We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting.
At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids.
We show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
arXiv Detail & Related papers (2020-07-20T05:38:11Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.