NeRV: Neural Reflectance and Visibility Fields for Relighting and View
Synthesis
- URL: http://arxiv.org/abs/2012.03927v1
- Date: Mon, 7 Dec 2020 18:56:08 GMT
- Title: NeRV: Neural Reflectance and Visibility Fields for Relighting and View
Synthesis
- Authors: Pratul P. Srinivasan and Boyang Deng and Xiuming Zhang and Matthew
Tancik and Ben Mildenhall and Jonathan T. Barron
- Abstract summary: We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting.
The method produces a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.
- Score: 45.71507069571216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method that takes as input a set of images of a scene
illuminated by unconstrained known lighting, and produces as output a 3D
representation that can be rendered from novel viewpoints under arbitrary
lighting conditions. Our method represents the scene as a continuous volumetric
function parameterized as MLPs whose inputs are a 3D location and whose outputs
are the following scene properties at that input location: volume density,
surface normal, material parameters, distance to the first surface intersection
in any direction, and visibility of the external environment in any direction.
Together, these allow us to render novel views of the object under arbitrary
lighting, including indirect illumination effects. The predicted visibility and
surface intersection fields are critical to our model's ability to simulate
direct and indirect illumination during training, because the brute-force
techniques used by prior work are intractable for lighting conditions outside
of controlled setups with a single light. Our method outperforms alternative
approaches for recovering relightable 3D scene representations, and performs
well in complex lighting settings that have posed a significant challenge to
prior work.
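As a rough illustration of the representation described in the abstract, the sketch below queries a single hypothetical MLP for the listed outputs at a 3D point and uses the predicted visibility field to shade that point under an environment light without marching secondary rays toward every light sample. All names here (nerv_mlp, shade_direct) and the placeholder values are illustrative assumptions, not the authors' implementation.
```python
import numpy as np

def nerv_mlp(x):
    # Hypothetical stand-in for the NeRV MLPs: maps a 3D location to the scene
    # properties listed in the abstract. A trained network would predict these;
    # placeholder values with the right shapes are returned here.
    sigma = 1.0                              # volume density at x
    normal = np.array([0.0, 0.0, 1.0])       # surface normal at x
    albedo = np.array([0.8, 0.7, 0.6])       # material parameters (diffuse albedo)
    visibility = lambda w: 1.0               # fraction of light arriving from direction w
    termination = lambda w: 10.0             # distance to first surface along direction w
    return sigma, normal, albedo, visibility, termination

def shade_direct(x, env_dirs, env_radiance):
    # Direct illumination at x under an environment light. A single visibility
    # query per light direction stands in for the brute-force secondary ray
    # march that the abstract notes is intractable for unconstrained lighting.
    _, normal, albedo, vis, _ = nerv_mlp(x)
    color = np.zeros(3)
    for w, radiance in zip(env_dirs, env_radiance):
        cosine = max(np.dot(normal, w), 0.0)
        color += vis(w) * radiance * (albedo / np.pi) * cosine
    return color * (2.0 * np.pi / len(env_dirs))  # uniform-hemisphere Monte Carlo estimate

dirs = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.7071, 0.7071])]
light = [np.array([1.0, 1.0, 1.0]), np.array([0.5, 0.5, 0.5])]
print(shade_direct(np.zeros(3), dirs, light))
```
The point of the sketch is the single visibility query per light direction; the termination-distance output can play an analogous role for indirect light, locating the surface a reflected ray hits so that it can be shaded in turn.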
Related papers
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
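The GS-Phong summary above decomposes shading into ambient, diffuse, and specular terms following the Blinn-Phong model. For reference, here is a minimal sketch of standard Blinn-Phong shading (an illustration of that classical model, not the paper's Gaussian-based method; all names are illustrative):
```python
import numpy as np

def blinn_phong(n, l, v, ambient, diffuse, specular, shininess, light_rgb):
    # Standard Blinn-Phong: ambient + diffuse + specular, with unit vectors
    # n (surface normal), l (direction to light), v (direction to viewer).
    h = (l + v) / np.linalg.norm(l + v)            # half vector
    diff = max(np.dot(n, l), 0.0)                  # Lambertian term
    spec = max(np.dot(n, h), 0.0) ** shininess     # specular highlight
    return ambient + light_rgb * (diffuse * diff + specular * spec)

rgb = blinn_phong(n=np.array([0.0, 0.0, 1.0]),
                  l=np.array([0.0, 0.0, 1.0]),
                  v=np.array([0.0, 0.0, 1.0]),
                  ambient=np.array([0.05, 0.05, 0.05]),
                  diffuse=np.array([0.6, 0.5, 0.4]),
                  specular=np.array([0.3, 0.3, 0.3]),
                  shininess=32.0,
                  light_rgb=np.array([1.0, 1.0, 1.0]))
```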
- Factored-NeuS: Reconstructing Surfaces, Illumination, and Materials of Possibly Glossy Objects [46.04357263321969]
We develop a method that recovers the surface, materials, and illumination of a scene from its posed multi-view images.
It does not require any additional data and can handle glossy objects or bright lighting.
arXiv Detail & Related papers (2023-05-29T07:44:19Z)
- Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
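The Neural Radiance Transfer Fields entry above relights a scene through a learned radiance transfer function. In the classical precomputed radiance transfer (PRT) formulation that this generalizes (shown only to illustrate the idea, not that paper's exact parameterization), relighting reduces to a dot product between per-point transfer coefficients and the lighting expressed in a shared basis such as spherical harmonics:
```latex
% Classical PRT relighting (illustrative):
% T_k(x, omega_o): transfer coefficients at point x for viewing direction omega_o
% l_k:             lighting coefficients in the same basis (e.g. spherical harmonics)
L_o(\mathbf{x}, \omega_o) \;\approx\; \sum_{k=1}^{K} T_k(\mathbf{x}, \omega_o)\, l_k
```
Swapping the lighting then only changes the coefficients l_k, while the (here, neurally learned) transfer absorbs occlusion and interreflection.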
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- NeILF: Neural Incident Light Field for Physically-based Material Estimation [31.230609753253713]
We present a differentiable rendering framework for material and lighting estimation from multi-view images and a reconstructed geometry.
In the framework, we represent scene lightings as the Neural Incident Light Field (NeILF) and material properties as the surface BRDF modelled by multi-layer perceptrons.
arXiv Detail & Related papers (2022-03-14T15:23:04Z)
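Both the NeILF summary above and the Neural Ray-Tracing entry that follows build on the rendering equation, which relates outgoing radiance at a surface point to incident radiance weighted by the BRDF and foreshortening. For reference (the standard formulation, not specific to either paper):
```latex
% Rendering equation at surface point x with normal n (emission omitted).
% NeILF models L_i with the neural incident light field and f_r with an
% MLP-parameterized BRDF, then optimizes both against the multi-view images.
L_o(\mathbf{x}, \omega_o) =
  \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
  L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```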
- Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and View Synthesis [28.356700318603565]
We explicitly model the light transport between scene surfaces and rely on traditional integration schemes and the rendering equation to reconstruct a scene.
By learning decomposed transport with surface representations established in conventional rendering methods, the method naturally facilitates editing shape, reflectance, lighting and scene composition.
We validate the proposed approach for scene editing, relighting and reflectance estimation learned from synthetic and captured views on a subset of NeRV's datasets.
arXiv Detail & Related papers (2021-04-28T03:47:48Z)
- Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination [84.00096195633793]
We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.
Our model is trained without any ground truth 3D data and only requires a held-out perspective view near the input stereo pair and a spherical panorama taken within each scene as supervision.
We demonstrate that our method can predict consistent spatially-varying lighting that is convincing enough to plausibly relight and insert highly specular virtual objects into real images.
arXiv Detail & Related papers (2020-03-18T17:46:30Z)