Neural Light Transport for Relighting and View Synthesis
- URL: http://arxiv.org/abs/2008.03806v3
- Date: Wed, 20 Jan 2021 15:45:52 GMT
- Title: Neural Light Transport for Relighting and View Synthesis
- Authors: Xiuming Zhang, Sean Fanello, Yun-Ta Tsai, Tiancheng Sun, Tianfan Xue,
Rohit Pandey, Sergio Orts-Escolano, Philip Davidson, Christoph Rhemann, Paul
Debevec, Jonathan T. Barron, Ravi Ramamoorthi, William T. Freeman
- Abstract summary: Light transport (LT) of a scene describes how it appears under different lighting and viewing directions.
We propose a semi-parametric approach to learn a neural representation of LT embedded in a texture atlas of known geometric properties.
We show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition.
- Score: 70.39907425114302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The light transport (LT) of a scene describes how it appears under different
lighting and viewing directions, and complete knowledge of a scene's LT enables
the synthesis of novel views under arbitrary lighting. In this paper, we focus
on image-based LT acquisition, primarily for human bodies within a light stage
setup. We propose a semi-parametric approach to learn a neural representation
of LT that is embedded in the space of a texture atlas of known geometric
properties, and model all non-diffuse and global LT as residuals added to a
physically-accurate diffuse base rendering. In particular, we show how to fuse
previously seen observations of illuminants and views to synthesize a new image
of the same scene under a desired lighting condition from a chosen viewpoint.
This strategy allows the network to learn complex material effects (such as
subsurface scattering) and global illumination, while guaranteeing the physical
correctness of the diffuse LT (such as hard shadows). With this learned LT, one
can relight the scene photorealistically with a directional light or an HDRI
map, synthesize novel views with view-dependent effects, or do both
simultaneously, all in a unified framework using a set of sparse, previously
seen observations. Qualitative and quantitative experiments demonstrate that
our neural LT (NLT) outperforms state-of-the-art solutions for relighting and
view synthesis, without the separate treatment of the two problems that prior work
requires.
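To make the residual formulation concrete, here is a minimal sketch in NumPy. Everything below is illustrative: `residual_net`, the buffer names, and the call signature are assumptions, not the paper's actual interface.

```python
import numpy as np

def render_nlt(diffuse_base_uv, geom_buffers_uv, light_dir, view_dir, residual_net):
    """Sketch of the residual formulation: the network predicts only the
    non-diffuse and global-illumination residual in texture-atlas (UV) space,
    so the diffuse base, and with it hard shadows, stays physically correct
    by construction."""
    # residual_net stands in for the learned model; it is conditioned on the
    # physically accurate diffuse base, per-texel geometry buffers, and the
    # query light and view directions.
    residual_uv = residual_net(diffuse_base_uv, geom_buffers_uv, light_dir, view_dir)
    # Final texture-space rendering: accurate diffuse base plus learned residual.
    return np.clip(diffuse_base_uv + residual_uv, 0.0, None)
```

Relighting with an HDRI map then reduces to summing such directional renderings weighted by the environment map's intensities, which is why relighting, view synthesis, and their combination fit in one framework.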
Related papers
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
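For context, the Blinn-Phong model that GS-Phong borrows its ambient/diffuse/specular split from is a textbook shading formula; the sketch below shows that decomposition, not the paper's per-Gaussian parameterization.

```python
import numpy as np

def blinn_phong(n, l, v, k_a, k_d, k_s, shininess):
    """Textbook Blinn-Phong shading: ambient + diffuse + specular terms.
    n, l, v are unit-length surface normal, light, and view directions."""
    h = (l + v) / np.linalg.norm(l + v)              # half-vector between light and view
    ambient = k_a
    diffuse = k_d * max(np.dot(n, l), 0.0)
    specular = k_s * max(np.dot(n, h), 0.0) ** shininess
    return ambient + diffuse + specular
```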
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
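Classical precomputed radiance transfer (PRT), of which this paper learns a neural analogue, reduces relighting to a per-point dot product in a shared basis. A minimal sketch, with the basis size chosen only for illustration:

```python
import numpy as np

def prt_shade(transfer, light_coeffs):
    """Classical PRT: outgoing radiance at each point is the dot product of a
    precomputed transfer vector with the lighting's coefficients, both
    expressed in a common basis such as spherical harmonics."""
    # transfer: (num_points, num_basis); light_coeffs: (num_basis,)
    return transfer @ light_coeffs

# Example: order-2 spherical harmonics (9 coefficients) at 1000 surface points.
radiance = prt_shade(np.random.rand(1000, 9), np.random.rand(9))
```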
- Light Field Neural Rendering [47.7586443731997]
Methods based on geometric reconstruction need only sparse views, but cannot accurately model non-Lambertian effects.
We introduce a model that combines the strengths and mitigates the limitations of these two directions.
Our model outperforms the state of the art on multiple forward-facing and 360° datasets.
arXiv Detail & Related papers (2021-12-17T18:58:05Z)
- Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields, which represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z)
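A rough sketch of the Neural Point Light Fields idea; the k-nearest gathering and the `decoder` callable are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def point_light_field(ray_origin, ray_dir, points, feats, decoder, k=8):
    """Illustrative: evaluate a light field stored on a sparse point cloud by
    gathering the features of the k points closest to a ray and decoding
    them, together with the ray direction, into a single color per ray."""
    rel = points - ray_origin                        # (N, 3) offsets from the ray origin
    t = rel @ ray_dir                                # projections onto the (unit) ray
    perp = rel - np.outer(t, ray_dir)                # perpendicular offsets to the ray
    nearest = np.argsort(np.linalg.norm(perp, axis=1))[:k]
    # One radiance value per ray; no dense per-object sampling is needed.
    return decoder(ray_dir, feats[nearest])
```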
- Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and View Synthesis [28.356700318603565]
We explicitly model the light transport between scene surfaces, relying on traditional integration schemes and the rendering equation to reconstruct the scene.
By learning decomposed transport with surface representations established in conventional rendering methods, the method naturally facilitates editing shape, reflectance, lighting and scene composition.
We validate the proposed approach for scene editing, relighting and reflectance estimation learned from synthetic and captured views on a subset of NeRV's datasets.
arXiv Detail & Related papers (2021-04-28T03:47:48Z)
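The rendering equation this method integrates against can be estimated with plain Monte Carlo sampling; in the sketch below, `brdf` and `incoming_radiance` are placeholders for the learned surface and lighting models.

```python
import numpy as np

def shade(x, n, view_dir, brdf, incoming_radiance, num_samples=64):
    """Monte Carlo estimate of the rendering equation's reflected term:
    L_o(x, v) = hemisphere integral of f_r(x, d, v) * L_i(x, d) * cos(theta)."""
    total = 0.0
    for _ in range(num_samples):
        # Uniformly sample a direction on the hemisphere around the normal n.
        d = np.random.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, n) < 0.0:
            d = -d
        # brdf and incoming_radiance stand in for the learned models.
        total += brdf(x, d, view_dir) * incoming_radiance(x, d) * np.dot(d, n)
    # Divide by the uniform-hemisphere pdf, 1 / (2 * pi).
    return total * 2.0 * np.pi / num_samples
```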
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
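The naive baseline that light stage super-resolution improves on is linear blending of the nearest captured lights; a hedged sketch of that aggregation step (the learned network that replaces the blend is omitted):

```python
import numpy as np

def relight_naive(query_dir, light_dirs, images, k=3):
    """Naive baseline: blend the images of the k stage lights nearest the
    query direction. Linear blending smears hard shadows and specular
    highlights, which is what the learned aggregation avoids."""
    sims = light_dirs @ query_dir                    # (num_lights,) cosine similarities
    nearest = np.argsort(-sims)[:k]
    weights = sims[nearest] / sims[nearest].sum()    # normalized blend weights
    # Weighted sum of the neighboring light-stage images: (k, H, W, 3) -> (H, W, 3).
    return np.tensordot(weights, images[nearest], axes=1)
```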
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of the information and is not responsible for any consequences of its use.