Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination
- URL: http://arxiv.org/abs/2207.13607v1
- Date: Wed, 27 Jul 2022 16:07:48 GMT
- Title: Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination
- Authors: Linjie Lyu, Ayush Tewari, Thomas Leimkuehler, Marc Habermann, and
Christian Theobalt
- Abstract summary: We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
- Score: 63.992213016011235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a set of images of a scene, the re-rendering of this scene from novel
views and lighting conditions is an important and challenging problem in
Computer Vision and Graphics. On the one hand, most existing works in Computer
Vision usually impose many assumptions regarding the image formation process,
e.g. direct illumination and predefined materials, to make scene parameter
estimation tractable. On the other hand, mature Computer Graphics tools allow
modeling of complex photo-realistic light transport given all the scene
parameters. Combining these approaches, we propose a method for scene
relighting under novel views by learning a neural precomputed radiance transfer
function, which implicitly handles global illumination effects using novel
environment maps. Our method can be solely supervised on a set of real images
of the scene under a single unknown lighting condition. To disambiguate the
task during training, we tightly integrate a differentiable path tracer in the
training process and propose a combination of a synthesized OLAT and a real
image loss. Results show that the recovered disentanglement of scene parameters
improves significantly over the current state of the art and, thus, our
re-rendering results are also more realistic and accurate.
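To make the precomputed radiance transfer (PRT) idea in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the relit radiance at a surface point is a learned transfer vector contracted against a discretized environment map, and training combines a photometric loss on real images (captured under a single unknown illumination) with a loss on synthesized OLAT (one-light-at-a-time) renderings from a differentiable path tracer. The network TransferField, the environment-map resolution, the MSE losses, and the weight w_olat are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TransferField(nn.Module):
    # Hypothetical MLP mapping a 3D point and view direction to a radiance-transfer
    # vector over n_env environment-map texels (RGB per texel).
    def __init__(self, n_env: int = 16 * 32, hidden: int = 256):
        super().__init__()
        self.n_env = n_env
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_env * 3),
        )

    def forward(self, x: torch.Tensor, view_dir: torch.Tensor) -> torch.Tensor:
        t = self.net(torch.cat([x, view_dir], dim=-1))
        return t.view(*x.shape[:-1], self.n_env, 3)

def relight(transfer: torch.Tensor, env_map: torch.Tensor) -> torch.Tensor:
    # Discretized PRT: outgoing radiance L = sum_i T_i * E_i per color channel.
    # transfer: (..., n_env, 3), env_map: (n_env, 3) -> radiance: (..., 3)
    return (transfer * env_map).sum(dim=-2)

def total_loss(pred_real, gt_real, pred_olat, gt_olat, w_olat: float = 1.0):
    # Assumed combination of the two supervision signals described in the abstract:
    # a loss against real photographs under the unknown capture lighting, plus
    # a loss against OLAT images synthesized by a differentiable path tracer.
    return F.mse_loss(pred_real, gt_real) + w_olat * F.mse_loss(pred_olat, gt_olat)

# Example query: relight 1024 surface points under a new 16x32 environment map.
model = TransferField()
pts, dirs = torch.rand(1024, 3), F.normalize(torch.rand(1024, 3), dim=-1)
rgb = relight(model(pts, dirs), env_map=torch.rand(16 * 32, 3))   # (1024, 3)

Once trained, relighting under a novel environment map reduces to a single contraction per shaded point, which is the usual appeal of PRT-style representations.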
Related papers
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Scene Representation Transformer: Geometry-Free Novel View Synthesis Through Set-Latent Scene Representations [48.05445941939446]
A classical problem in computer vision is to infer a 3D scene representation from few images that can be used to render novel views at interactive rates.
We propose the Scene Representation Transformer (SRT), a method which processes posed or unposed RGB images of a new area.
We show that this method outperforms recent baselines in terms of PSNR and speed on synthetic datasets.
arXiv Detail & Related papers (2021-11-25T16:18:56Z)
- Neural Relightable Participating Media Rendering [26.431106015677]
We learn neural representations for participating media with a complete simulation of global illumination.
Our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-10-25T14:36:15Z)
- Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and View Synthesis [28.356700318603565]
We explicitly model the light transport between scene surfaces and we rely on traditional integration schemes and the rendering equation to reconstruct a scene.
By learning decomposed transport with surface representations established in conventional rendering methods, the method naturally facilitates editing shape, reflectance, lighting and scene composition.
We validate the proposed approach for scene editing, relighting and reflectance estimation learned from synthetic and captured views on a subset of NeRV's datasets.
arXiv Detail & Related papers (2021-04-28T03:47:48Z)
- Object-Centric Neural Scene Rendering [19.687759175741824]
We present a method for composing photorealistic scenes from captured images of objects.
Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene.
We learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network.
arXiv Detail & Related papers (2020-12-15T18:55:02Z)
- Neural Scene Graphs for Dynamic Scenes [57.65413768984925]
We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes together with a jointly learned latent representation that describes objects with a single implicit function.
arXiv Detail & Related papers (2020-11-20T12:37:10Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- Neural Light Transport for Relighting and View Synthesis [70.39907425114302]
Light transport (LT) of a scene describes how it appears under different lighting and viewing directions.
We propose a semi-parametric approach to learn a neural representation of LT embedded in a texture atlas of known geometric properties.
We show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition.
arXiv Detail & Related papers (2020-08-09T20:13:15Z)
- Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images [59.53382863519189]
We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting.
At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids.
We show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
arXiv Detail & Related papers (2020-07-20T05:38:11Z)
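As a concrete illustration of the voxel-grid representation described in the Deep Reflectance Volumes entry above, here is a small hedged sketch in the same PyTorch style: opacity, normal, and reflectance values stored on a regular 3D grid and queried by trilinear interpolation. The grid resolution, channel layout, and the helper name query are illustrative assumptions rather than the paper's exact design.

import torch
import torch.nn.functional as F

class ReflectanceVolume(torch.nn.Module):
    def __init__(self, res: int = 128):
        super().__init__()
        # Channels: 1 opacity + 3 normal + 3 albedo = 7, stored on a res^3 grid.
        self.grid = torch.nn.Parameter(torch.zeros(1, 7, res, res, res))

    def query(self, points: torch.Tensor) -> dict:
        # points: (N, 3) in [-1, 1]^3, ordered (x, y, z) as grid_sample expects.
        g = points.view(1, -1, 1, 1, 3)                          # (1, N, 1, 1, 3)
        vals = F.grid_sample(self.grid, g, align_corners=True)   # (1, 7, N, 1, 1)
        vals = vals.view(7, -1).t()                              # (N, 7)
        return {
            "opacity": torch.sigmoid(vals[:, 0:1]),
            "normal": F.normalize(vals[:, 1:4], dim=-1),
            "albedo": torch.sigmoid(vals[:, 4:7]),
        }

A differentiable ray marcher can then accumulate these per-point quantities along camera rays, which is what makes such volumes both renderable from posed images and editable afterwards.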
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.