Neural Free-Viewpoint Relighting for Glossy Indirect Illumination
- URL: http://arxiv.org/abs/2307.06335v1
- Date: Wed, 12 Jul 2023 17:56:09 GMT
- Title: Neural Free-Viewpoint Relighting for Glossy Indirect Illumination
- Authors: Nithin Raghavan, Yan Xiao, Kai-En Lin, Tiancheng Sun, Sai Bi, Zexiang Xu, Tzu-Mao Li, Ravi Ramamoorthi
- Abstract summary: We show a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view.
We demonstrate real-time rendering of challenging scenes involving view-dependent reflections and even caustics.
- Score: 44.32630651762033
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Precomputed Radiance Transfer (PRT) remains an attractive solution for
real-time rendering of complex light transport effects such as glossy global
illumination. After precomputation, we can relight the scene with new
environment maps while changing viewpoint in real-time. However, practical PRT
methods are usually limited to low-frequency spherical harmonic lighting.
All-frequency techniques using wavelets are promising but have so far had
little practical impact. The curse of dimensionality and much higher data
requirements have typically limited them to relighting with fixed view or only
direct lighting with triple product integrals. In this paper, we demonstrate a
hybrid neural-wavelet PRT solution to high-frequency indirect illumination,
including glossy reflection, for relighting with changing view. Specifically,
we seek to represent the light transport function in the Haar wavelet basis.
For global illumination, we learn the wavelet transport using a small
multi-layer perceptron (MLP) applied to a feature field as a function of
spatial location and wavelet index, with reflected direction and material
parameters being other MLP inputs. We optimize/learn the feature field
(compactly represented by a tensor decomposition) and MLP parameters from
multiple images of the scene under different lighting and viewing conditions.
We demonstrate real-time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed
rendering of challenging scenes involving view-dependent reflections and even
caustics.
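The relighting computation the abstract describes reduces to a sparse dot product: project the environment map into the Haar wavelet basis, keep the largest-magnitude coefficients, and multiply them against per-pixel transport coefficients predicted from spatial location, wavelet index, reflected direction, and material parameters. The sketch below is a minimal illustration of that structure only; `haar2d`, `sparse_lighting`, and the random linear `transport_coeff` stand-in are hypothetical names, and the actual method replaces the stand-in with a small MLP over a tensor-decomposed feature field learned from multi-view, multi-lighting images.

```python
# Minimal sketch of neural-wavelet PRT relighting, assuming a square
# power-of-two environment map. All names are illustrative, not the
# authors' implementation.
import numpy as np

def haar2d(img: np.ndarray) -> np.ndarray:
    """Full multi-level 2D Haar wavelet transform (orthonormal)."""
    out = img.astype(np.float64).copy()
    n = out.shape[0]
    while n > 1:
        for axis in (0, 1):
            even = np.take(out[:n, :n], range(0, n, 2), axis=axis)
            odd = np.take(out[:n, :n], range(1, n, 2), axis=axis)
            out[:n, :n] = np.concatenate(
                [(even + odd) / np.sqrt(2.0),
                 (even - odd) / np.sqrt(2.0)], axis=axis)
        n //= 2
    return out

def sparse_lighting(env: np.ndarray, k: int):
    """Wavelet-project the environment map, keeping only the k
    largest-magnitude coefficients (the all-frequency approximation)."""
    coeffs = haar2d(env).ravel()
    idx = np.argsort(np.abs(coeffs))[-k:]
    return idx, coeffs[idx]

# Stand-in for the paper's small MLP over a tensor-decomposed feature
# field: maps (position, wavelet index, reflected direction, roughness)
# to one transport coefficient. A fixed random linear map keeps the
# sketch runnable; it carries no learned structure.
rng = np.random.default_rng(0)
W = rng.standard_normal(8)  # hypothetical "learned" weights

def transport_coeff(x, wavelet_idx, omega_r, roughness):
    feat = np.concatenate([x, [wavelet_idx * 1e-4], omega_r, [roughness]])
    return float(W @ feat)

# Relight one pixel: dot product between the sparse lighting coefficients
# and the predicted transport coefficients at the same wavelet indices.
env = rng.random((64, 64))                     # toy environment map
idx, light = sparse_lighting(env, k=128)
x, omega_r, rough = np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.3
radiance = sum(l * transport_coeff(x, i, omega_r, rough)
               for i, l in zip(idx, light))
print(f"relit radiance: {radiance:.4f}")
```

Truncating to the k largest wavelet coefficients is what makes all-frequency relighting tractable here: per-pixel cost scales with k rather than with the full environment-map resolution, which is why wavelets can capture sharp lighting that low-order spherical harmonics cannot.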
Related papers
- Flying with Photons: Rendering Novel Views of Propagating Light [37.06220870989172]
We present an imaging and neural rendering technique that seeks to synthesize videos of light propagating through a scene from novel, moving camera viewpoints.
Our approach relies on a new ultrafast imaging setup to capture a first-of-its-kind multi-viewpoint video dataset with picosecond-level temporal resolution.
arXiv Detail & Related papers (2024-04-09T17:48:52Z)
- LightSpeed: Light and Fast Neural Light Fields on Mobile Devices [29.080086014074613]
Real-time novel-view image synthesis on mobile devices is prohibitively expensive due to limited computational power and storage.
Recent advances in neural light field representations have shown promising real-time view synthesis results on mobile devices.
arXiv Detail & Related papers (2023-10-25T17:59:05Z)
- Self-Calibrating, Fully Differentiable NLOS Inverse Rendering [15.624750787186803]
Time-resolved non-line-of-sight (NLOS) imaging methods reconstruct hidden scenes by inverting the optical paths of indirect illumination measured at visible relay surfaces.
We introduce a fully-differentiable end-to-end NLOS inverse rendering pipeline that self-calibrates the imaging parameters during the reconstruction of hidden scenes.
We demonstrate the robustness of our method to consistently reconstruct geometry and albedo, even under significant noise levels.
arXiv Detail & Related papers (2023-09-21T13:15:54Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Learning Neural Transmittance for Efficient Rendering of Reflectance Fields [43.24427791156121]
We propose a novel method based on precomputed Neural Transmittance Functions to accelerate rendering of neural reflectance fields.
Results on real and synthetic scenes demonstrate almost two orders of magnitude speedup for renderings under environment maps with minimal accuracy loss.
arXiv Detail & Related papers (2021-10-25T21:12:25Z)
- Sparse Needlets for Lighting Estimation with Spherical Transport Loss [89.52531416604774]
NeedleLight is a new lighting estimation model that represents illumination with needlets and allows lighting estimation jointly in the frequency and spatial domains.
Extensive experiments show that NeedleLight achieves superior lighting estimation consistently across multiple evaluation metrics as compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-06-24T15:19:42Z)
- PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting [60.75436852495868]
We present PhySG, an inverse rendering pipeline that reconstructs geometry, materials, and illumination from scratch from RGB input images.
We demonstrate, with both synthetic and real data, that our reconstructions not only enable rendering of novel viewpoints, but also physics-based appearance editing of materials and illumination.
arXiv Detail & Related papers (2021-04-01T17:59:02Z)
- NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis [45.71507069571216]
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting.
This produces a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.
arXiv Detail & Related papers (2020-12-07T18:56:08Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- Crowdsampling the Plenoptic Function [56.10020793913216]
We present a new approach to novel view synthesis under time-varying illumination, learned from crowdsourced photo collections of a scene.
We introduce a new DeepMPI representation, motivated by observations on the sparsity structure of the plenoptic function.
Our method can synthesize the same compelling parallax and view-dependent effects as previous MPI methods, while simultaneously interpolating along changes in reflectance and illumination with time.
arXiv Detail & Related papers (2020-07-30T02:52:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.