Neural Relightable Participating Media Rendering
- URL: http://arxiv.org/abs/2110.12993v1
- Date: Mon, 25 Oct 2021 14:36:15 GMT
- Title: Neural Relightable Participating Media Rendering
- Authors: Quan Zheng, Gurprit Singh, Hans-Peter Seidel
- Abstract summary: We learn neural representations for participating media with a complete simulation of global illumination.
Our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods.
- Score: 26.431106015677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning neural radiance fields of a scene has recently enabled realistic
novel view synthesis, but such fields can only synthesize images under the
original, fixed lighting condition. They are therefore not flexible enough for
sought-after tasks such as relighting, scene editing and scene composition. To
tackle this problem, several recent methods propose to
disentangle reflectance and illumination from the radiance field. These methods
can cope with solid objects with opaque surfaces but participating media are
neglected. Moreover, these methods account for only direct illumination, or at
most one-bounce indirect illumination, and thus suffer from energy loss because
higher-order indirect illumination is ignored. We propose to learn neural
representations for participating media with a complete simulation of global
illumination. We estimate direct illumination via ray tracing and compute
indirect illumination with spherical harmonics. Our approach avoids explicitly
computing lengthy chains of indirect bounces and does not suffer from energy loss. Our
experiments on multiple scenes show that our approach achieves superior visual
quality and numerical performance compared to state-of-the-art methods, and it
can generalize to deal with solid objects with opaque surfaces as well.
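The abstract can be read as describing a single-scattering volume renderer in which a shadow ray supplies the direct term and a learned spherical-harmonics expansion stands in for all higher-order bounces. Below is a minimal sketch of that reading, not the authors' implementation: `query_medium(x)` is a hypothetical stand-in for the learned neural representation (returning extinction, scattering albedo and SH coefficients at a point), a single point light is assumed, and the phase function is omitted for brevity.

```python
import numpy as np

def sh_basis(d):
    """Real spherical-harmonic basis up to degree 2 for a unit direction d."""
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def transmittance(query_medium, a, b, n_steps=32):
    """Beer-Lambert transmittance between points a and b via ray marching."""
    step = (b - a) / n_steps
    dt = np.linalg.norm(step)
    tau = sum(query_medium(a + (i + 0.5) * step)[0] for i in range(n_steps)) * dt
    return np.exp(-tau)

def render_ray(query_medium, origin, direction, light_pos, light_intensity,
               t_near, t_far, n_samples=64):
    """Radiance along a camera ray: ray-traced direct term + SH indirect term."""
    dt = (t_far - t_near) / n_samples
    radiance, trans_cam = 0.0, 1.0
    for i in range(n_samples):
        x = origin + (t_near + (i + 0.5) * dt) * direction
        sigma_t, albedo, sh_coeffs = query_medium(x)
        # Direct illumination: shadow ray toward a point light (ray tracing).
        to_light = light_pos - x
        L_direct = light_intensity / np.dot(to_light, to_light)
        L_direct *= transmittance(query_medium, x, light_pos)
        # Indirect illumination: evaluate the learned SH expansion in the
        # outgoing direction instead of tracing further bounces.
        L_indirect = float(np.dot(sh_coeffs, sh_basis(-direction)))
        radiance += trans_cam * sigma_t * albedo * (L_direct + L_indirect) * dt
        trans_cam *= np.exp(-sigma_t * dt)
    return radiance
```

In a learned setting, `query_medium` would be an MLP queried at each sample point and the whole loop would be differentiated end-to-end against the training images; those details are assumptions here, not taken from the paper.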
Related papers
- SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and
Illumination Removal in High-Illuminance Scenes [51.50157919750782]
We present SIRe-IR, an inverse rendering approach based on implicit neural representations that decomposes the scene into an environment map, albedo, and roughness.
By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to remove both shadows and indirect illumination.
Even in the presence of intense illumination, our method recovers high-quality albedo and roughness with no shadow interference.
arXiv Detail & Related papers (2023-10-19T10:44:23Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
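For context, precomputed radiance transfer in its classical spherical-harmonics form reduces relighting to a dot product between a per-point transfer vector and the SH coefficients of the environment lighting; the entry above learns such a transfer function with a network instead of precomputing it. A toy illustration of the classical form (not the paper's method):

```python
import numpy as np

def relight(transfer_sh: np.ndarray, light_sh: np.ndarray) -> float:
    """Classical SH-based PRT: outgoing radiance at a point is the dot
    product of its transfer coefficients (visibility, BRDF and
    interreflection baked in) with the SH projection of the lighting."""
    return float(np.dot(transfer_sh, light_sh))

# Made-up degree-2 coefficients (9 values each), purely for illustration.
transfer_sh = np.array([0.8, 0.1, -0.05, 0.02, 0.0, 0.01, -0.03, 0.0, 0.02])
light_sh = np.array([1.2, 0.3, 0.5, -0.1, 0.0, 0.05, 0.1, 0.0, -0.02])
print(relight(transfer_sh, light_sh))  # relit radiance under this lighting
```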
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z) - Modeling Indirect Illumination for Inverse Rendering [31.734819333921642]
In this paper, we propose a novel approach to efficiently recovering spatially-varying indirect illumination.
The key insight is that indirect illumination can be conveniently derived from the neural radiance field learned from input images.
Experiments on both synthetic and real data demonstrate the superior performance of our approach compared to previous work.
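The insight above, that a trained radiance field already contains the scene's indirect lighting, can be sketched as a Monte Carlo estimate: at a surface point, sample the hemisphere and query the field along each direction instead of tracing recursive bounces. The helper names below (`radiance_field` in particular) are hypothetical, not the paper's API.

```python
import numpy as np

def sample_hemisphere(normal, rng):
    """Cosine-weighted direction sample around the surface normal."""
    u1, u2 = rng.random(2)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    axis = np.array([0.0, 1.0, 0.0]) if abs(normal[0]) > 0.5 else np.array([1.0, 0.0, 0.0])
    t = np.cross(normal, axis)
    t /= np.linalg.norm(t)
    b = np.cross(normal, t)
    return local[0] * t + local[1] * b + local[2] * normal

def indirect_irradiance(radiance_field, x, normal, n_samples=64, seed=0):
    """Estimate indirect irradiance at x by querying a trained radiance field
    along hemisphere directions, instead of tracing recursive light bounces."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        d = sample_hemisphere(normal, rng)
        # Incoming radiance from direction d equals the field's outgoing
        # radiance toward x, i.e. the field evaluated with view direction -d.
        total += radiance_field(x, -d)
    # Cosine-weighted sampling: the cosine/pdf terms cancel up to a factor pi.
    return np.pi * total / n_samples
```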
arXiv Detail & Related papers (2022-04-14T09:10:55Z) - Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields, which represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z) - Self-supervised Outdoor Scene Relighting [92.20785788740407]
We propose a self-supervised approach for relighting.
Our approach is trained only on corpora of images collected from the internet, without any user supervision.
Results show that our technique produces photo-realistic and physically plausible relighting and generalizes to unseen scenes.
arXiv Detail & Related papers (2021-07-07T09:46:19Z) - Object-Centric Neural Scene Rendering [19.687759175741824]
We present a method for composing photorealistic scenes from captured images of objects.
Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene.
We learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network.
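Conceptually, an object-centric scattering function like the one above maps a point in the object's canonical frame, an incoming light direction and an outgoing view direction to a density and a scattered-radiance response, so composed scenes can reuse one representation per object. A minimal PyTorch-style sketch under assumed layer sizes and parameterization (not the published architecture):

```python
import torch
import torch.nn as nn

class ObjectScatteringFunction(nn.Module):
    """Toy lighting- and view-dependent per-object network: given a 3D point,
    a light direction and a view direction, predict a volumetric density and
    the fraction of incoming radiance scattered toward the viewer."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(9, hidden), nn.ReLU(),   # 3 (point) + 3 (light) + 3 (view)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),              # -> (raw density, raw scatter)
        )

    def forward(self, x, light_dir, view_dir):
        out = self.net(torch.cat([x, light_dir, view_dir], dim=-1))
        density = torch.relu(out[..., :1])       # non-negative extinction
        scatter = torch.sigmoid(out[..., 1:])    # scattered fraction in [0, 1]
        return density, scatter

# Usage: query one sample point under one light/view configuration.
osf = ObjectScatteringFunction()
density, scatter = osf(torch.zeros(1, 3), torch.tensor([[0.0, 0.0, 1.0]]),
                       torch.tensor([[0.0, 1.0, 0.0]]))
```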
arXiv Detail & Related papers (2020-12-15T18:55:02Z) - NeRD: Neural Reflectance Decomposition from Image Collections [50.945357655498185]
NeRD achieves this reflectance decomposition by introducing physically-based rendering to neural radiance fields.
Even challenging non-Lambertian reflectances, complex geometry, and unknown illumination can be decomposed to high-quality models.
arXiv Detail & Related papers (2020-12-07T18:45:57Z)