The Sky's the Limit: Re-lightable Outdoor Scenes via a Sky-pixel Constrained Illumination Prior and Outside-In Visibility
- URL: http://arxiv.org/abs/2311.16937v2
- Date: Tue, 30 Jul 2024 12:46:04 GMT
- Title: The Sky's the Limit: Re-lightable Outdoor Scenes via a Sky-pixel Constrained Illumination Prior and Outside-In Visibility
- Authors: James A. D. Gardner, Evgenii Kashin, Bernhard Egger, William A. P. Smith
- Abstract summary: Inverse rendering of outdoor scenes from unconstrained image collections is a challenging task.
We exploit the fact that any sky pixel provides a direct observation of distant lighting.
Our method estimates high-quality albedo, geometry, illumination and sky visibility.
- Score: 18.46907109338604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inverse rendering of outdoor scenes from unconstrained image collections is a challenging task, particularly due to illumination/albedo ambiguities and occlusion of the illumination environment (shadowing) caused by geometry. However, there are many cues in an image that can aid in the disentanglement of geometry, albedo and shadows. Whilst the sky is frequently masked out in state-of-the-art methods, we exploit the fact that any sky pixel provides a direct observation of distant lighting in the corresponding direction and, via a neural illumination prior, a statistical cue from which to infer the remaining illumination environment. The incorporation of our illumination prior is enabled by a novel `outside-in' method for computing differentiable sky visibility based on a neural directional distance function. This is highly efficient and can be trained in parallel with the neural scene representation, allowing gradients from the appearance loss to flow through shadows to influence the estimation of illumination and geometry. Our method estimates high-quality albedo, geometry, illumination and sky visibility, achieving state-of-the-art results on the NeRF-OSR relighting benchmark. Our code and models can be found at https://github.com/JADGardner/neusky
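The core sky-pixel constraint can be illustrated with a toy sketch: each sky pixel is a direct sample of the distant environment along its viewing direction, so those observations can directly constrain a parametric illumination model. The sketch below is not the paper's neural illumination prior; it is a minimal stand-in that fits first-order spherical-harmonic lighting coefficients to sky pixels by least squares, and all function names are hypothetical.

```python
import numpy as np

def sh_basis(dirs):
    """First-order (4-coefficient) real spherical-harmonic basis for unit directions."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([np.full_like(x, 0.2820948),  # l=0
                     0.4886025 * y,               # l=1, m=-1
                     0.4886025 * z,               # l=1, m=0
                     0.4886025 * x], axis=1)      # l=1, m=1

def fit_sky_illumination(sky_dirs, sky_rgb):
    """Fit SH lighting coefficients to observed sky pixels.

    Each sky pixel is a direct observation of distant lighting along its
    ray direction, so the illumination model is constrained by solving
    the linear system  B @ coeffs ~= rgb  in the least-squares sense.
    """
    B = sh_basis(sky_dirs)                                # (N, 4)
    coeffs, *_ = np.linalg.lstsq(B, sky_rgb, rcond=None)  # (4, 3)
    return coeffs

def eval_illumination(coeffs, dirs):
    """Query the fitted environment lighting in arbitrary directions."""
    return sh_basis(dirs) @ coeffs
```

In the actual method the illumination model is a learned neural prior and visibility is handled by the outside-in directional distance function, but the same principle applies: sky observations anchor the lighting in the observed directions, and the prior extrapolates the rest.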
Related papers
- GaNI: Global and Near Field Illumination Aware Neural Inverse Rendering [21.584362527926654]
GaNI can reconstruct geometry, albedo, and roughness parameters from images of a scene captured with co-located light and camera.
Existing inverse rendering techniques with a co-located light and camera focus on single objects only.
arXiv Detail & Related papers (2024-03-22T23:47:19Z) - SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes [51.50157919750782]
We present SIRe-IR, an implicit neural inverse rendering approach that decomposes the scene into an environment map, albedo, and roughness.
By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to remove both shadows and indirect illumination.
Even in the presence of intense illumination, our method recovers high-quality albedo and roughness with no shadow interference.
arXiv Detail & Related papers (2023-10-19T10:44:23Z) - Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields [65.96818069005145]
Vanilla NeRF is viewer-centred: it simplifies the rendering process to light emission from 3D locations along the viewing direction.
Inspired by the emission theory of the ancient Greeks, we make slight modifications to vanilla NeRF to train on multiple views of low-light scenes.
We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage.
arXiv Detail & Related papers (2023-03-10T09:28:09Z) - Geometry-aware Single-image Full-body Human Relighting [37.381122678376805]
Single-image human relighting aims to relight a target human under new lighting conditions by decomposing the input image into albedo, shape and lighting.
Previous methods suffer from both the entanglement between albedo and lighting and the lack of hard shadows.
Our framework is able to generate photo-realistic high-frequency shadows such as cast shadows under challenging lighting conditions.
arXiv Detail & Related papers (2022-07-11T10:21:02Z) - Modeling Indirect Illumination for Inverse Rendering [31.734819333921642]
In this paper, we propose a novel approach to efficiently recovering spatially-varying indirect illumination.
The key insight is that indirect illumination can be conveniently derived from the neural radiance field learned from input images.
Experiments on both synthetic and real data demonstrate the superior performance of our approach compared to previous work.
arXiv Detail & Related papers (2022-04-14T09:10:55Z) - Neural Radiance Fields for Outdoor Scene Relighting [70.97747511934705]
We present NeRF-OSR, the first approach for outdoor scene relighting based on neural radiance fields.
In contrast to the prior art, our technique allows simultaneous editing of both scene illumination and camera viewpoint.
It also includes a dedicated network for shadow reproduction, which is crucial for high-quality outdoor scene relighting.
arXiv Detail & Related papers (2021-12-09T18:59:56Z) - Neural Relightable Participating Media Rendering [26.431106015677]
We learn neural representations for participating media with a complete simulation of global illumination.
Our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-10-25T14:36:15Z) - Self-supervised Outdoor Scene Relighting [92.20785788740407]
We propose a self-supervised approach for relighting.
Our approach is trained only on corpora of images collected from the internet without any user-supervision.
Results show the ability of our technique to produce photo-realistic and physically plausible results that generalize to unseen scenes.
arXiv Detail & Related papers (2021-07-07T09:46:19Z) - Shadow Neural Radiance Fields for Multi-view Satellite Photogrammetry [1.370633147306388]
We present a new generic method for shadow-aware multi-view satellite photogrammetry of Earth Observation scenes.
Our proposed method, the Shadow Neural Radiance Field (S-NeRF), follows recent advances in implicit volumetric representation learning.
For each scene, we train S-NeRF using very high spatial resolution optical images taken from known viewing angles. The learning requires no labels or shape priors: it is self-supervised by an image reconstruction loss.
arXiv Detail & Related papers (2021-04-20T10:17:34Z) - NeRD: Neural Reflectance Decomposition from Image Collections [50.945357655498185]
NeRD is a method that achieves this decomposition by introducing physically-based rendering to neural radiance fields.
Even challenging non-Lambertian reflectances, complex geometry, and unknown illumination can be decomposed to high-quality models.
arXiv Detail & Related papers (2020-12-07T18:45:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.