Shadow Neural Radiance Fields for Multi-view Satellite Photogrammetry
- URL: http://arxiv.org/abs/2104.09877v1
- Date: Tue, 20 Apr 2021 10:17:34 GMT
- Title: Shadow Neural Radiance Fields for Multi-view Satellite Photogrammetry
- Authors: Dawa Derksen, Dario Izzo
- Abstract summary: We present a new generic method for shadow-aware multi-view satellite photogrammetry of Earth Observation scenes.
Our proposed method, the Shadow Neural Radiance Field (S-NeRF), follows recent advances in implicit volumetric representation learning.
For each scene, we train S-NeRF using very high spatial resolution optical images taken from known viewing angles. The learning requires no labels or shape priors: it is self-supervised by an image reconstruction loss.
- Score: 1.370633147306388
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a new generic method for shadow-aware multi-view satellite photogrammetry of Earth Observation scenes. Our proposed method, the Shadow Neural Radiance Field (S-NeRF), follows recent advances in implicit volumetric representation learning. For each scene, we train S-NeRF using very high spatial resolution optical images taken from known viewing angles. The learning requires no labels or shape priors: it is self-supervised by an image reconstruction loss. To accommodate changing light source conditions, both from a directional light source (the Sun) and a diffuse light source (the sky), we extend the NeRF approach in two ways. First, direct illumination from the Sun is modeled via a local light source visibility field. Second, indirect illumination from a diffuse light source is learned as a non-local color field as a function of the position of the Sun. Quantitatively, the combination of these factors reduces the altitude and color errors in shaded areas compared to NeRF. The S-NeRF methodology not only performs novel view synthesis and full 3D shape estimation; it also enables shadow detection, albedo synthesis, and transient object filtering, without any explicit shape supervision.
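The two illumination components described in the abstract (a Sun visibility term and a sky color term) can be sketched as a simple shading model. This is a hedged illustration only, not the paper's implementation: the names `albedo`, `sun_visibility`, `sun_irradiance`, and `sky_irradiance` are hypothetical stand-ins for the learned fields.

```python
import numpy as np

def snerf_shading(albedo, sun_visibility, sun_irradiance, sky_irradiance):
    """Sketch of a shadow-aware shading model in the spirit of S-NeRF.

    albedo         : (..., 3) color learned per 3D point
    sun_visibility : (...,)   scalar in [0, 1]; 1 = fully lit by the Sun
    sun_irradiance : (3,)     direct (directional) light color
    sky_irradiance : (3,)     diffuse light color; in S-NeRF this is a
                              learned function of the Sun's position
    """
    v = sun_visibility[..., None]  # broadcast over the color channels
    # Blend direct and diffuse light by Sun visibility, then apply albedo.
    irradiance = v * sun_irradiance + (1.0 - v) * sky_irradiance
    return albedo * irradiance

# Toy example: a half-shadowed gray point under a warm Sun and a blue sky.
color = snerf_shading(
    albedo=np.array([[0.5, 0.5, 0.5]]),
    sun_visibility=np.array([0.5]),
    sun_irradiance=np.array([1.0, 0.9, 0.8]),
    sky_irradiance=np.array([0.2, 0.3, 0.5]),
)
```

In fully shadowed points (`sun_visibility` near 0) only the sky term survives, which is what lets the method keep altitude and color errors low in shaded areas.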
Related papers
- BRDF-NeRF: Neural Radiance Fields with Optical Satellite Images and BRDF Modelling [0.0]
We introduce BRDF-NeRF, which incorporates the physically-based semi-empirical Rahman-Pinty-Verstraete (RPV) BRDF model.
BRDF-NeRF successfully synthesizes novel views from unseen angles and generates high-quality digital surface models.
arXiv Detail & Related papers (2024-09-18T14:28:52Z)
- A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis [6.883971329818549]
We introduce a method to create relightable radiance fields using single-illumination data.
We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned by light direction.
We show results on synthetic and real multi-view data under single illumination.
arXiv Detail & Related papers (2024-09-13T16:07:25Z)
- SUNDIAL: 3D Satellite Understanding through Direct, Ambient, and Complex Lighting Decomposition [17.660328148833134]
SUNDIAL is a comprehensive approach to 3D reconstruction of satellite imagery using neural radiance fields.
We learn satellite scene geometry, illumination components, and sun direction in this single-model approach.
We evaluate the performance of SUNDIAL against existing NeRF-based techniques for satellite scene modeling.
arXiv Detail & Related papers (2023-12-24T02:46:44Z)
- PlatoNeRF: 3D Reconstruction in Plato's Cave via Single-View Two-Bounce Lidar [25.332440946211236]
3D reconstruction from a single-view is challenging because of the ambiguity from monocular cues and lack of information about occluded regions.
We propose using time-of-flight data captured by a single-photon avalanche diode to overcome these limitations.
We demonstrate that we can reconstruct visible and occluded geometry without data priors or reliance on controlled ambient lighting or scene albedo.
arXiv Detail & Related papers (2023-12-21T18:59:53Z)
- Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption [65.96818069005145]
We introduce the concept of a "Concealing Field," which assigns transmittance values to the surrounding air to account for illumination effects.
In dark scenarios, we assume that object emissions maintain a standard lighting level but are attenuated as they traverse the air during the rendering process.
We present a comprehensive multi-view dataset captured under challenging illumination conditions for evaluation.
arXiv Detail & Related papers (2023-12-14T16:24:09Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods suffer from the existence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- NeRF applied to satellite imagery for surface reconstruction [5.027411102165872]
We present Surf-NeRF, a modified implementation of the recently introduced Shadow Neural Radiance Field (S-NeRF) model.
This method is able to synthesize novel views from a sparse set of satellite images of a scene, while accounting for the variation in lighting present in the pictures.
The trained model can also be used to accurately estimate the surface elevation of the scene, which is often a desirable quantity for satellite observation applications.
arXiv Detail & Related papers (2023-04-09T01:37:13Z)
- Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields [65.96818069005145]
Vanilla NeRF is viewer-centred: it simplifies rendering to light emitted from 3D locations along the viewing direction.
Inspired by the emission theory of ancient Greeks, we make slight modifications on vanilla NeRF to train on multiple views of low-light scenes.
We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage.
arXiv Detail & Related papers (2023-03-10T09:28:09Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
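For context on the NeRF baseline that S-NeRF and several of the papers above extend: NeRF composites a ray color by accumulating per-sample densities and colors with standard volume rendering. The sketch below uses the well-known alpha-compositing formulation; the function and variable names are illustrative, not taken from any paper's code.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Standard NeRF-style volume rendering along one ray.

    sigmas : (N,)   per-sample volume densities (the opacity field)
    colors : (N, 3) per-sample RGB (view-dependent in NeRF)
    deltas : (N,)   distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)       # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)      # transmittance past each sample
    trans = np.concatenate([[1.0], trans[:-1]])   # light reaching each sample
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)  # composited ray color

# Toy ray: a transparent green sample in front of an opaque red one;
# the opaque sample should dominate the final color.
rgb = render_ray(
    sigmas=np.array([0.0, 100.0]),
    colors=np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]),
    deltas=np.array([1.0, 1.0]),
)
```

S-NeRF keeps this accumulation but replaces the per-sample color with a shadow-aware shading term, while NeRF++ (above) changes how distant background points are parametrized.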
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.