SUNDIAL: 3D Satellite Understanding through Direct, Ambient, and Complex
Lighting Decomposition
- URL: http://arxiv.org/abs/2312.16215v1
- Date: Sun, 24 Dec 2023 02:46:44 GMT
- Title: SUNDIAL: 3D Satellite Understanding through Direct, Ambient, and Complex
Lighting Decomposition
- Authors: Nikhil Behari, Akshat Dave, Kushagra Tiwary, William Yang, Ramesh
Raskar
- Abstract summary: SUNDIAL is a comprehensive approach to 3D reconstruction of satellite imagery using neural radiance fields.
We learn satellite scene geometry, illumination components, and sun direction in this single-model approach.
We evaluate the performance of SUNDIAL against existing NeRF-based techniques for satellite scene modeling.
- Score: 17.660328148833134
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D modeling from satellite imagery is essential in areas of environmental
science, urban planning, agriculture, and disaster response. However,
traditional 3D modeling techniques face unique challenges in the remote sensing
context, including limited multi-view baselines over extensive regions, varying
direct, ambient, and complex illumination conditions, and time-varying scene
changes across captures. In this work, we introduce SUNDIAL, a comprehensive
approach to 3D reconstruction of satellite imagery using neural radiance
fields. We jointly learn satellite scene geometry, illumination components, and
sun direction in this single-model approach, and propose a secondary shadow ray
casting technique to 1) improve scene geometry using oblique sun angles to
render shadows, 2) enable physically-based disentanglement of scene albedo and
illumination, and 3) determine the components of illumination from direct,
ambient (sky), and complex sources. To achieve this, we incorporate lighting
cues and geometric priors from remote sensing literature in a neural rendering
approach, modeling physical properties of satellite scenes such as shadows,
scattered sky illumination, and complex illumination and shading of vegetation
and water. We evaluate the performance of SUNDIAL against existing NeRF-based
techniques for satellite scene modeling and demonstrate improved scene and
lighting disentanglement, novel view and lighting rendering, and geometry and
sun direction estimation on challenging scenes with small baselines, sparse
inputs, and variable illumination.
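To make the decomposition above concrete, here is a minimal NumPy sketch of how a per-point color could be assembled from a recovered albedo and the three illumination components, with the direct term gated by transmittance along a secondary shadow ray cast toward the sun. The function names, array shapes, and the purely additive composition are assumptions made for illustration from the abstract; they are not the authors' implementation.
```python
import numpy as np

def shadow_transmittance(sigmas, deltas):
    """Transmittance along a secondary ray cast from a surface point toward
    the sun, using the standard NeRF quadrature T = exp(-sum(sigma_i * delta_i)).
    A value near 1 means the point sees the sun directly; near 0 means it lies
    in shadow, which is how oblique sun angles can sharpen the recovered geometry.
    """
    return float(np.exp(-np.sum(np.asarray(sigmas) * np.asarray(deltas))))

def shade(albedo, sun_visibility, sun_irradiance, sky_irradiance, complex_term):
    """Hypothetical per-point shading: albedo times the sum of the three
    illumination components named in the abstract -- direct sun (gated by the
    shadow term), ambient sky, and a residual 'complex' component for effects
    such as vegetation and water shading."""
    albedo = np.asarray(albedo, dtype=float)
    illumination = (sun_visibility * np.asarray(sun_irradiance, dtype=float)
                    + np.asarray(sky_irradiance, dtype=float)
                    + np.asarray(complex_term, dtype=float))
    return albedo * illumination

# Toy usage: one surface point, 64 density samples along its sun-facing shadow ray.
rng = np.random.default_rng(0)
vis = shadow_transmittance(rng.uniform(0.0, 0.05, size=64), np.full(64, 0.5))
rgb = shade(albedo=[0.40, 0.35, 0.30],
            sun_visibility=vis,
            sun_irradiance=[1.00, 0.95, 0.90],
            sky_irradiance=[0.15, 0.18, 0.25],
            complex_term=[0.0, 0.0, 0.0])
print(f"sun visibility: {vis:.3f}, shaded RGB: {rgb}")
```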
Related papers
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model; these colors are then transformed into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model generates photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- Holistic Inverse Rendering of Complex Facade via Aerial 3D Scanning [38.72679977945778]
We use multi-view aerial images to reconstruct the geometry, lighting, and materials of facades using neural signed distance fields (SDFs).
Experiments demonstrate the superior quality of our method in holistic facade inverse rendering, novel view synthesis, and scene editing compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-11-20T15:03:56Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction of higher quality than state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- SPIDR: SDF-based Neural Point Fields for Illumination and Deformation [4.246563675883777]
We introduce SPIDR, a new hybrid neural SDF representation, together with a novel neural implicit model for learning environment lighting.
We demonstrate the effectiveness of SPIDR in enabling high-quality geometry editing with more accurate updates to the scene's illumination.
arXiv Detail & Related papers (2022-10-15T23:34:53Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of our extensions to this formulation provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
- Shadow Neural Radiance Fields for Multi-view Satellite Photogrammetry [1.370633147306388]
We present a new generic method for shadow-aware multi-view satellite photogrammetry of Earth Observation scenes.
Our proposed method, the Shadow Neural Radiance Field (S-NeRF), follows recent advances in implicit volumetric representation learning.
For each scene, we train S-NeRF using very high spatial resolution optical images taken from known viewing angles. The learning requires no labels or shape priors: it is self-supervised by an image reconstruction loss (a minimal sketch of such a loss appears after this list).
arXiv Detail & Related papers (2021-04-20T10:17:34Z)
- NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis [45.71507069571216]
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting.
This produces a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.
arXiv Detail & Related papers (2020-12-07T18:56:08Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
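Several of the entries above, most explicitly S-NeRF, are trained without labels, supervised only by an image reconstruction loss. As a point of reference, the sketch below shows a generic photometric (mean-squared-error) objective over a batch of rays; the exact loss, weighting, and any regularizers used by these papers are not stated in the summaries and are assumed here.
```python
import numpy as np

def photometric_loss(rendered_rgb, observed_rgb):
    """Generic self-supervised reconstruction objective: mean squared error
    between colors rendered by the model and the observed pixel colors.
    This is the standard NeRF-style loss, used here only as an illustration."""
    rendered_rgb = np.asarray(rendered_rgb, dtype=float)
    observed_rgb = np.asarray(observed_rgb, dtype=float)
    return float(np.mean((rendered_rgb - observed_rgb) ** 2))

# Toy usage: a batch of 1024 rays with RGB predictions and targets.
rng = np.random.default_rng(1)
pred = rng.uniform(size=(1024, 3))
target = rng.uniform(size=(1024, 3))
print(f"reconstruction loss: {photometric_loss(pred, target):.4f}")
```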