BRDF-NeRF: Neural Radiance Fields with Optical Satellite Images and BRDF Modelling
- URL: http://arxiv.org/abs/2409.12014v3
- Date: Wed, 25 Sep 2024 18:19:45 GMT
- Title: BRDF-NeRF: Neural Radiance Fields with Optical Satellite Images and BRDF Modelling
- Authors: Lulin Zhang, Ewelina Rupnik, Tri Dung Nguyen, Stéphane Jacquemoud, Yann Klinger
- Abstract summary: We introduce BRDF-NeRF, which incorporates the physically-based semi-empirical Rahman-Pinty-Verstraete (RPV) BRDF model.
BRDF-NeRF successfully synthesizes novel views from unseen angles and generates high-quality digital surface models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRF) have gained prominence as a machine learning technique for representing 3D scenes and estimating the bidirectional reflectance distribution function (BRDF) from multiple images. However, most existing research has focused on close-range imagery, typically modeling scene surfaces with simplified Microfacet BRDF models, which are often inadequate for representing complex Earth surfaces. Furthermore, NeRF approaches generally require large sets of simultaneously captured images for high-quality surface depth reconstruction - a condition rarely met in satellite imaging. To overcome these challenges, we introduce BRDF-NeRF, which incorporates the physically-based semi-empirical Rahman-Pinty-Verstraete (RPV) BRDF model, known to better capture the reflectance properties of natural surfaces. Additionally, we propose guided volumetric sampling and depth supervision to enable radiance field modeling with a minimal number of views. Our method is evaluated on two satellite datasets: (1) Djibouti, captured at varying viewing angles within a single epoch with a fixed Sun position, and (2) Lanzhou, captured across multiple epochs with different Sun positions and viewing angles. Using only three to four satellite images for training, BRDF-NeRF successfully synthesizes novel views from unseen angles and generates high-quality digital surface models (DSMs).
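The RPV model central to BRDF-NeRF is compact enough to state directly. Below is a minimal Python sketch following the common formulation of Rahman, Pinty and Verstraete (1993), with an overall reflectance level rho0, a Minnaert-like exponent k, a Henyey-Greenstein asymmetry parameter theta_hg, and a hot-spot factor; the paper's exact parameterization and sign conventions may differ.

```python
import numpy as np

def rpv_brdf(theta_s, theta_v, phi, rho0, k, theta_hg):
    """Semi-empirical RPV reflectance (Rahman-Pinty-Verstraete, 1993).

    theta_s, theta_v : Sun / view zenith angles (radians)
    phi              : relative azimuth between Sun and view (radians)
    rho0             : overall reflectance level
    k                : Minnaert-like exponent (k < 1 bowl-shaped,
                       k > 1 bell-shaped anisotropy)
    theta_hg         : Henyey-Greenstein asymmetry in (-1, 1); negative
                       values favor backward scattering
    """
    mu_s, mu_v = np.cos(theta_s), np.cos(theta_v)

    # Modified Minnaert term controlling the overall angular shape
    minnaert = (mu_s * mu_v * (mu_s + mu_v)) ** (k - 1.0)

    # Phase angle g between the incoming and outgoing directions
    cos_g = mu_s * mu_v + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi)

    # Henyey-Greenstein phase function (sign conventions vary by author)
    f_hg = (1.0 - theta_hg**2) / (1.0 + 2.0 * theta_hg * cos_g
                                  + theta_hg**2) ** 1.5

    # Hot-spot factor, peaking when the view and Sun directions coincide
    G = np.sqrt(np.tan(theta_s)**2 + np.tan(theta_v)**2
                - 2.0 * np.tan(theta_s) * np.tan(theta_v) * np.cos(phi))
    hot_spot = 1.0 + (1.0 - rho0) / (1.0 + G)

    return rho0 * minnaert * f_hg * hot_spot
```

In a radiance-field setting one would let the network predict (rho0, k, theta_hg) per 3D sample and evaluate this function inside the volume renderer; that wiring is omitted here.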
Related papers
- Radiance Fields from Photons [18.15183252935672]
We introduce quanta radiance fields, a class of neural radiance fields that are trained at the granularity of individual photons using single-photon cameras (SPCs).
We demonstrate, both in simulation and on prototype SPC hardware, high-fidelity reconstructions under high-speed motion, in low light, and in extreme dynamic range settings.
arXiv Detail & Related papers (2024-07-12T16:06:51Z)
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose irradiance fields estimated from sparse LDR panoramic images, which increase the observation count available for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z)
- rpcPRF: Generalizable MPI Neural Radiance Field for Satellite Camera [0.76146285961466]
This paper presents rpcPRF, a Multiplane Images (MPI) based Planar neural Radiance Field for the Rational Polynomial Camera (RPC) model.
We propose to use reprojection supervision to induce the predicted MPI to learn the correct geometry between the 3D coordinates and the images.
We remove the stringent requirement of dense depth supervision imposed by deep multi-view-stereo-based methods by introducing radiance-field rendering techniques (a simplified reprojection-loss sketch follows this entry).
arXiv Detail & Related papers (2023-10-11T04:05:11Z)
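To make rpcPRF's reprojection supervision concrete: depth rendered from the predicted MPI lifts target pixels to 3D, the points are reprojected into a source view, and the photometric error of the warp supervises geometry. rpcPRF itself works with the RPC model; the pinhole intrinsics K, relative pose, and tensor shapes below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def reprojection_loss(depth_tgt, img_src, img_tgt, K, T_tgt2src):
    """Photometric reprojection supervision, sketched with a pinhole camera.

    depth_tgt : (H, W) depth rendered from the predicted MPI (target view)
    img_src   : (3, H, W) source image,  img_tgt : (3, H, W) target image
    K         : (3, 3) intrinsics,  T_tgt2src : (4, 4) target-to-source pose
    """
    H, W = depth_tgt.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    ones = torch.ones_like(xs)

    # Lift target pixels to 3D camera-space points using the rendered depth
    rays = torch.einsum("ij,jhw->ihw", K.inverse(),
                        torch.stack([xs, ys, ones]))   # (3, H, W)
    pts = (rays * depth_tgt).reshape(3, -1)

    # Transform into the source camera and project to pixel coordinates
    pts_h = torch.cat([pts, torch.ones(1, H * W)], dim=0)
    uv = K @ (T_tgt2src @ pts_h)[:3]
    uv = uv[:2] / uv[2:].clamp(min=1e-6)               # (2, H*W)

    # Warp the source image into the target view; only correct geometry
    # makes the warped and target images agree photometrically
    grid = torch.stack([uv[0].reshape(H, W) / (W - 1) * 2 - 1,
                        uv[1].reshape(H, W) / (H - 1) * 2 - 1], dim=-1)
    warped = F.grid_sample(img_src[None], grid[None], align_corners=True)
    return (warped[0] - img_tgt).abs().mean()
```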
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Field (NeRF) methods degrade in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms existing single-space NeRF methods at rendering high-quality scenes (see the sketch after this entry).
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
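As a rough illustration of MS-NeRF's parallel sub-spaces, the sketch below replaces a single radiance output with several sub-space colors blended by learned weights, so that, for example, a mirror's reflected "virtual" scene can live in its own sub-space. Layer sizes and the blending rule are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class MultiSpaceHead(nn.Module):
    """Illustrative multi-space output head in the spirit of MS-NeRF.

    Emits num_spaces parallel sub-space colors plus blending weights per
    sample instead of a single radiance value.
    """

    def __init__(self, feat_dim=128, num_spaces=8):
        super().__init__()
        self.num_spaces = num_spaces
        self.subspace_rgb = nn.Linear(feat_dim, num_spaces * 3)
        self.subspace_weight = nn.Linear(feat_dim, num_spaces)

    def forward(self, h):
        # h: (N, feat_dim) per-sample features from the NeRF backbone
        rgb = self.subspace_rgb(h).view(-1, self.num_spaces, 3)
        # Softmax weights decide how much each sub-space contributes here
        w = torch.softmax(self.subspace_weight(h), dim=-1)
        return torch.sigmoid((w.unsqueeze(-1) * rgb).sum(dim=1))  # (N, 3)
```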
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model and to leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space (see the sketch after this entry).
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
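CLONeR's occupancy-grid-aided sampling can be approximated as follows: query an occupancy grid (built, e.g., from LiDAR) at coarse depths along each ray, then concentrate fine samples on intervals the grid marks occupied. The grid interface, threshold, and resampling rule below are illustrative assumptions, not the paper's procedure.

```python
import torch

def occupancy_guided_depths(ray_o, ray_d, occ_grid, near, far,
                            n_coarse=64, n_fine=64, tau=0.5):
    """Concentrate ray samples where an occupancy grid reports occupancy.

    ray_o, ray_d : (3,) ray origin and unit direction
    occ_grid     : callable mapping (N, 3) points to occupancy in [0, 1],
                   e.g. trilinear lookup into a LiDAR-built grid
    """
    # Coarse, uniform depths along the ray
    t = torch.linspace(near, far, n_coarse)
    pts = ray_o + t[:, None] * ray_d                  # (n_coarse, 3)

    # Keep only intervals the grid considers occupied (plus a small floor
    # so that rays with no occupancy still get samples)
    occ = occ_grid(pts)                               # (n_coarse,)
    probs = (occ > tau).float() + 1e-3
    probs = probs / probs.sum()

    # Draw fine depths from occupied intervals, jittered within a bin
    idx = torch.multinomial(probs, n_fine, replacement=True)
    dt = (far - near) / (n_coarse - 1)
    t_fine = (t[idx] + (torch.rand(n_fine) - 0.5) * dt).clamp(near, far)

    # Sorted union of coarse and fine depths feeds the NeRF MLPs
    return torch.sort(torch.cat([t, t_fine])).values
```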
- VMRF: View Matching Neural Radiance Fields [57.93631771072756]
VMRF is an innovative view matching NeRF that enables effective NeRF training without requiring prior knowledge of camera poses or camera pose distributions.
VMRF introduces a view matching scheme, which exploits unbalanced optimal transport to produce a feature transport plan for mapping a rendered image with a randomly sampled camera pose to the corresponding real image.
With the feature transport plan as guidance, a novel pose calibration technique rectifies the initially randomized camera poses by predicting the relative pose between each pair of rendered and real images (see the sketch after this entry).
arXiv Detail & Related papers (2022-07-06T12:26:40Z)
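For intuition about VMRF's feature transport plan, the sketch below computes a balanced entropic (Sinkhorn) transport plan between rendered and real patch features; VMRF actually uses an unbalanced OT formulation, so this is a simplified stand-in with assumed inputs.

```python
import torch

def feature_transport_plan(feat_render, feat_real, eps=0.05, n_iters=50):
    """Entropic OT plan between rendered and real patch features.

    feat_render : (N, D) L2-normalized features of the rendered image
    feat_real   : (M, D) L2-normalized features of the real image
    """
    # Cost matrix: cosine distance between every rendered/real feature pair
    C = 1.0 - feat_render @ feat_real.T               # (N, M)
    N, M = C.shape

    mu = torch.full((N,), 1.0 / N)                    # uniform marginals
    nu = torch.full((M,), 1.0 / M)
    K = torch.exp(-C / eps)                           # Gibbs kernel

    # Sinkhorn iterations: alternately rescale rows and columns so the
    # plan's marginals match mu and nu
    u = torch.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)

    # High-mass entries mark rendered patches matching real ones; a plan
    # like this can then guide relative-pose prediction
    return u[:, None] * K * v[None, :]                # (N, M)
```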
- Sat-NeRF: Learning Multi-View Satellite Photogrammetry With Transient Objects and Shadow Modeling Using RPC Cameras [10.269997499911668]
We introduce the Satellite Neural Radiance Field (Sat-NeRF), a new end-to-end model for learning multi-view satellite photogrammetry in the wild.
Sat-NeRF combines some of the latest trends in neural rendering with native satellite camera models.
We evaluate Sat-NeRF using WorldView-3 images from different locations and stress the advantages of applying a bundle adjustment to the satellite camera models prior to training.
arXiv Detail & Related papers (2022-03-16T19:18:46Z)
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which have been demonstrated to synthesize realistic novel images for small scenes in controlled settings, with three extensions; each provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
- Vision-based Neural Scene Representations for Spacecraft [1.0323063834827415]
In advanced mission concepts, spacecraft need to internally model the pose and shape of nearby orbiting objects.
Recent works in neural scene representations show promising results for inferring generic three-dimensional scenes from optical images.
We compare and evaluate the potential of NeRF and GRAF to render novel views and extract the 3D shape of two different spacecraft.
arXiv Detail & Related papers (2021-05-11T08:35:05Z)