FusionRF: High-Fidelity Satellite Neural Radiance Fields from Multispectral and Panchromatic Acquisitions
- URL: http://arxiv.org/abs/2409.15132v1
- Date: Mon, 23 Sep 2024 15:38:03 GMT
- Title: FusionRF: High-Fidelity Satellite Neural Radiance Fields from Multispectral and Panchromatic Acquisitions
- Authors: Michael Sprintson, Rama Chellappa, Cheng Peng
- Abstract summary: We introduce FusionRF, a novel neural rendering terrain reconstruction method from optically unprocessed satellite imagery. We evaluate our method on multispectral and panchromatic satellite images from the WorldView-3 satellite in various locations.
- Score: 38.55830312265355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce FusionRF, a novel neural rendering terrain reconstruction method from optically unprocessed satellite imagery. While previous methods depend on external pansharpening methods to fuse low-resolution multispectral imagery and high-resolution panchromatic imagery, FusionRF directly performs reconstruction based on optically unprocessed acquisitions with no prior knowledge. This is accomplished through the addition of a sharpening kernel which models the resolution loss in multispectral images. Additionally, novel modal embeddings allow the model to perform image fusion as a bottleneck to novel view synthesis. We evaluate our method on multispectral and panchromatic satellite images from the WorldView-3 satellite in various locations; FusionRF outperforms previous state-of-the-art methods in depth reconstruction on unprocessed imagery, renders sharp training and novel views, and retains multispectral information.
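The abstract names two mechanisms: a learned sharpening kernel that models the multispectral (MS) sensor's resolution loss relative to the panchromatic (PAN) band, and per-modality embeddings that let one radiance field explain both sensor types. The PyTorch sketch below is a minimal illustration of how such components might be wired, under assumptions; it is not the paper's released code, and the names (SharpeningKernel, ModalEmbedding), kernel size, and embedding dimension are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharpeningKernel(nn.Module):
    """Learnable blur kernel modeling the resolution loss of the MS
    sensor relative to the PAN band (sketch; sizes are assumptions)."""

    def __init__(self, size: int = 9):
        super().__init__()
        self.size = size
        # Logits + softmax keep the kernel non-negative and normalized.
        self.logits = nn.Parameter(torch.zeros(size * size))

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # img: (B, C, H, W) rendered at full (PAN) resolution.
        c = img.shape[1]
        k = F.softmax(self.logits, dim=0).view(1, 1, self.size, self.size)
        k = k.repeat(c, 1, 1, 1)  # one shared kernel, applied per band
        return F.conv2d(img, k, padding=self.size // 2, groups=c)


class ModalEmbedding(nn.Module):
    """Learned per-modality codes (e.g. index 0 = MS, 1 = PAN) that
    condition the color branch so one field explains both sensors."""

    def __init__(self, n_modes: int = 2, dim: int = 16):
        super().__init__()
        self.codes = nn.Embedding(n_modes, dim)

    def forward(self, mode: torch.Tensor) -> torch.Tensor:
        return self.codes(mode)


def reconstruction_loss(rendered, target, is_ms, kernel):
    """PAN views are supervised directly; MS views are compared after
    the rendered image passes through the sharpening kernel (resampling
    to the coarser MS grid is omitted here for brevity)."""
    pred = kernel(rendered) if is_ms else rendered
    return F.mse_loss(pred, target)
```

The intuition of this design: because MS supervision is only ever compared through the blur kernel, any sharpness must live in the underlying field itself, which is what lets rendered views come out fused and sharp.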
Related papers
- RelitLRM: Generative Relightable Radiance for Large Reconstruction Models [52.672706620003765]
We propose RelitLRM for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations.
Unlike prior inverse rendering methods requiring dense captures and slow optimization, RelitLRM adopts a feed-forward transformer-based model.
We show our sparse-view feed-forward RelitLRM offers competitive relighting results to state-of-the-art dense-view optimization-based baselines.
arXiv Detail & Related papers (2024-10-08T17:40:01Z)
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- NePF: Neural Photon Field for Single-Stage Inverse Rendering [6.977356702921476]
We present a novel single-stage framework, Neural Photon Field (NePF), to address the ill-posed inverse rendering from multi-view images.
NePF achieves this unification by fully utilizing the physical implication behind the weight function of neural implicit surfaces.
We evaluate our method on both real and synthetic datasets.
arXiv Detail & Related papers (2023-11-20T06:15:46Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It builds on TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Enhancement of Novel View Synthesis Using Omnidirectional Image Completion [61.78187618370681]
We present a method for synthesizing novel views from a single 360-degree RGB-D image based on the neural radiance field (NeRF).
Experiments demonstrated that the proposed method can synthesize plausible novel views while preserving the features of the scene for both artificial and real-world data.
arXiv Detail & Related papers (2022-03-18T13:49:25Z)
- Interpretable Deep Multimodal Image Super-Resolution [23.48305854574444]
Multimodal image super-resolution (SR) is the reconstruction of a high resolution image given a low-resolution observation with the aid of another image modality.
We present a multimodal deep network design that integrates coupled sparse priors and allows the effective fusion of information from another modality into the reconstruction process.
arXiv Detail & Related papers (2020-09-07T14:08:35Z)
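To make the multimodal SR setting above concrete: a low-resolution observation in one modality is upsampled with the help of a high-resolution guide from another modality. The sketch below is a generic two-branch fusion network, not the coupled-sparse-prior design from that paper; the class name TwoBranchFusionSR, channel widths, and residual formulation are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoBranchFusionSR(nn.Module):
    """Generic multimodal SR sketch: upsample the low-resolution (LR)
    input, extract features from a high-resolution guide of another
    modality, fuse both, and predict a residual correction."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.lr_branch = nn.Conv2d(3, ch, 3, padding=1)     # LR image features
        self.guide_branch = nn.Conv2d(1, ch, 3, padding=1)  # guide features
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, lr: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        # Bicubic upsampling to the guide's spatial size, then residual fusion.
        up = F.interpolate(lr, size=guide.shape[-2:], mode="bicubic",
                           align_corners=False)
        feats = torch.cat([self.lr_branch(up), self.guide_branch(guide)], dim=1)
        return up + self.fuse(feats)
```

For example, TwoBranchFusionSR()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 256, 256)) yields a (1, 3, 256, 256) output: the LR image is lifted to the guide's resolution and corrected by fused features from both modalities.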