Sat-DN: Implicit Surface Reconstruction from Multi-View Satellite Images with Depth and Normal Supervision
- URL: http://arxiv.org/abs/2502.08352v1
- Date: Wed, 12 Feb 2025 12:27:32 GMT
- Title: Sat-DN: Implicit Surface Reconstruction from Multi-View Satellite Images with Depth and Normal Supervision
- Authors: Tianle Liu, Shuangming Zhao, Wanshou Jiang, Bingxuan Guo
- Abstract summary: High-resolution satellite imagery has become increasingly accessible, enabling rapid and location-independent ground model reconstruction.
Traditional stereo matching methods struggle to capture fine details, while neural radiance fields (NeRFs) achieve high-quality reconstructions at the cost of prohibitively long training times.
We propose Sat-DN, a novel framework leveraging a progressively trained multi-resolution hash grid reconstruction architecture.
- Score: 1.7999333451993949
- Abstract: With advancements in satellite imaging technology, acquiring high-resolution multi-view satellite imagery has become increasingly accessible, enabling rapid and location-independent ground model reconstruction. However, traditional stereo matching methods struggle to capture fine details, and while neural radiance fields (NeRFs) achieve high-quality reconstructions, their training time is prohibitively long. Moreover, challenges such as low visibility of building facades, illumination and style differences between pixels, and weakly textured regions in satellite imagery further make it hard to reconstruct reasonable terrain geometry and detailed building facades. To address these issues, we propose Sat-DN, a novel framework leveraging a progressively trained multi-resolution hash grid reconstruction architecture with explicit depth guidance and surface normal consistency constraints to enhance reconstruction quality. The multi-resolution hash grid accelerates training, while the progressive strategy incrementally increases the learning frequency, using coarse low-frequency geometry to guide the reconstruction of fine high-frequency details. The depth and normal constraints ensure a clear building outline and correct planar distribution. Extensive experiments on the DFC2019 dataset demonstrate that Sat-DN outperforms existing methods, achieving state-of-the-art results in both qualitative and quantitative evaluations. The code is available at https://github.com/costune/SatDN.
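The abstract names three ingredients: a multi-resolution hash grid, a progressive coarse-to-fine schedule over its levels, and explicit depth plus surface-normal supervision. Below is a minimal sketch of how such pieces are commonly wired together; the masking schedule, loss weights, and function names are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def progressive_level_mask(step, total_steps, n_levels=16, feats_per_level=2):
    """Assumed coarse-to-fine schedule: unlock hash-grid levels linearly over
    training so low-frequency geometry is fitted before high-frequency detail."""
    active = 1 + int((n_levels - 1) * min(step / total_steps, 1.0))
    mask = torch.zeros(n_levels * feats_per_level)
    mask[: active * feats_per_level] = 1.0
    return mask  # multiply element-wise with the hash-grid encoding output

def depth_loss(rendered_depth, prior_depth, valid):
    """Explicit depth guidance: L1 between rendered depth and a depth prior
    (e.g. from stereo matching) on valid pixels only."""
    return F.l1_loss(rendered_depth[valid], prior_depth[valid])

def normal_consistency_loss(rendered_normal, prior_normal, valid):
    """Surface-normal consistency: angular plus L1 deviation from a normal
    prior (this particular combination is illustrative)."""
    n_r = F.normalize(rendered_normal[valid], dim=-1)
    n_p = F.normalize(prior_normal[valid], dim=-1)
    cos = (n_r * n_p).sum(-1).clamp(-1.0, 1.0)
    return (1.0 - cos).mean() + (n_r - n_p).abs().sum(-1).mean()

# Illustrative total objective (the lambda weights are assumptions, not the paper's):
# loss = rgb_loss + 0.1 * depth_term + 0.05 * normal_term
```

In this sketch the mask would scale the hash-grid encoding before it enters the geometry MLP, so only the coarse levels contribute early in training and finer levels are phased in as optimization proceeds.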
Related papers
- $R^2$-Mesh: Reinforcement Learning Powered Mesh Reconstruction via Geometry and Appearance Refinement [5.810659946867557]
Mesh reconstruction based on Neural Radiance Fields (NeRF) is popular in a variety of applications such as computer graphics, virtual reality, and medical imaging.
We propose a novel algorithm that progressively generates and optimizes meshes from multi-view images.
Our method delivers highly competitive and robust performance in both mesh rendering quality and geometric quality.
arXiv Detail & Related papers (2024-08-19T16:33:17Z)
- VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [73.50359502037232]
VoxNeRF is a novel approach to enhance the quality and efficiency of neural indoor reconstruction and novel view synthesis.
We propose an efficient voxel-guided sampling technique that selectively allocates computational resources to the most relevant segments of rays (a minimal sampling sketch follows this entry).
Our approach is validated with extensive experiments on ScanNet and ScanNet++.
arXiv Detail & Related papers (2023-11-09T11:32:49Z)
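The VoxNeRF summary above mentions voxel-guided sampling that concentrates computation on the relevant segments of a ray, but does not spell out the mechanism. The snippet below is a generic, hedged sketch of that idea under assumed data structures (a cubic occupancy grid indexed from the origin); it is not the authors' implementation.

```python
import torch

def voxel_guided_samples(ray_o, ray_d, occupancy, voxel_size,
                         n_coarse=128, n_keep=32, near=0.1, far=8.0):
    """Generic voxel-guided sampling sketch (assumed interface): probe the ray
    coarsely, keep only t-values whose voxel is marked occupied, and densify
    there, so compute goes to the most relevant ray segments."""
    t = torch.linspace(near, far, n_coarse, device=ray_o.device)   # coarse probes
    pts = ray_o[None, :] + t[:, None] * ray_d[None, :]             # (n_coarse, 3)
    idx = (pts / voxel_size).floor().long()                        # voxel indices
    idx = idx.clamp(min=0, max=occupancy.shape[0] - 1)             # assume cubic grid
    occ = occupancy[idx[:, 0], idx[:, 1], idx[:, 2]]               # occupancy lookup
    t_occ = t[occ > 0]
    if t_occ.numel() == 0:                                         # empty ray: fall back
        return t[:n_keep]
    return torch.linspace(t_occ.min().item(), t_occ.max().item(),  # densify occupied span
                          n_keep, device=ray_o.device)
```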
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the above limitations.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction [142.61256012419562]
We present Voxurf, a voxel-based surface reconstruction approach that is both efficient and accurate.
Voxurf addresses the aforementioned issues via several key designs, including 1) a two-stage training procedure that attains a coherent coarse shape and recovers fine details successively, 2) a dual color network that maintains color-geometry dependency, and 3) a hierarchical geometry feature to encourage information propagation across voxels.
arXiv Detail & Related papers (2022-08-26T14:48:02Z)
- Critical Regularizations for Neural Surface Reconstruction in the Wild [26.460011241432092]
We present RegSDF, which shows that proper point cloud supervisions and geometry regularizations are sufficient to produce high-quality and robust reconstruction results.
RegSDF is able to reconstruct surfaces with fine details even for open scenes with complex topologies and unstructured camera trajectories.
arXiv Detail & Related papers (2022-06-07T08:11:22Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require a foreground mask as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without mask supervision (a minimal sketch of this weighting follows this entry).
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
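The NeuS entry above refers to a rendering weight that is unbiased to first order and needs no mask supervision. The sketch below follows the discrete opacity NeuS derives from consecutive SDF samples, $\alpha_i = \max\big((\Phi_s(f(p(t_i))) - \Phi_s(f(p(t_{i+1}))))/\Phi_s(f(p(t_i))),\,0\big)$; the variable names and epsilon guards here are mine.

```python
import torch

def neus_alpha(sdf, s):
    """NeuS-style discrete opacity from consecutive SDF samples along a ray.
    sdf: (n,) signed distances at sorted sample points; s: sharpness of the
    logistic CDF Phi_s(x) = sigmoid(s * x) (learnable in the paper)."""
    cdf = torch.sigmoid(s * sdf)
    alpha = ((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-6)).clamp(min=0.0)
    return alpha  # (n - 1,)

def render_weights(alpha):
    """Standard occlusion-aware accumulation: w_i = T_i * alpha_i with
    T_i = prod_{j<i} (1 - alpha_j)."""
    shifted = torch.cat([torch.ones(1, device=alpha.device), 1.0 - alpha + 1e-7])[:-1]
    return torch.cumprod(shifted, dim=0) * alpha
```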
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
- Super-Resolving Beyond Satellite Hardware Using Realistically Degraded Images [0.23090185577016442]
We test the feasibility of using deep SR in real remote sensing payloads by assessing SR performance in reconstructing realistically degraded satellite images.
We demonstrate that a state-of-the-art SR technique called Enhanced Deep Super-Resolution Network (EDSR) can recover encoded pixel data on images with poor ground sampling distance.
arXiv Detail & Related papers (2021-03-10T00:20:33Z)
- 3D Surface Reconstruction From Multi-Date Satellite Images [11.84274417463238]
We propose an extension of a Structure from Motion (SfM)-based pipeline that allows us to reconstruct point clouds from multiple satellite images.
We provide a detailed description of several steps that are mandatory to exploit state-of-the-art mesh reconstruction algorithms in the context of satellite imagery.
We show that the proposed pipeline combined with current meshing algorithms outperforms state-of-the-art point cloud reconstruction algorithms in terms of completeness and median error.
arXiv Detail & Related papers (2021-02-04T09:23:21Z)
- Fusion of Deep and Non-Deep Methods for Fast Super-Resolution of Satellite Images [54.44842669325082]
This work proposes to bridge the gap between image quality and price by improving image quality via super-resolution (SR).
We design an SR framework that analyzes the regional information content of each patch of the low-resolution image (a patch-routing sketch follows this entry).
We show a substantial decrease in inference time while achieving performance similar to that of existing deep SR methods.
arXiv Detail & Related papers (2020-08-03T13:55:39Z)
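The last entry describes routing each low-resolution patch either through a deep SR network or a cheap upscaler depending on its regional information content. The abstract does not name the routing criterion; the sketch below uses local variance as an assumed proxy, with a hypothetical `deep_sr` callable standing in for the learned model.

```python
import torch
import torch.nn.functional as F

def fused_super_resolve(lr_image, deep_sr, scale=2, patch=32, var_threshold=1e-3):
    """Hybrid SR sketch: send detail-rich patches through a deep SR model and
    upscale low-information patches with cheap bicubic interpolation.
    lr_image: (1, C, H, W); deep_sr: callable mapping a patch to a scale-x patch.
    The variance threshold is an assumed proxy for regional information content."""
    _, c, h, w = lr_image.shape
    out = torch.zeros(1, c, h * scale, w * scale,
                      device=lr_image.device, dtype=lr_image.dtype)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = lr_image[:, :, y:y + patch, x:x + patch]
            if tile.var() > var_threshold:            # detail-rich: deep SR
                up = deep_sr(tile)
            else:                                     # flat region: bicubic is enough
                up = F.interpolate(tile, scale_factor=scale,
                                   mode="bicubic", align_corners=False)
            out[:, :, y * scale:(y + tile.shape[-2]) * scale,
                      x * scale:(x + tile.shape[-1]) * scale] = up
    return out
```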