Photometric Multi-View Mesh Refinement for High-Resolution Satellite
Images
- URL: http://arxiv.org/abs/2005.04777v2
- Date: Tue, 12 May 2020 20:26:34 GMT
- Title: Photometric Multi-View Mesh Refinement for High-Resolution Satellite
Images
- Authors: Mathias Rothermel, Ke Gong, Dieter Fritsch, Konrad Schindler, Norbert
Haala
- Abstract summary: State-of-the-art reconstruction methods typically generate 2.5D elevation data.
We present an approach to recover full 3D surface meshes from multi-view satellite imagery.
- Score: 24.245977127434212
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Modern high-resolution satellite sensors collect optical imagery with ground
sampling distances (GSDs) of 30-50 cm, which has sparked a renewed interest in
photogrammetric 3D surface reconstruction from satellite data. State-of-the-art
reconstruction methods typically generate 2.5D elevation data. Here, we present
an approach to recover full 3D surface meshes from multi-view satellite
imagery. The proposed method takes as input a coarse initial mesh and refines
it by iteratively updating all vertex positions to maximize the
photo-consistency between images. Photo-consistency is measured in image space,
by transferring texture from one image to another via the surface. We derive
the equations to propagate changes in texture similarity through the rational
function model (RFM), often also referred to as the rational polynomial
coefficient (RPC) model. Furthermore, we devise a hierarchical scheme to optimize the
surface with gradient descent. In experiments with two different datasets, we
show that the refinement improves the initial digital elevation models (DEMs)
generated with conventional dense image matching. Moreover, we demonstrate that
our method is able to reconstruct true 3D geometry, such as facade structures,
if off-nadir views are available.
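The rational function model mentioned in the abstract maps object-space coordinates to image coordinates as ratios of cubic polynomials in normalized latitude, longitude, and height; refinement then backpropagates photo-consistency gradients through this projection. Below is a minimal sketch of forward RPC projection. The monomial ordering and the layout of the `rpc` dictionary are illustrative assumptions, not the authors' implementation (real RPC files fix a specific 20-term order and store offsets/scales alongside the coefficients):

```python
import numpy as np

def rpc_monomials(P, L, H):
    """20 cubic monomials in the normalized coordinates P (lat), L (lon), H
    (height). The ordering here is illustrative; RPC files define a fixed one."""
    return np.array([
        1.0, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H,
        P * L * H, L ** 3, L * P * P, L * H * H, L * L * P, P ** 3,
        P * H * H, L * L * H, P * P * H, H ** 3,
    ])

def rpc_project(lat, lon, h, rpc):
    """Project a geodetic point to image coordinates (row, col).

    `rpc` holds the ten offset/scale constants and four 20-vectors of
    polynomial coefficients (line/sample numerators and denominators)."""
    # Normalize object-space coordinates to roughly [-1, 1].
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    H = (h - rpc["h_off"]) / rpc["h_scale"]
    m = rpc_monomials(P, L, H)
    # Image coordinates as ratios of cubic polynomials.
    r = (rpc["line_num"] @ m) / (rpc["line_den"] @ m)
    c = (rpc["samp_num"] @ m) / (rpc["samp_den"] @ m)
    # De-normalize to pixel units.
    return (r * rpc["line_scale"] + rpc["line_off"],
            c * rpc["samp_scale"] + rpc["samp_off"])
```

Because `rpc_project` is a smooth rational function of (lat, lon, h), its Jacobian with respect to the 3D point is well defined, which is what allows image-space photo-consistency gradients to be propagated back to mesh vertex positions.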
Related papers
- GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement [51.97726804507328]
We propose a novel approach for 3D mesh reconstruction from multi-view images.
Our method takes inspiration from large reconstruction models that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images.
arXiv Detail & Related papers (2024-06-09T05:19:24Z)
- GeoGS3D: Single-view 3D Reconstruction via Geometric-aware Diffusion Model and Gaussian Splatting [81.03553265684184]
We introduce GeoGS3D, a framework for reconstructing detailed 3D objects from single-view images.
We propose a novel metric, Gaussian Divergence Significance (GDS), to prune unnecessary operations during optimization.
Experiments demonstrate that GeoGS3D generates images with high consistency across views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z)
- NeuSD: Surface Completion with Multi-View Text-to-Image Diffusion [56.98287481620215]
We present a novel method for 3D surface reconstruction from multiple images where only a part of the object of interest is captured.
Our approach builds on two recent developments: surface reconstruction using neural radiance fields for the reconstruction of the visible parts of the surface, and guidance of pre-trained 2D diffusion models in the form of Score Distillation Sampling (SDS) to complete the shape in unobserved regions in a plausible manner.
arXiv Detail & Related papers (2023-12-07T19:30:55Z)
- Wonder3D: Single Image to 3D using Cross-Domain Diffusion [105.16622018766236]
Wonder3D is a novel method for efficiently generating high-fidelity textured meshes from single-view images.
To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model.
arXiv Detail & Related papers (2023-10-23T15:02:23Z)
- TerrainMesh: Metric-Semantic Terrain Reconstruction from Aerial Images Using Joint 2D-3D Learning [20.81202315793742]
This paper develops a joint 2D-3D learning approach to reconstruct a local metric-semantic mesh at each camera maintained by a visual odometry algorithm.
The mesh can be assembled into a global environment model to capture the terrain topology and semantics during online operation.
arXiv Detail & Related papers (2022-04-23T05:18:39Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- 3D Surface Reconstruction From Multi-Date Satellite Images [11.84274417463238]
We propose an extension of a Structure from Motion (SfM) based pipeline that allows us to reconstruct point clouds from multiple satellite images.
We provide a detailed description of several steps that are mandatory to exploit state-of-the-art mesh reconstruction algorithms in the context of satellite imagery.
We show that the proposed pipeline combined with current meshing algorithms outperforms state-of-the-art point cloud reconstruction algorithms in terms of completeness and median error.
arXiv Detail & Related papers (2021-02-04T09:23:21Z)
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)
- Leveraging Photogrammetric Mesh Models for Aerial-Ground Feature Point Matching Toward Integrated 3D Reconstruction [19.551088857830944]
Integrating aerial and ground images has proved an efficient way to enhance surface reconstruction in urban environments, although matching feature points across the two very different viewpoints is difficult.
Previous studies based on geometry-aware image rectification have alleviated this problem.
We propose a novel approach: leveraging photogrammetric mesh models for aerial-ground image matching.
arXiv Detail & Related papers (2020-02-21T01:47:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.