Feature matching for multi-epoch historical aerial images
- URL: http://arxiv.org/abs/2112.04255v1
- Date: Wed, 8 Dec 2021 12:28:24 GMT
- Title: Feature matching for multi-epoch historical aerial images
- Authors: Lulin Zhang, Ewelina Rupnik, Marc Pierrot-Deseilligny
- Abstract summary: We present a fully automatic approach to detecting feature correspondences between historical images taken at different times.
Compared to the state-of-the-art, our method improves the image georeferencing accuracy by a factor of 2.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Historical imagery is characterized by high spatial resolution and
stereoscopic acquisitions, providing a valuable resource for recovering 3D
land-cover information. Accurate geo-referencing of diachronic historical
images by means of self-calibration remains a bottleneck because of the
difficulty of finding a sufficient number of feature correspondences under evolving
landscapes. In this research, we present a fully automatic approach to
detecting feature correspondences between historical images taken at different
times (i.e., inter-epoch), without requiring auxiliary data. Based on relative
orientations computed within the same epoch (i.e., intra-epoch), we obtain DSMs
(Digital Surface Models) and incorporate them in a rough-to-precise matching.
The method consists of: (1) an inter-epoch DSM matching to roughly co-register
the orientations and DSMs (i.e., the 3D Helmert transformation), followed by (2)
a precise inter-epoch feature matching using the original RGB images. The
innate ambiguity of the latter is largely alleviated by narrowing down the
search space using the co-registered data. With the inter-epoch features, we
refine the image orientations and quantitatively evaluate the results (1) with
DoD (Difference of DSMs), (2) with ground check points, and (3) by quantifying
ground displacement due to an earthquake. We demonstrate that our method: (1)
can automatically georeference diachronic historical images; (2) can
effectively mitigate systematic errors induced by poorly estimated camera
parameters; (3) is robust to drastic scene changes. Compared to the
state-of-the-art, our method improves the image georeferencing accuracy by a
factor of 2. The proposed methods are implemented in MicMac, a free,
open-source photogrammetric software suite.
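
The core of the rough co-registration step is the 3D Helmert (seven-parameter similarity) transformation that maps one epoch's orientations and DSM into the frame of the other. As an illustration of how such a transformation can be recovered in closed form from matched 3D points, here is a minimal Python sketch using the Umeyama/Kabsch SVD solution; it is not the MicMac implementation, and the function name and noise-free toy data are purely illustrative.

```python
import numpy as np

def estimate_helmert_3d(src, dst):
    """Least-squares 7-parameter 3D similarity (Umeyama/Kabsch, via SVD).

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. matched DSM
    points from two epochs. Returns (s, R, t) with dst_i ~ s * R @ src_i + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - mu_s, dst - mu_d                 # centred point sets
    H = a.T @ b / len(src)                        # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                            # optimal rotation
    var_src = (a ** 2).sum() / len(src)           # variance of source points
    s = (S * np.diag(D)).sum() / var_src          # optimal scale
    t = mu_d - s * R @ mu_s                       # optimal translation
    return s, R, t

# Toy check: recover a known transformation from synthetic correspondences.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
dst = 1.2 * src @ R_true.T + np.array([10.0, -5.0, 2.0])
s, R, t = estimate_helmert_3d(src, dst)
assert np.isclose(s, 1.2) and np.allclose(R, R_true) and np.allclose(t, [10.0, -5.0, 2.0])
```

Real DSM matches contain outliers, so in practice a closed-form estimator like this would be wrapped in a robust scheme such as RANSAC. Once the two epochs are co-registered with the recovered (s, R, t), the DoD used for evaluation reduces to a per-cell difference of the aligned DSM rasters.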
Related papers
- Hierarchical Temporal Context Learning for Camera-based Semantic Scene Completion [57.232688209606515]
We present HTCL, a novel Hierarchical Temporal Context Learning paradigm for improving camera-based semantic scene completion.
Our method ranks 1st on the SemanticKITTI benchmark and even surpasses LiDAR-based methods in terms of mIoU.
arXiv Detail & Related papers (2024-07-02T09:11:17Z)
- View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z)
- LFM-3D: Learnable Feature Matching Across Wide Baselines Using 3D Signals [9.201550006194994]
Learnable matchers often underperform when there are only small regions of co-visibility between image pairs.
We propose LFM-3D, a Learnable Feature Matching framework that uses models based on graph neural networks.
We show that the resulting improved correspondences lead to much higher relative posing accuracy for in-the-wild image pairs.
arXiv Detail & Related papers (2023-03-22T17:46:27Z)
- Improving Feature-based Visual Localization by Geometry-Aided Matching [21.1967752160412]
We introduce a novel 2D-3D matching method, Geometry-Aided Matching (GAM), which uses both appearance information and geometric context to improve 2D-3D feature matching.
GAM can greatly strengthen the recall of 2D-3D matches while maintaining high precision.
Our proposed localization method achieves state-of-the-art results on multiple visual localization datasets.
arXiv Detail & Related papers (2022-11-16T07:02:12Z)
- Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label Diffusion [51.11295961195151]
We exploit the characteristics of foggy image sequences of driving scenes to densify the confident pseudo labels.
Based on the two discoveries of local spatial similarity and adjacent temporal correspondence of the sequential image data, we propose a novel Target-Domain driven pseudo label Diffusion scheme.
Our scheme helps the adaptive model achieve 51.92% and 53.84% mean intersection-over-union (mIoU) on two publicly available natural foggy datasets.
arXiv Detail & Related papers (2022-06-10T05:16:50Z)
- AI-supported Framework of Semi-Automatic Monoplotting for Monocular Oblique Visual Data Analysis [0.0]
We propose and demonstrate a novel semi-automatic monoplotting framework that provides pixel-level correspondence between photos and Digital Elevation Models (DEMs).
A pipeline of analyses was developed, including key point detection in images and DEMs, retrieval of georeferenced 3D DEMs, regularized pose estimation, gradient-based optimization, and the identification of correspondences between image pixels and real-world coordinates.
arXiv Detail & Related papers (2021-11-28T02:03:43Z)
- Soft Expectation and Deep Maximization for Image Feature Detection [68.8204255655161]
We propose SEDM, an iterative semi-supervised learning process that flips the question and first looks for repeatable 3D points, then trains a detector to localize them in image space.
Our results show that this new model trained using SEDM is able to better localize the underlying 3D points in a scene.
arXiv Detail & Related papers (2021-04-21T00:35:32Z)
- Lidar-Monocular Surface Reconstruction Using Line Segments [5.542669744873386]
We propose to leverage common geometric features that are detected in both the LIDAR scans and image data, allowing data from the two sensors to be processed in a higher-level space.
We show that our method delivers results that are comparable to a state-of-the-art LIDAR survey while not requiring highly accurate ground truth pose estimates.
arXiv Detail & Related papers (2021-04-06T19:49:53Z)
- Refer-it-in-RGBD: A Bottom-up Approach for 3D Visual Grounding in RGBD Images [69.5662419067878]
Grounding referring expressions in RGBD images is an emerging field.
We present a novel task of 3D visual grounding in single-view RGBD image where the referred objects are often only partially scanned due to occlusion.
Our approach first fuses the language and the visual features at the bottom level to generate a heatmap that localizes the relevant regions in the RGBD image.
Then our approach conducts an adaptive feature learning based on the heatmap and performs the object-level matching with another visio-linguistic fusion to finally ground the referred object.
arXiv Detail & Related papers (2021-03-14T11:18:50Z)
- High-Order Information Matters: Learning Relation and Topology for Occluded Person Re-Identification [84.43394420267794]
We propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms the state of the art by 6.5% mAP on the Occluded-Duke dataset.
arXiv Detail & Related papers (2020-03-18T12:18:35Z)