A Two-Stream Symmetric Network with Bidirectional Ensemble for Aerial Image Matching
- URL: http://arxiv.org/abs/2002.01325v1
- Date: Tue, 4 Feb 2020 14:38:18 GMT
- Title: A Two-Stream Symmetric Network with Bidirectional Ensemble for Aerial Image Matching
- Authors: Jae-Hyun Park, Woo-Jeoung Nam, Seong-Whan Lee
- Abstract summary: We propose a novel method to precisely match two aerial images that were obtained in different environments via a two-stream deep network.
By internally augmenting the target image, the network processes the two streams over three input images and reflects the additional augmented pair during training.
- Score: 24.089374888914143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel method to precisely match two aerial images
that were obtained in different environments via a two-stream deep network. By
internally augmenting the target image, the network processes the two streams
over three input images and reflects the additional augmented pair during
training. As a result, the training process of the deep network is regularized
and the network becomes robust to the variance of aerial images. Furthermore,
we introduce an ensemble method based on the bidirectional network, motivated
by the isomorphic nature of the geometric transformation. We obtain two global
transformation parameters without any additional network or parameters; fusing
the two outcomes alleviates asymmetric matching results and yields a
significant improvement in performance. For the experiments, we adopt aerial
images from Google Earth and the International Society for Photogrammetry and
Remote Sensing (ISPRS). To quantitatively assess our results, we apply the
probability of correct keypoints (PCK) metric, which measures the degree of
matching. The qualitative and quantitative results show a sizable performance
gap compared to conventional methods for matching aerial images. All code, our
trained model, and the dataset are available online.
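To make the bidirectional ensemble and the PCK evaluation above concrete, the sketch below assumes the network predicts a global affine transformation (a 2x3 matrix) in each direction; the invert-and-average fusion rule, the function names, and the PCK threshold definition are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def to_homogeneous(theta):
    """Lift a 2x3 affine parameterization to a 3x3 homogeneous matrix."""
    return np.vstack([theta, [0.0, 0.0, 1.0]])

def invert_affine(theta):
    """Invert a 2x3 affine transform via its homogeneous form."""
    return np.linalg.inv(to_homogeneous(theta))[:2, :]

def bidirectional_fuse(theta_st, theta_ts):
    """Fuse the source->target estimate with the inverted target->source
    estimate. Plain averaging is an assumption for illustration; the paper
    fuses the two outcomes but may use a different rule."""
    return 0.5 * (theta_st + invert_affine(theta_ts))

def pck(pred_pts, gt_pts, img_size, alpha=0.05):
    """Probability of correct keypoints: the fraction of predicted keypoints
    falling within alpha * max(H, W) of the ground truth (a common PCK
    variant; the paper's exact threshold definition may differ)."""
    thresh = alpha * max(img_size)
    dist = np.linalg.norm(pred_pts - gt_pts, axis=1)
    return float(np.mean(dist <= thresh))

# Hypothetical example: the two directions give slightly inconsistent
# transforms; fusing them yields a single, more symmetric estimate.
theta_st = np.array([[1.02, 0.01, 5.0],
                     [-0.01, 0.98, -3.0]])   # source -> target
theta_ts = np.array([[0.97, -0.01, -4.8],
                     [0.01, 1.01, 3.1]])     # target -> source
print(bidirectional_fuse(theta_st, theta_ts))
```

The same fusion applies unchanged to any other invertible global parameterization the network might output.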
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- A Model-data-driven Network Embedding Multidimensional Features for Tomographic SAR Imaging [5.489791364472879]
We propose a new model-data-driven network to achieve tomoSAR imaging based on multi-dimensional features.
We add two 2D processing modules, both convolutional encoder-decoder structures, to enhance multi-dimensional features of the imaging scene effectively.
Compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our proposed method achieves better completeness while maintaining decent imaging accuracy.
arXiv Detail & Related papers (2022-11-28T02:01:43Z) - Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z) - Precise Aerial Image Matching based on Deep Homography Estimation [21.948001630564363]
We propose a deep homography alignment network to precisely match two aerial images.
The proposed network makes it possible to train the matching network with a higher degree of freedom.
We introduce a method that effectively trains the otherwise difficult-to-learn homography estimation network.
arXiv Detail & Related papers (2021-07-19T11:52:52Z) - DFM: A Performance Baseline for Deep Feature Matching [10.014010310188821]
The proposed method uses a pre-trained VGG architecture as a feature extractor and does not require any additional matching-specific training; a minimal sketch of this training-free matching idea is given after this list.
Our algorithm achieves overall scores of 0.57 and 0.80 in terms of Mean Matching Accuracy (MMA) for 1-pixel and 2-pixel thresholds, respectively, on the HPatches dataset.
arXiv Detail & Related papers (2021-06-14T22:55:06Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian
Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-10-19T04:12:36Z)
- Rotation Invariant Aerial Image Retrieval with Group Convolutional Metric Learning [21.89786914625517]
We introduce a novel method for retrieving aerial images by merging group convolution with an attention mechanism and metric learning.
Results show that the proposed method outperforms other state-of-the-art retrieval methods in both rotated and original environments.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2019-10-25T16:00:44Z)
- An End-to-End Network for Co-Saliency Detection in One Single Image [47.35448093528382]
Co-saliency detection within a single image is a common vision problem that has not yet been well addressed.
This study proposes a novel end-to-end trainable network comprising a backbone net and two branch nets.
We construct a new dataset of 2,019 natural images with co-saliency in each image to evaluate the proposed method.
arXiv Detail & Related papers (2019-10-25T16:00:44Z)
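As referenced in the DFM entry above, the following is a minimal sketch of training-free deep feature matching with a pre-trained VGG backbone and mutual nearest-neighbour filtering; the layer cut-off, single-scale matching, and function names are assumptions for illustration, not the DFM pipeline itself.

```python
# Training-free matching sketch: pre-trained VGG features + mutual
# nearest-neighbour filtering (torchvision >= 0.13 "weights" API assumed).
import torch
import torchvision

# VGG-16 truncated after relu4_3, used as a dense feature extractor.
vgg = torchvision.models.vgg16(weights="DEFAULT").features[:23].eval()

@torch.no_grad()
def dense_features(img):
    """img: (1, 3, H, W), ImageNet-normalized. Returns (N, C) descriptors,
    one per spatial cell of the feature map (stride 8 at this layer)."""
    f = vgg(img)                                    # (1, C, H/8, W/8)
    f = torch.nn.functional.normalize(f, dim=1)     # unit-length channels
    return f.flatten(2).squeeze(0).T                # (N, C)

@torch.no_grad()
def mutual_nn_matches(feat_a, feat_b):
    """Keep only cell pairs that are each other's nearest neighbours."""
    sim = feat_a @ feat_b.T                         # cosine similarities
    ab = sim.argmax(dim=1)                          # best B cell for each A cell
    ba = sim.argmax(dim=0)                          # best A cell for each B cell
    idx_a = torch.arange(feat_a.shape[0])
    keep = ba[ab] == idx_a                          # mutual consistency check
    return idx_a[keep], ab[keep]

# Usage with two normalized image tensors img_a, img_b of shape (1, 3, H, W):
#   ia, ib = mutual_nn_matches(dense_features(img_a), dense_features(img_b))
```

Matched cell indices can then be mapped back to pixel coordinates (here via the stride-8 grid) to obtain correspondences without any task-specific training.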