Domain-invariant Similarity Activation Map Contrastive Learning for
Retrieval-based Long-term Visual Localization
- URL: http://arxiv.org/abs/2009.07719v4
- Date: Thu, 14 Oct 2021 17:27:32 GMT
- Title: Domain-invariant Similarity Activation Map Contrastive Learning for
Retrieval-based Long-term Visual Localization
- Authors: Hanjiang Hu, Hesheng Wang, Zhe Liu and Weidong Chen
- Abstract summary: In this work, a general architecture is first formulated probabilistically to extract domain-invariant features through multi-domain image translation.
A novel gradient-weighted similarity activation mapping (Grad-SAM) loss is then incorporated for finer, high-accuracy localization.
Extensive experiments validate the effectiveness of the proposed approach on the CMU-Seasons dataset.
Our method is on par with, or even outperforms, state-of-the-art image-based localization baselines at medium and high precision.
- Score: 30.203072945001136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual localization is a crucial component in mobile robot and
autonomous driving applications. Image retrieval is an efficient and effective
technique in image-based localization methods. Due to the drastic variability of
environmental conditions, e.g. illumination, seasonal and weather changes,
retrieval-based visual localization is severely affected and becomes a
challenging problem. In this work, a general architecture is first formulated
probabilistically to extract domain-invariant features through multi-domain
image translation. A novel gradient-weighted similarity activation mapping
(Grad-SAM) loss is then incorporated for finer localization with high accuracy.
We also propose a new adaptive triplet loss to boost the contrastive learning of
the embedding in a self-supervised manner. The final coarse-to-fine image
retrieval pipeline is implemented as the sequential combination of models
trained without and with the Grad-SAM loss. Extensive experiments have been
conducted to validate the effectiveness of the proposed approach on the
CMU-Seasons dataset. The strong generalization ability of our approach is
verified on the RobotCar dataset using models pre-trained on the urban part of
the CMU-Seasons dataset. Our method is on par with, or even outperforms,
state-of-the-art image-based localization baselines at medium and high
precision, especially in challenging environments with illumination variation,
vegetation and night-time images. The code and pretrained models are available
at https://github.com/HanjiangHu/DISAM.
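To make the loss design concrete, below is a minimal PyTorch-style sketch of a Grad-CAM-style similarity activation map and an adaptive-margin triplet loss. The function names, the global-average descriptor pooling, and the margin-adaptation rule are illustrative assumptions, not the authors' implementation; the repository above contains the actual code.

```python
import torch
import torch.nn.functional as F

def grad_sam_map(feat_q, feat_r):
    """Grad-CAM-style similarity activation map (illustrative sketch).

    feat_q, feat_r: (C, H, W) feature maps of the query and reference images.
    The cosine similarity of the globally pooled descriptors is differentiated
    with respect to the query feature map; channel-wise gradient weights then
    yield a spatial map highlighting regions that drive the similarity.
    """
    feat_q = feat_q.clone().requires_grad_(True)
    desc_q = F.normalize(feat_q.mean(dim=(1, 2)), dim=0)   # pooled query descriptor
    desc_r = F.normalize(feat_r.mean(dim=(1, 2)), dim=0)   # pooled reference descriptor
    sim = torch.dot(desc_q, desc_r)                        # similarity score
    grads, = torch.autograd.grad(sim, feat_q, create_graph=True)
    weights = grads.mean(dim=(1, 2))                       # channel weights
    sam = F.relu((weights[:, None, None] * feat_q).sum(dim=0))
    return sam / (sam.max() + 1e-8), sim

def adaptive_triplet_loss(sim_pos, sim_neg, base_margin=0.1):
    """Triplet loss whose margin grows with the hardness of the negative
    (the adaptation rule here is an assumption, not the paper's exact formula)."""
    margin = base_margin * (1.0 + sim_neg.detach().clamp(min=0.0))
    return F.relu(sim_neg - sim_pos + margin)
```

In the coarse-to-fine pipeline described in the abstract, a model trained without the Grad-SAM loss would produce the coarse candidate list, and the Grad-SAM-trained model would then re-rank the top candidates.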
Related papers
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Learning Image Deraining Transformer Network with Dynamic Dual Self-Attention [46.11162082219387]
This paper proposes an effective image deraining Transformer with dynamic dual self-attention (DDSA).
Specifically, only the most useful similarity values are selected via a top-k approximation to achieve sparse attention.
In addition, a novel spatial-enhanced feed-forward network (SEFN) is developed to obtain a more accurate representation for high-quality derained results.
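As an aside, the top-k sparsification described in this summary can be sketched roughly as follows; this is an assumed, generic implementation of top-k sparse attention, not the paper's DDSA module.

```python
import torch

def topk_sparse_attention(q, k, v, top_k=8):
    """Keep only the top-k similarity values per query before the softmax
    (illustrative sketch; top_k must not exceed the number of tokens).

    q, k, v: (N, D) token embeddings.
    """
    scores = q @ k.t() / q.shape[-1] ** 0.5            # (N, N) similarities
    kth = scores.topk(top_k, dim=-1).values[..., -1:]  # k-th largest per row
    scores = scores.masked_fill(scores < kth, float("-inf"))  # drop weak links
    attn = scores.softmax(dim=-1)
    return attn @ v
```

Masking the scores below the k-th largest value before the softmax zeroes out the contribution of weak, likely rain-corrupted correlations.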
arXiv Detail & Related papers (2023-08-15T13:59:47Z)
- Improving Transformer-based Image Matching by Cascaded Capturing Spatially Informative Keypoints [44.90917854990362]
We propose a transformer-based cascade matching model, the Cascade feature Matching TRansformer (CasMTR).
We use a simple yet effective Non-Maximum Suppression (NMS) post-process to filter keypoints through the confidence map.
CasMTR achieves state-of-the-art performance in indoor and outdoor pose estimation as well as visual localization.
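The confidence-map NMS mentioned above can be illustrated with a simple window-based maximum filter; the window size and threshold below are arbitrary assumptions rather than CasMTR's settings.

```python
import torch
import torch.nn.functional as F

def confidence_map_nms(conf, window=5, threshold=0.5):
    """Window-based NMS on a keypoint confidence map (illustrative sketch).

    conf: (H, W) confidence map; a pixel survives if it is the local maximum
    of its window and its confidence exceeds the threshold.
    """
    pooled = F.max_pool2d(conf[None, None], window, stride=1,
                          padding=window // 2)[0, 0]
    keep = (conf == pooled) & (conf > threshold)
    ys, xs = torch.nonzero(keep, as_tuple=True)
    return torch.stack([xs, ys], dim=-1), conf[ys, xs]  # (K, 2) xy coords, scores
```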
arXiv Detail & Related papers (2023-03-06T04:32:34Z)
- Towards Scale Consistent Monocular Visual Odometry by Learning from the Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
arXiv Detail & Related papers (2022-03-11T01:51:54Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
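The stated insight translates naturally into a consistency penalty between two pose estimates of the same query; the sketch below assumes 4x4 homogeneous pose matrices and an illustrative Frobenius-style penalty, not the authors' exact loss.

```python
import torch

def transform_consistency_loss(T_q_r1, T_r1_map, T_q_r2, T_r2_map):
    """Consistency between two estimates of the query's absolute pose
    (illustrative; poses are 4x4 homogeneous matrices, optionally batched).

    Registering the query against reference 1 or reference 2 should give the
    same query-to-map transform, so their disagreement is penalized.
    """
    T_q_map_1 = T_r1_map @ T_q_r1     # query pose via reference 1
    T_q_map_2 = T_r2_map @ T_q_r2     # query pose via reference 2
    diff = torch.linalg.inv(T_q_map_1) @ T_q_map_2
    eye = torch.eye(4, device=diff.device).expand_as(diff)
    return ((diff - eye) ** 2).mean()  # zero when the two estimates agree
```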
arXiv Detail & Related papers (2020-11-01T19:24:27Z)
- DASGIL: Domain Adaptation for Semantic and Geometric-aware Image-based Localization [27.294822556484345]
Long-term visual localization under changing environments is a challenging problem in autonomous driving and mobile robotics.
We propose a novel multi-task architecture to fuse the geometric and semantic information into the multi-scale latent embedding representation for visual place recognition.
arXiv Detail & Related papers (2020-10-01T17:44:25Z)
- Improving Learning Effectiveness For Object Detection and Classification in Cluttered Backgrounds [6.729108277517129]
This paper develops a framework for autonomously generating a training dataset with heterogeneous cluttered backgrounds.
The learning effectiveness of the proposed framework is shown to improve in complex and heterogeneous environments.
The performance of the proposed framework is investigated through empirical tests and compared with that of the model trained with the COCO dataset.
arXiv Detail & Related papers (2020-02-27T22:28:48Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.