Active Visual Localization in Partially Calibrated Environments
- URL: http://arxiv.org/abs/2012.04263v1
- Date: Tue, 8 Dec 2020 08:00:55 GMT
- Title: Active Visual Localization in Partially Calibrated Environments
- Authors: Yingda Yin, Qingnan Fan, Fei Xia, Qihang Fang, Siyan Dong, Leonidas Guibas, Baoquan Chen
- Abstract summary: Humans can robustly localize themselves without a map after getting lost, by following prominent visual cues or landmarks.
In this work, we aim to endow autonomous agents with the same ability. This ability is important in robotics applications yet very challenging when an agent is exposed to partially calibrated environments.
We propose an indoor scene dataset ACR-6, which consists of both synthetic and real data and simulates challenging scenarios for active visual localization.
- Score: 35.48595012305253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans can robustly localize themselves without a map after getting
lost, by following prominent visual cues or landmarks. In this work, we aim to
endow autonomous agents with the same ability. This ability is important in
robotics applications yet very challenging when an agent is exposed to partially
applications yet very challenging when an agent is exposed to partially
calibrated environments, where camera images with accurate 6 Degree-of-Freedom
pose labels only cover part of the scene. To address the above challenge, we
explore using Reinforcement Learning to search for a policy to generate
intelligent motions so as to actively localize the agent given visual
information in partially calibrated environments. Our core contribution is to
formulate the active visual localization problem as a Partially Observable
Markov Decision Process and propose an algorithmic framework based on Deep
Reinforcement Learning to solve it. We further propose an indoor scene dataset
ACR-6, which consists of both synthetic and real data and simulates challenging
scenarios for active visual localization. We benchmark our algorithm against
handcrafted baselines for localization and demonstrate that our approach
significantly outperforms them on localization success rate.
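The POMDP framing above can be sketched as a toy environment in which the true pose is hidden state and only views inside the calibrated region yield an accurate pose observation. The sketch below is a minimal illustration under assumed names (the paper's actual architecture and interfaces are not reproduced here), and a random policy stands in for the learned deep-RL policy.

```python
import random

# Illustrative sketch only: all class and method names are assumptions,
# not the paper's actual code or interfaces.

class GridLocalizationPOMDP:
    """Agent moves on an n x n grid; only a 'calibrated' sub-region yields
    an accurate pose observation. The true position is hidden state; the
    policy only sees a partial observation (here, a 'localized' flag)."""

    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

    def __init__(self, n=8, calibrated=None, seed=0):
        self.n = n
        self.rng = random.Random(seed)
        # Cells with accurate pose labels (the "partially calibrated" part).
        self.calibrated = calibrated or {
            (x, y) for x in range(n // 2) for y in range(n // 2)
        }
        self.reset()

    def reset(self):
        # Hidden state: the true agent position, never shown to the policy.
        self.pos = (self.rng.randrange(self.n), self.rng.randrange(self.n))
        return self._observe()

    def _observe(self):
        # Partial observation: the policy never sees self.pos directly.
        return {"localized": self.pos in self.calibrated}

    def step(self, action):
        dx, dy = self.ACTIONS[action]
        x = min(max(self.pos[0] + dx, 0), self.n - 1)
        y = min(max(self.pos[1] + dy, 0), self.n - 1)
        self.pos = (x, y)
        obs = self._observe()
        # Sparse reward: success once the agent reaches a calibrated view,
        # with a small step penalty to encourage short paths.
        reward = 1.0 if obs["localized"] else -0.01
        return obs, reward, obs["localized"]

def run_episode(env, policy, max_steps=200):
    obs, steps = env.reset(), 0
    while steps < max_steps:
        obs, reward, done = env.step(policy(obs))
        steps += 1
        if done:
            return True, steps
    return False, steps

# A random policy stands in for the learned deep-RL policy.
env = GridLocalizationPOMDP()
success, steps = run_episode(env, lambda obs: env.rng.randrange(4))
```

A learned policy would replace the random one and be trained to maximize the episode return, i.e., to reach a calibrated view in as few motions as possible.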
Related papers
- Leveraging Spatial Attention and Edge Context for Optimized Feature Selection in Visual Localization [0.0]
We introduce an attention network that selectively targets informative regions of the image.
Using this network, we identify the highest-scoring features to improve the feature selection process and combine the result with edge detection.
Our approach was tested on the outdoor benchmark dataset, demonstrating superior results compared to previous methods.
arXiv Detail & Related papers (2024-10-16T05:00:51Z)
- Swarm Intelligence in Geo-Localization: A Multi-Agent Large Vision-Language Model Collaborative Framework [51.26566634946208]
We introduce smileGeo, a novel visual geo-localization framework.
By inter-agent communication, smileGeo integrates the inherent knowledge of these agents with additional retrieved information.
Results show that our approach significantly outperforms current state-of-the-art methods.
arXiv Detail & Related papers (2024-08-21T03:31:30Z)
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- Point-Level Region Contrast for Object Detection Pre-Training [147.47349344401806]
We present point-level region contrast, a self-supervised pre-training approach for the task of object detection.
Our approach performs contrastive learning by directly sampling individual point pairs from different regions.
Compared to an aggregated representation per region, our approach is more robust to the change in input region quality.
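The contrastive idea described here can be illustrated with an InfoNCE-style loss over point pairs, where points from the same region form positives and points from other regions form negatives. This is a toy sketch with hypothetical 2-D feature values, not the paper's implementation.

```python
import math

# Illustrative sketch only: toy 2-D point features standing in for learned
# per-point embeddings; names and values are assumptions.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Standard InfoNCE: pull the positive pair together, push negatives away."""
    logits = [dot(anchor, positive) / temperature] + [
        dot(anchor, n) / temperature for n in negatives
    ]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Two regions whose points have similar features within each region.
region_a = [[1.0, 0.1], [0.9, 0.2]]
region_b = [[-1.0, 0.0], [-0.8, -0.1]]

# Anchor and positive sampled from region A, negatives from region B.
loss = info_nce(region_a[0], region_a[1], region_b)
```

Sampling individual point pairs, rather than one pooled feature per region, is what the abstract credits for robustness to poor-quality input regions: a few bad points perturb only their own pairs, not the whole region's representation.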
arXiv Detail & Related papers (2022-02-09T18:56:41Z)
- CrowdDriven: A New Challenging Dataset for Outdoor Visual Localization [44.97567243883994]
We propose a new benchmark for visual localization in outdoor scenes using crowd-sourced data.
We show that our dataset is very challenging, with all evaluated methods failing on its hardest parts.
As part of the dataset release, we provide the tooling used to generate it, enabling efficient and effective 2D correspondence annotation.
arXiv Detail & Related papers (2021-09-09T19:25:48Z)
- Self-supervised Segmentation via Background Inpainting [96.10971980098196]
We introduce a self-supervised detection and segmentation approach that can work with single images captured by a potentially moving camera.
We exploit a self-supervised loss function to train a proposal-based segmentation network.
We apply our method to human detection and segmentation in images that visually depart from those of standard benchmarks and outperform existing self-supervised methods.
arXiv Detail & Related papers (2020-11-11T08:34:40Z)
- Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
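The key insight of this transform-consistency idea can be illustrated with toy 2-D poses (x, y, theta): composing a reference's map pose with the query's estimated relative pose must give the same absolute pose whichever reference is used. The sketch below uses illustrative names and planar poses; the paper works with full camera poses.

```python
import math

# Illustrative sketch only: 2-D poses (x, y, theta) stand in for full
# camera poses; all names and values are assumptions.

def compose(a, b):
    """Pose composition a . b: apply pose b expressed in a's frame."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse pose, so that compose(invert(p), p) is the identity."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) - y * math.cos(t),
            -t)

def consistency_error(pose_a, pose_b):
    """Gap between two absolute-pose estimates of the same query image;
    a transform-consistency loss would drive this toward zero."""
    return max(abs(u - v) for u, v in zip(pose_a, pose_b))

# Map poses of two reference images and the (unknown to the network) query.
ref_a = (1.0, 2.0, 0.3)
ref_b = (-0.5, 1.0, -0.7)
query = (2.0, 0.5, 1.1)

# Relative poses a registration network would estimate from image pairs.
rel_a = compose(invert(ref_a), query)  # query in ref_a's frame
rel_b = compose(invert(ref_b), query)  # query in ref_b's frame

# Both references must yield the same absolute pose for the query.
est_a = compose(ref_a, rel_a)
est_b = compose(ref_b, rel_b)
error = consistency_error(est_a, est_b)
```

Because the consistency target depends only on agreement between estimates, no ground-truth query pose is needed, which is what makes the training self-supervised.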
arXiv Detail & Related papers (2020-11-01T19:24:27Z)
- DASGIL: Domain Adaptation for Semantic and Geometric-aware Image-based Localization [27.294822556484345]
Long-term visual localization under changing environments is a challenging problem in autonomous driving and mobile robotics.
We propose a novel multi-task architecture to fuse the geometric and semantic information into the multi-scale latent embedding representation for visual place recognition.
arXiv Detail & Related papers (2020-10-01T17:44:25Z)
- POMP: Pomcp-based Online Motion Planning for active visual search in indoor environments [89.43830036483901]
We focus on the problem of learning an optimal policy for Active Visual Search (AVS) of objects in known indoor environments with an online setup.
Our POMP method takes as input the current pose of an agent and an RGB-D frame.
We validate our method on the publicly available AVD benchmark, achieving an average success rate of 0.76 with an average path length of 17.1.
arXiv Detail & Related papers (2020-09-17T08:23:50Z)
- Domain-invariant Similarity Activation Map Contrastive Learning for Retrieval-based Long-term Visual Localization [30.203072945001136]
In this work, a general architecture is first formulated probabilistically to extract domain-invariant features through multi-domain image translation.
And then a novel gradient-weighted similarity activation mapping loss (Grad-SAM) is incorporated for finer localization with high accuracy.
Extensive experiments have been conducted to validate the effectiveness of the proposed approach on the CMUSeasons dataset.
Our performance is on par with or even outperforms the state-of-the-art image-based localization baselines in medium or high precision.
arXiv Detail & Related papers (2020-09-16T14:43:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.