Location retrieval using visible landmarks based qualitative place signatures
- URL: http://arxiv.org/abs/2208.00783v1
- Date: Tue, 26 Jul 2022 13:57:49 GMT
- Title: Location retrieval using visible landmarks based qualitative place signatures
- Authors: Lijun Wei, Valerie Gouet-Brunet, Anthony Cohn
- Abstract summary: A qualitative location retrieval method is proposed in this work by describing locations/places using qualitative place signatures (QPS).
After dividing the space into place cells each with individual signatures attached, a coarse-to-fine location retrieval method is proposed to efficiently identify the possible location(s) of viewers based on their qualitative observations.
- Score: 0.7119463843130092
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Location retrieval based on visual information aims to retrieve the
location of an agent (e.g. a human or robot), or the area they see, by comparing
the observations with a certain form of representation of the environment.
Existing methods generally require precise measurement and storage of the
observed environment features, which may not always be robust to changes in
season, viewpoint, occlusion, etc. They are also challenging to scale up and may
not be applicable to humans due to the lack of measuring/imaging devices.
Considering
that humans often use less precise but easily produced qualitative spatial
language and high-level semantic landmarks when describing an environment, a
qualitative location retrieval method is proposed in this work by describing
locations/places using qualitative place signatures (QPS), defined as the
perceived spatial relations between ordered pairs of co-visible landmarks from
viewers' perspective. After dividing the space into place cells each with
individual signatures attached, a coarse-to-fine location retrieval method is
proposed to efficiently identify the possible location(s) of viewers based on
their qualitative observations. The usability and effectiveness of the proposed
method were evaluated on openly available landmark datasets, together with
simulated observations that account for possible perception errors.
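To make the idea of a qualitative place signature more concrete, here is a minimal sketch in Python. It assumes one specific qualitative relation (whether, for each ordered pair of co-visible landmarks, the second appears to the viewer's left or right of the first) and toy helper names (qps_from_bearings, retrieve_location); the paper's actual relation set, signature encoding, and coarse-to-fine procedure may differ.

```python
from itertools import permutations

def qps_from_bearings(bearings):
    """Toy qualitative place signature (QPS) for one viewpoint.

    `bearings` maps landmark id -> bearing in degrees (clockwise from north)
    as perceived by the viewer. For every ordered pair of co-visible
    landmarks we record one qualitative relation: whether the second
    landmark appears to the viewer's 'left' or 'right' of the first.
    The paper's QPS may use finer-grained relative directions.
    """
    signature = {}
    for a, b in permutations(bearings, 2):
        diff = (bearings[b] - bearings[a]) % 360.0
        signature[(a, b)] = "right" if diff < 180.0 else "left"
    return signature

def signature_similarity(observed, stored):
    """Fraction of the observed pairwise relations reproduced by `stored`."""
    if not observed:
        return 0.0
    agree = sum(rel == stored.get(pair) for pair, rel in observed.items())
    return agree / len(observed)

def retrieve_location(observed, place_cells, coarse_top_k=3):
    """Illustrative coarse-to-fine retrieval over place cells.

    Coarse step: keep the cells whose signatures share the most landmarks
    with the observation. Fine step: rank those candidates by how many of
    the observed pairwise relations they reproduce.
    """
    observed_landmarks = {a for a, _ in observed}
    coarse = sorted(
        place_cells.items(),
        key=lambda kv: len(observed_landmarks & {a for a, _ in kv[1]}),
        reverse=True,
    )[:coarse_top_k]
    return sorted(
        ((cell, signature_similarity(observed, sig)) for cell, sig in coarse),
        key=lambda cs: cs[1],
        reverse=True,
    )

# Example with two hypothetical place cells and one noisy observation.
cells = {
    "cell_A": qps_from_bearings({"church": 10.0, "tower": 95.0, "statue": 200.0}),
    "cell_B": qps_from_bearings({"church": 300.0, "tower": 40.0, "bridge": 120.0}),
}
observation = qps_from_bearings({"church": 15.0, "tower": 100.0, "statue": 190.0})
print(retrieve_location(observation, cells))  # cell_A should rank first
```

In a full system the per-cell signatures would be precomputed offline from a landmark database, and the coarse step (filtering by which landmarks are co-visible) would prune candidates before the finer pairwise-relation comparison, mirroring the coarse-to-fine strategy described in the abstract.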
Related papers
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions include a data-driven approach with a simple architecture designed for real-time operation, a self-supervised training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z) - Mapping High-level Semantic Regions in Indoor Environments without Object Recognition [50.624970503498226]
The present work proposes a method for semantic region mapping via embodied navigation in indoor environments.
To enable region identification, the method uses a vision-to-language model to provide scene information for mapping.
By projecting egocentric scene understanding into the global frame, the proposed method generates a semantic map as a distribution over possible region labels at each location.
arXiv Detail & Related papers (2024-03-11T18:09:50Z) - LoCUS: Learning Multiscale 3D-consistent Features from Posed Images [18.648772607057175]
We train a versatile neural representation without supervision.
We find that it is possible to balance retrieval and reusability by constructing a retrieval set carefully.
We show results creating sparse, multi-scale, semantic spatial maps.
arXiv Detail & Related papers (2023-10-02T11:11:23Z) - View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z) - Robust Self-Tuning Data Association for Geo-Referencing Using Lane Markings [44.4879068879732]
This paper presents a complete pipeline for resolving ambiguities during the data association.
Its core is a robust self-tuning data association that adapts the search area depending on the entropy of the measurements.
We evaluate our method on real data from urban and rural scenarios around the city of Karlsruhe in Germany.
arXiv Detail & Related papers (2022-07-28T12:29:39Z) - Real-time Outdoor Localization Using Radio Maps: A Deep Learning Approach [59.17191114000146]
LocUNet: A convolutional, end-to-end trained neural network (NN) for the localization task.
We show that LocUNet can localize users with state-of-the-art accuracy and enjoys high robustness to inaccuracies in the estimations of radio maps.
arXiv Detail & Related papers (2021-06-23T17:27:04Z) - Ordinal UNLOC: Target Localization with Noisy and Incomplete Distance Measures [1.6836876499886007]
A main challenge in target localization arises from the lack of reliable distance measures.
We develop a new computational framework to estimate the location of a target without the need for reliable distance measures.
arXiv Detail & Related papers (2021-05-06T13:54:31Z) - Content-Based Detection of Temporal Metadata Manipulation [91.34308819261905]
We propose an end-to-end approach to verify whether the purported time of capture of an image is consistent with its content and geographic location.
The central idea is the use of supervised consistency verification, in which we predict the probability that the image content, capture time, and geographical location are consistent.
Our approach improves upon previous work on a large benchmark dataset, increasing the classification accuracy from 59.03% to 81.07%.
arXiv Detail & Related papers (2021-03-08T13:16:19Z) - SIRI: Spatial Relation Induced Network For Spatial Description Resolution [64.38872296406211]
We propose a novel Spatial Relation Induced (SIRI) network for language-guided localization.
We show that our method is around 24% better than the state-of-the-art method in terms of accuracy, measured within an 80-pixel radius of the ground-truth location (an illustrative sketch of this metric appears after the list below).
Our method also generalizes well on our proposed extended dataset collected using the same settings as Touchdown.
arXiv Detail & Related papers (2020-10-27T14:04:05Z) - Semantic Signatures for Large-scale Visual Localization [2.9542356825059715]
This work explores a different path by utilizing high-level semantic information.
It is found that object information in a street view can facilitate localization.
Several metrics and protocols are proposed for signature comparison and retrieval.
arXiv Detail & Related papers (2020-05-07T11:33:10Z)
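For reference, the 80-pixel-radius accuracy mentioned for the SIRI paper above is, under the common spatial-description-resolution protocol, the fraction of predicted pixel locations that fall within a fixed radius of the ground truth. The sketch below uses an assumed function name and made-up coordinates; the cited paper's exact evaluation code may differ.

```python
import math

def accuracy_within_radius(predictions, ground_truth, radius_px=80.0):
    """Fraction of predicted (x, y) pixel locations falling within
    `radius_px` of the corresponding ground-truth location.

    An illustrative reading of 'accuracy measured within an 80-pixel
    radius'; the cited paper's exact protocol may differ.
    """
    hits = sum(
        math.hypot(px - gx, py - gy) <= radius_px
        for (px, py), (gx, gy) in zip(predictions, ground_truth)
    )
    return hits / len(ground_truth)

# Example with made-up coordinates.
preds = [(120.0, 45.0), (300.0, 210.0)]
truth = [(130.0, 50.0), (500.0, 220.0)]
print(accuracy_within_radius(preds, truth))  # 0.5: only the first prediction is within 80 px
```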
This list is automatically generated from the titles and abstracts of the papers on this site.