Deep Dense Local Feature Matching and Vehicle Removal for Indoor Visual
Localization
- URL: http://arxiv.org/abs/2205.12544v1
- Date: Wed, 25 May 2022 07:32:37 GMT
- Authors: Kyung Ho Park
- Abstract summary: We propose a visual localization framework that robustly finds the match for a query among the images collected from indoor parking lots.
We employ a deep dense local feature matching that resembles human perception to find correspondences.
Our method achieves 86.9 percent accuracy, outperforming the alternatives.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual localization is an essential component of intelligent transportation
systems, enabling broad applications that require understanding one's self-location
when other sensors are not available. It is mostly tackled by image
retrieval such that the location of a query image is determined by its closest
match in the previously collected images. Existing approaches focus on large
scale localization where landmarks are helpful in finding the location.
However, visual localization becomes challenging in small scale environments
where objects are hardly recognizable. In this paper, we propose a visual
localization framework that robustly finds the match for a query among the
images collected from indoor parking lots. This is a challenging setting because the
vehicles in the images share similar appearances and are frequently replaced, as is
typical of parking lots. We propose to employ deep dense local feature matching,
which resembles human perception, to find correspondences, and to eliminate matches
on vehicles automatically with a vehicle detector. The proposed solution is
robust to scenes with low texture and invariant to false matches caused by
vehicles. We compare our framework with alternatives to validate its
superiority on a benchmark dataset containing 267 pre-collected images and 99
query images taken from 34 sections of a parking lot. Our method achieves 86.9
percent accuracy, outperforming the alternatives.
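The match-filtering idea described in the abstract can be sketched as follows: given dense correspondences between a query and a database image, discard any match whose query keypoint falls inside a detected vehicle bounding box, then rank database images by the number of surviving matches. This is a minimal illustrative sketch of that pipeline, not the paper's actual code; all function names and the toy data are assumptions.

```python
# Hedged sketch: filter out correspondences that land on detected vehicles,
# then rank database images by the count of surviving matches.
import numpy as np

def filter_vehicle_matches(query_pts, vehicle_boxes):
    """Keep only matches whose query keypoint lies outside every vehicle box.

    query_pts:     (N, 2) array of (x, y) keypoint locations in the query.
    vehicle_boxes: list of (x_min, y_min, x_max, y_max) detector outputs.
    Returns a boolean mask over the N matches.
    """
    keep = np.ones(len(query_pts), dtype=bool)
    for x0, y0, x1, y1 in vehicle_boxes:
        inside = ((query_pts[:, 0] >= x0) & (query_pts[:, 0] <= x1) &
                  (query_pts[:, 1] >= y0) & (query_pts[:, 1] <= y1))
        keep &= ~inside
    return keep

def rank_database(match_pts_per_image, vehicle_boxes):
    """Score each database image by its number of non-vehicle matches."""
    scores = [int(filter_vehicle_matches(pts, vehicle_boxes).sum())
              for pts in match_pts_per_image]
    return int(np.argmax(scores)), scores

# Toy example: a vehicle occupies the left half of the query image.
boxes = [(0, 0, 50, 100)]
db0 = np.array([[10, 20], [30, 40], [80, 10]])   # two matches fall on the vehicle
db1 = np.array([[60, 20], [70, 40], [90, 10]])   # all matches off the vehicle
best, scores = rank_database([db0, db1], boxes)
print(best, scores)  # -> 1 [1, 3]
```

With the vehicle matches removed, the second database image wins despite both images having three raw correspondences, which is the invariance to vehicle-induced false matches the abstract claims.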
Related papers
- Breaking the Frame: Visual Place Recognition by Overlap Prediction [53.17564423756082]
We propose a novel visual place recognition approach based on overlap prediction, called VOP.
VOP identifies co-visible image sections by obtaining patch-level embeddings using a Vision Transformer backbone.
Our approach uses a voting mechanism to assess overlap scores for potential database images.
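One plausible reading of the voting mechanism above: each query patch's nearest database patch casts a vote for the database image it belongs to, and images are scored by vote count. This is an illustrative sketch of that reading, not VOP's actual implementation; all names and the toy embeddings are assumptions.

```python
# Illustrative patch-level voting: each query patch embedding votes for the
# database image that owns its nearest-neighbor database patch.
import numpy as np

def vote_overlap_scores(query_patches, db_patches, db_image_ids):
    """query_patches: (Q, D); db_patches: (P, D); db_image_ids: length-P list."""
    votes = {}
    for q in query_patches:
        nearest = int(np.argmin(np.linalg.norm(db_patches - q, axis=1)))
        img = db_image_ids[nearest]
        votes[img] = votes.get(img, 0) + 1
    return votes

# Toy 2-D "embeddings": image 0 owns the first two patches, image 1 the third.
db = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]])
ids = [0, 0, 1]
query = np.array([[0.1, 0.0], [9.0, 9.0], [1.1, 0.9]])
print(vote_overlap_scores(query, db, ids))  # -> {0: 2, 1: 1}
```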
arXiv Detail & Related papers (2024-06-23T20:00:20Z)
- DisPlacing Objects: Improving Dynamic Vehicle Detection via Visual Place Recognition under Adverse Conditions [29.828201168816243]
We investigate whether a prior map can be leveraged to aid in the detection of dynamic objects in a scene without the need for a 3D map.
We contribute an algorithm which refines an initial set of candidate object detections and produces a refined subset of highly accurate detections using a prior map.
arXiv Detail & Related papers (2023-06-30T10:46:51Z)
- View-Invariant Localization using Semantic Objects in Changing Environments [42.552452681642364]
This paper proposes a novel framework for real-time localization and egomotion tracking of a vehicle in a reference map.
The core idea is to map the semantic objects observed by the vehicle and register them to their corresponding objects in the reference map.
arXiv Detail & Related papers (2022-09-28T21:26:38Z)
- Image-to-Image Translation for Autonomous Driving from Coarsely-Aligned Image Pairs [57.33431586417377]
A self-driving car must be able to handle adverse weather conditions to operate safely.
In this paper, we investigate the idea of turning sensor inputs captured in an adverse condition into a benign one.
We show that our coarsely-aligned training scheme leads to a better image translation quality and improved downstream tasks.
arXiv Detail & Related papers (2022-09-23T16:03:18Z)
- Satellite Image Based Cross-view Localization for Autonomous Vehicle [59.72040418584396]
This paper shows that by using an off-the-shelf high-definition satellite image as a ready-to-use map, we are able to achieve cross-view vehicle localization up to a satisfactory accuracy.
Our method is validated on KITTI and Ford Multi-AV Seasonal datasets as ground view and Google Maps as the satellite view.
arXiv Detail & Related papers (2022-07-27T13:16:39Z)
- Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z)
- Semantic Image Alignment for Vehicle Localization [111.59616433224662]
We present a novel approach to vehicle localization in dense semantic maps using semantic segmentation from a monocular camera.
In contrast to existing visual localization approaches, the system does not require additional keypoint features, handcrafted localization landmark extractors or expensive LiDAR sensors.
arXiv Detail & Related papers (2021-10-08T14:40:15Z)
- Localization of Autonomous Vehicles: Proof of Concept for A Computer Vision Approach [0.0]
This paper introduces a visual-based localization method for autonomous vehicles (AVs) that operate in the absence of any complicated hardware system but a single camera.
The proposed system is tested on the KITTI dataset and has shown an average accuracy of 2 meters in finding the final location of the vehicle.
arXiv Detail & Related papers (2021-04-06T21:09:47Z)
- What is the Best Grid-Map for Self-Driving Cars Localization? An Evaluation under Diverse Types of Illumination, Traffic, and Environment [10.64191129882262]
Localization of self-driving cars is needed for several tasks such as keeping maps updated, tracking objects, and planning.
Since maintaining and using several maps is computationally expensive, it is important to analyze which type of map is more adequate for each application.
In this work, we provide data for such analysis by comparing the accuracy of a particle filter localization when using occupancy, reflectivity, color, or semantic grid maps.
arXiv Detail & Related papers (2020-09-19T22:02:44Z)
- Geometrically Mappable Image Features [85.81073893916414]
Vision-based localization of an agent in a map is an important problem in robotics and computer vision.
We propose a method that learns image features targeted for image-retrieval-based localization.
arXiv Detail & Related papers (2020-03-21T15:36:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.