Localization of Autonomous Vehicles: Proof of Concept for A Computer
Vision Approach
- URL: http://arxiv.org/abs/2104.02785v1
- Date: Tue, 6 Apr 2021 21:09:47 GMT
- Title: Localization of Autonomous Vehicles: Proof of Concept for A Computer
Vision Approach
- Authors: Sara Zahedian, Kaveh Farokhi Sadabadi, Amir Nohekhan
- Abstract summary: This paper introduces a vision-based localization method for autonomous vehicles (AVs) that requires no complicated hardware beyond a single camera.
The proposed system is tested on the KITTI dataset and achieves an average accuracy of 2 meters in finding the final location of the vehicle.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a vision-based localization method for
autonomous vehicles (AVs) that requires no complicated hardware beyond a
single camera. Visual localization refers to techniques that aim to find the
location of an object based on visual information about its surrounding area.
The problem of localization has been of interest for many years; however,
visual localization is a relatively new subject in the transportation
literature. Moreover, its inevitable application to autonomous vehicles
demands special attention from the transportation community. This study
proposes a two-step localization method that requires only a database of
geotagged images and a vehicle-mounted camera that captures pictures while
the car is moving. The first step, image retrieval, uses the SIFT local
feature descriptor to find an initial location for the vehicle through image
matching. The second step applies a Kalman filter to estimate a more accurate
location as the vehicle moves. All stages of the method are implemented as a
complete system using Python libraries. The proposed system is tested on the
KITTI dataset and achieves an average accuracy of 2 meters in finding the
final location of the vehicle.
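To make the two-step pipeline concrete, the sketch below pairs a SIFT-based image-retrieval step against a geotagged database with a Kalman-filter refinement step, in the spirit of the abstract. It is a minimal illustration, not the authors' implementation: the database format, the ratio-test threshold, the constant-velocity motion model, and the noise parameters are all assumptions.

```python
# Minimal sketch of the two-step pipeline described above (assumes OpenCV and
# NumPy). The geotagged-database format, ratio-test threshold, motion model,
# and noise values are illustrative assumptions, not the authors' settings.
import cv2
import numpy as np

def retrieve_initial_location(query_img, geotagged_db):
    """Step 1: image retrieval. geotagged_db is assumed to be a list of
    (grayscale image, (lat, lon)) pairs; returns the geotag of the best match."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    _, q_desc = sift.detectAndCompute(query_img, None)

    best_tag, best_score = None, 0
    for db_img, tag in geotagged_db:
        _, d_desc = sift.detectAndCompute(db_img, None)
        if q_desc is None or d_desc is None:
            continue
        # Lowe's ratio test keeps only distinctive SIFT correspondences.
        pairs = [p for p in matcher.knnMatch(q_desc, d_desc, k=2) if len(p) == 2]
        good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
        if len(good) > best_score:
            best_score, best_tag = len(good), tag
    return best_tag

class PositionKalman:
    """Step 2: refine the position as the vehicle moves, using a
    constant-velocity Kalman filter over planar (x, y) coordinates."""
    def __init__(self, x0, y0, dt=0.1):
        self.x = np.array([x0, y0, 0.0, 0.0])            # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                        # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # constant-velocity model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # only position is observed
        self.Q = np.eye(4) * 0.01                        # process noise (assumed)
        self.R = np.eye(2) * 4.0                         # measurement noise (assumed)

    def step(self, measured_xy):
        # Predict with the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the position obtained from image retrieval.
        y = np.asarray(measured_xy, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                # refined (x, y) estimate
```

In use, each retrieved geotag would first be projected into a local planar frame before being passed to PositionKalman.step.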
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- A Survey on Visual Map Localization Using LiDARs and Cameras [0.0]
We define visual map localization as a two-stage process.
At the stage of place recognition, the initial position of the vehicle in the map is determined by comparing the visual sensor output with a set of geo-tagged map regions of interest.
At the stage of map metric localization, the vehicle is tracked while it moves across the map by continuously aligning the visual sensors' output with the current area of the map that is being traversed.
arXiv Detail & Related papers (2022-08-05T20:11:18Z)
- Satellite Image Based Cross-view Localization for Autonomous Vehicle [59.72040418584396]
This paper shows that by using an off-the-shelf high-definition satellite image as a ready-to-use map, we are able to achieve cross-view vehicle localization with satisfactory accuracy.
Our method is validated on KITTI and Ford Multi-AV Seasonal datasets as ground view and Google Maps as the satellite view.
arXiv Detail & Related papers (2022-07-27T13:16:39Z)
- Deep Dense Local Feature Matching and Vehicle Removal for Indoor Visual Localization [0.0]
We propose a visual localization framework that robustly finds the match for a query among the images collected from indoor parking lots.
We employ deep dense local feature matching that resembles human perception to find correspondences.
Our method achieves 86.9 percent accuracy, outperforming the alternatives.
arXiv Detail & Related papers (2022-05-25T07:32:37Z)
- Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z)
- Semantic Image Alignment for Vehicle Localization [111.59616433224662]
We present a novel approach to vehicle localization in dense semantic maps using semantic segmentation from a monocular camera.
In contrast to existing visual localization approaches, the system does not require additional keypoint features, handcrafted localization landmark extractors or expensive LiDAR sensors.
arXiv Detail & Related papers (2021-10-08T14:40:15Z)
- Real-time Geo-localization Using Satellite Imagery and Topography for Unmanned Aerial Vehicles [18.71806336611299]
We propose a framework that is reliable in changing scenes and pragmatic for lightweight embedded systems on UAVs.
The framework is comprised of two stages: offline database preparation and online inference.
We present field experiments of image-based localization on two different UAV platforms to validate our results.
arXiv Detail & Related papers (2021-08-07T01:47:19Z)
- Coarse-to-fine Semantic Localization with HD Map for Autonomous Driving in Structural Scenes [1.1024591739346292]
We propose a cost-effective vehicle localization system with an HD map for autonomous driving, using cameras as primary sensors.
We formulate vision-based localization as a data association problem that maps visual semantics to landmarks in the HD map.
We evaluate our method on two datasets and demonstrate that the proposed approach yields promising localization results in different driving scenarios.
arXiv Detail & Related papers (2021-07-06T11:58:55Z)
- Connecting Language and Vision for Natural Language-Based Vehicle Retrieval [77.88818029640977]
In this paper, we apply a new modality, i.e., the language description, to search for the vehicle of interest.
To connect language and vision, we propose to jointly train the state-of-the-art vision models with the transformer-based language model.
Our proposed method achieved 1st place in the 5th AI City Challenge, yielding a competitive 18.69% MRR accuracy.
arXiv Detail & Related papers (2021-05-31T11:42:03Z)
- Visual Localization for Autonomous Driving: Mapping the Accurate Location in the City Maze [16.824901952766446]
We propose a novel feature voting technique for visual localization.
In our work, we craft the proposed feature voting method into three state-of-the-art visual localization networks.
Our approach can predict location robustly even in challenging inner-city settings.
arXiv Detail & Related papers (2020-08-13T03:59:34Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)