View-Invariant Localization using Semantic Objects in Changing Environments
- URL: http://arxiv.org/abs/2209.14426v1
- Date: Wed, 28 Sep 2022 21:26:38 GMT
- Title: View-Invariant Localization using Semantic Objects in Changing Environments
- Authors: Jacqueline Ankenbauer, Kaveh Fathian, Jonathan P. How
- Abstract summary: This paper proposes a novel framework for real-time localization and egomotion tracking of a vehicle in a reference map.
The core idea is to map the semantic objects observed by the vehicle and register them to their corresponding objects in the reference map.
- Score: 42.552452681642364
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a novel framework for real-time localization and
egomotion tracking of a vehicle in a reference map. The core idea is to map the
semantic objects observed by the vehicle and register them to their
corresponding objects in the reference map. While several recent works have
leveraged semantic information for cross-view localization, the main
contribution of this work is a view-invariant formulation that makes the
approach directly applicable to any viewpoint configuration for which objects
are detectable. Another distinctive feature is robustness to changes in the
environment/objects due to a data association scheme suited for extreme outlier
regimes (e.g., 90% association outliers). To demonstrate our framework, we
consider an example of localizing a ground vehicle in a reference object map
using only cars as objects. While only a stereo camera is used for the ground
vehicle, we consider reference maps constructed a priori from ground viewpoints
using stereo cameras and Lidar scans, and georeferenced aerial images captured
at a different date to demonstrate the framework's robustness to different
modalities, viewpoints, and environment changes. Evaluations on the KITTI
dataset show that over a 3.7 km trajectory, localization occurs in 36 sec and
is followed by real-time egomotion tracking with an average position error of
8.5 m in a Lidar reference map, and on an aerial object map where 77% of
objects are outliers, localization is achieved in 71 sec with an average
position error of 7.9 m.
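The core pipeline described above (map the observed semantic objects, then register them to their counterparts in the reference map under an extreme outlier regime) can be illustrated with a simplified sketch. The snippet below is not the authors' data association scheme; it is a generic example, under the assumption that objects are reduced to 2D centroids, using a 2-point RANSAC over a planar rigid transform, which tolerates high outlier ratios of the kind quoted in the abstract (e.g., 77-90%). All function names and parameters are illustrative.

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares 2D rigid transform (R, t) mapping src -> dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # reject reflections; keep a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def ransac_register(src, dst, iters=2000, tol=1.0, rng=None):
    """Register putative object correspondences src[i] <-> dst[i]
    by sampling minimal 2-point models; robust to many wrong pairs."""
    rng = np.random.default_rng(rng)
    n = len(src)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        R, t = fit_rigid_2d(src[[i, j]], dst[[i, j]])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the consensus set for the final estimate
    R, t = fit_rigid_2d(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers
```

Note that the paper's formulation is view-invariant and handles cross-modal maps; this sketch only conveys the general idea that a minimal-sample consensus search can localize correctly even when most putative object associations are wrong.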
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- DisPlacing Objects: Improving Dynamic Vehicle Detection via Visual Place Recognition under Adverse Conditions [29.828201168816243]
We investigate whether a prior map can be leveraged to aid in the detection of dynamic objects in a scene without the need for a 3D map.
We contribute an algorithm which refines an initial set of candidate object detections and produces a refined subset of highly accurate detections using a prior map.
arXiv Detail & Related papers (2023-06-30T10:46:51Z)
- Sparse Semantic Map-Based Monocular Localization in Traffic Scenes Using Learned 2D-3D Point-Line Correspondences [29.419138863851526]
Given a query image, the goal is to estimate the camera pose relative to the prior map.
Existing approaches rely heavily on dense point descriptors at the feature level to solve the registration problem.
We propose a sparse semantic map-based monocular localization method, which solves 2D-3D registration via a well-designed deep neural network.
arXiv Detail & Related papers (2022-10-10T10:29:07Z)
- Satellite Image Based Cross-view Localization for Autonomous Vehicle [59.72040418584396]
This paper shows that by using an off-the-shelf high-definition satellite image as a ready-to-use map, cross-view vehicle localization can be achieved with satisfactory accuracy.
Our method is validated on KITTI and Ford Multi-AV Seasonal datasets as ground view and Google Maps as the satellite view.
arXiv Detail & Related papers (2022-07-27T13:16:39Z)
- Semantic Image Alignment for Vehicle Localization [111.59616433224662]
We present a novel approach to vehicle localization in dense semantic maps using semantic segmentation from a monocular camera.
In contrast to existing visual localization approaches, the system does not require additional keypoint features, handcrafted localization landmark extractors or expensive LiDAR sensors.
arXiv Detail & Related papers (2021-10-08T14:40:15Z)
- Coarse-to-fine Semantic Localization with HD Map for Autonomous Driving in Structural Scenes [1.1024591739346292]
We propose a cost-effective vehicle localization system with HD map for autonomous driving using cameras as primary sensors.
We formulate vision-based localization as a data association problem that maps visual semantics to landmarks in HD map.
We evaluate our method on two datasets and demonstrate that the proposed approach yields promising localization results in different driving scenarios.
arXiv Detail & Related papers (2021-07-06T11:58:55Z)
- Radar-based Dynamic Occupancy Grid Mapping and Object Detection [55.74894405714851]
In recent years, the classical occupancy grid map approach has been extended to dynamic occupancy grid maps.
This paper presents the further development of a previous approach.
The data of multiple radar sensors are fused, and a grid-based object tracking and mapping method is applied.
arXiv Detail & Related papers (2020-08-09T09:26:30Z)
- Rethinking Localization Map: Towards Accurate Object Perception with Self-Enhancement Maps [78.2581910688094]
This work introduces a novel self-enhancement method to harvest accurate object localization maps and object boundaries with only category labels as supervision.
In particular, the proposed Self-Enhancement Maps achieve the state-of-the-art localization accuracy of 54.88% on ILSVRC.
arXiv Detail & Related papers (2020-06-09T12:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.