Graph-based Proprioceptive Localization Using a Discrete Heading-Length
Feature Sequence Matching Approach
- URL: http://arxiv.org/abs/2005.13704v1
- Date: Wed, 27 May 2020 23:10:15 GMT
- Title: Graph-based Proprioceptive Localization Using a Discrete Heading-Length
Feature Sequence Matching Approach
- Authors: Hsin-Min Cheng and Dezhen Song
- Abstract summary: Proprioceptive localization refers to a new class of robot egocentric localization methods.
These methods are naturally immune to bad weather, poor lighting conditions, or other extreme environmental conditions.
We provide a low-cost fallback solution for localization under challenging environmental conditions.
- Score: 14.356113113268389
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Proprioceptive localization refers to a new class of robot egocentric
localization methods that do not rely on the perception and recognition of
external landmarks. These methods are naturally immune to bad weather, poor
lighting, and other extreme environmental conditions that may hinder
exteroceptive sensors such as a camera or a laser range finder. These methods
depend on proprioceptive sensors such as inertial measurement units (IMUs)
and/or wheel encoders. Assisted by magnetoreception, the sensors can provide a
rudimentary estimate of the vehicle trajectory, which is used to query a prior
known map to obtain location. Named graph-based proprioceptive localization
(GBPL), our method provides a low-cost fallback solution for localization under
challenging environmental conditions. As a robot/vehicle travels, we extract a
sequence of heading-length values for straight segments from the trajectory and
match the sequence against a pre-processed heading-length graph (HLG),
abstracted from the prior known map, using a graph-matching approach. Using the
information in the HLG, our location alignment and verification module
compensates for trajectory drift, wheel slip, and tire-inflation changes. We
have implemented our algorithm and tested it in both simulated and physical
experiments. The algorithm continuously and successfully finds the robot
location and achieves localization accuracy at the level that the prior map
allows (less than 10 m).
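To make the pipeline concrete, below is a minimal Python sketch of the two steps the abstract describes: compressing a dead-reckoned (x, y) trajectory into a discrete heading-length feature sequence, and matching that sequence against the edges of a heading-length graph. Everything here is an illustrative assumption rather than the paper's actual design: the function names (extract_heading_length, match_sequence, estimate_scale), the thresholds HEADING_TOL and MIN_SEG_LEN, the weights w_heading and w_length, and the brute-force pruned walk that stands in for the paper's graph-matching and verification machinery.

```python
import math

# Illustrative constants (assumptions, not values from the paper): how far the
# heading may wander while the vehicle still counts as driving straight, and
# the shortest straight segment worth keeping as a feature.
HEADING_TOL = math.radians(10.0)
MIN_SEG_LEN = 20.0  # metres

def ang_diff(a, b):
    """Smallest signed angular difference a - b, wrapped to [-pi, pi)."""
    return (a - b + math.pi) % (2.0 * math.pi) - math.pi

def extract_heading_length(poses):
    """Compress a dead-reckoned (x, y) trajectory into a discrete
    heading-length feature sequence: one (heading, length) pair per straight
    segment, with turns acting as segment boundaries."""
    features, seg_start, seg_heading = [], 0, None
    for i in range(1, len(poses)):
        h = math.atan2(poses[i][1] - poses[i - 1][1],
                       poses[i][0] - poses[i - 1][0])
        if seg_heading is None:
            seg_heading = h
        elif abs(ang_diff(h, seg_heading)) > HEADING_TOL:
            # Turn detected: close the current straight segment.
            length = math.dist(poses[seg_start], poses[i - 1])
            if length >= MIN_SEG_LEN:
                features.append((seg_heading, length))
            seg_start, seg_heading = i - 1, h
    length = math.dist(poses[seg_start], poses[-1])
    if length >= MIN_SEG_LEN:
        features.append((seg_heading, length))
    return features

def match_sequence(features, hlg, w_heading=1.0, w_length=0.02):
    """Find the walk of len(features) consecutive edges in the HLG that best
    explains the feature sequence. `hlg` maps a node id to a list of
    (neighbour, edge_heading, edge_length) tuples; a brute-force search with
    pruning stands in for the paper's graph-matching machinery."""
    best_cost, best_path = float("inf"), None

    def walk(node, idx, path, cost):
        nonlocal best_cost, best_path
        if idx == len(features):
            best_cost, best_path = cost, path
            return
        f_heading, f_length = features[idx]
        for nbr, e_heading, e_length in hlg.get(node, ()):
            c = (cost
                 + w_heading * abs(ang_diff(f_heading, e_heading))
                 + w_length * abs(f_length - e_length))
            if c < best_cost:  # prune walks that are already worse
                walk(nbr, idx + 1, path + [nbr], c)

    for start in hlg:
        walk(start, 0, [start], 0.0)
    return best_cost, best_path

def estimate_scale(features, matched_lengths):
    """Least-squares scale factor correcting a systematic odometry length
    error (e.g. wheel slip or a tire-inflation change): the s minimising
    sum((s * measured - map_length)^2) over the matched segments."""
    num = sum(m * g for (_, m), g in zip(features, matched_lengths))
    den = sum(m * m for (_, m), _ in zip(features, matched_lengths))
    return num / den if den > 0.0 else 1.0
```

Feeding the lengths of the matched map edges back into estimate_scale mirrors, under the same assumptions, the role of the alignment and verification module: a scale persistently different from 1 indicates wheel slip or a tire-inflation change, and rescaling the odometry by it compensates for the resulting drift.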
Related papers
- PRISM-TopoMap: Online Topological Mapping with Place Recognition and Scan Matching [42.74395278382559] (arXiv 2024-04-02)
This paper introduces PRISM-TopoMap -- a topological mapping method that maintains a graph of locally aligned locations.
The proposed method involves learnable multimodal place recognition paired with a scan-matching pipeline for localization and loop closure.
We conduct a broad experimental evaluation of the suggested approach in a range of photo-realistic environments and on a real robot.
- Deep Homography Estimation for Visual Place Recognition [49.235432979736395] (arXiv 2024-02-25)
We propose a transformer-based deep homography estimation (DHE) network.
It takes the dense feature map extracted by a backbone network as input and fits homography for fast and learnable geometric verification.
Experiments on benchmark datasets show that our method can outperform several state-of-the-art methods.
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366] (arXiv 2023-07-03)
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
- Energy-Based Models for Cross-Modal Localization using Convolutional Transformers [52.27061799824835] (arXiv 2023-06-06)
We present a novel framework for localizing a ground vehicle mounted with a range sensor against satellite imagery in the absence of GPS.
We propose a method using convolutional transformers that performs accurate metric-level localization in a cross-modal manner.
We train our model end-to-end and demonstrate our approach achieving higher accuracy than the state-of-the-art on KITTI, Pandaset, and a custom dataset.
- Point Cloud Forecasting as a Proxy for 4D Occupancy Forecasting [58.45661235893729] (arXiv 2023-02-25)
One promising self-supervised task is 3D point cloud forecasting from unannotated LiDAR sequences.
We show that this task requires algorithms to implicitly capture (1) sensor extrinsics (i.e., the egomotion of the autonomous vehicle), (2) sensor intrinsics (i.e., the sampling pattern specific to the particular LiDAR sensor), and (3) the shape and motion of other objects in the scene.
We render point cloud data from 4D occupancy predictions given sensor extrinsics and intrinsics, allowing one to train and test occupancy algorithms with unannotated LiDAR sequences.
- HPointLoc: Point-based Indoor Place Recognition using Synthetic RGB-D Images [58.720142291102135] (arXiv 2022-12-30)
We present a novel dataset named HPointLoc, specially designed for exploring the capabilities of visual place recognition in indoor environments.
The dataset is based on the popular Habitat simulator, in which indoor scenes can be generated using both the simulator's own sensor data and open datasets.
- Visual Cross-View Metric Localization with Dense Uncertainty Estimates [11.76638109321532] (arXiv 2022-08-17)
This work addresses visual cross-view metric localization for outdoor robotics.
Given a ground-level color image and a satellite patch that contains the local surroundings, the task is to identify the location of the ground camera within the satellite patch.
We devise a novel network architecture with denser satellite descriptors, similarity matching at the bottleneck, and a dense spatial distribution as output to capture multi-modal localization ambiguities.
- Robot Localization and Navigation through Predictive Processing using LiDAR [0.0] (arXiv 2021-09-09)
We show a proof-of-concept of the predictive processing-inspired approach to perception applied for localization and navigation using laser sensors.
We learn the generative model of the laser through self-supervised learning and perform both online state-estimation and navigation.
Results showed improved state-estimation performance when compared to a state-of-the-art particle filter in the absence of odometry.
- OctoPath: An OcTree Based Self-Supervised Learning Approach to Local Trajectory Planning for Mobile Robots [0.0] (arXiv 2021-06-02)
We introduce OctoPath, which is an encoder-decoder deep neural network, trained in a self-supervised manner to predict the local optimal trajectory for the ego-vehicle.
During training, OctoPath minimizes the error between the predicted and the manually driven trajectories in a given training dataset.
We evaluate the predictions of OctoPath in different driving scenarios, both indoor and outdoor, while benchmarking our system against a baseline hybrid A-Star algorithm.
- LatentSLAM: unsupervised multi-sensor representation learning for localization and mapping [7.857987850592964] (arXiv 2021-05-07)
We propose an unsupervised representation learning method that yields low-dimensional latent state descriptors.
Our method is sensor agnostic and can be applied to any sensor modality.
We show how combining multiple sensors can increase robustness by reducing the number of false matches.
- Rethinking Localization Map: Towards Accurate Object Perception with Self-Enhancement Maps [78.2581910688094] (arXiv 2020-06-09)
This work introduces a novel self-enhancement method to harvest accurate object localization maps and object boundaries with only category labels as supervision.
In particular, the proposed Self-Enhancement Maps achieve the state-of-the-art localization accuracy of 54.88% on ILSVRC.