Robust Place Recognition using an Imaging Lidar
- URL: http://arxiv.org/abs/2103.02111v1
- Date: Wed, 3 Mar 2021 01:08:31 GMT
- Title: Robust Place Recognition using an Imaging Lidar
- Authors: Tixiao Shan, Brendan Englot, Fabio Duarte, Carlo Ratti, Daniela Rus
- Abstract summary: We propose a methodology for robust, real-time place recognition using an imaging lidar.
Our method is truly rotation-invariant and can tackle reverse revisiting and upside-down revisiting.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We propose a methodology for robust, real-time place recognition using an
imaging lidar, which yields image-quality high-resolution 3D point clouds.
Utilizing the intensity readings of an imaging lidar, we project the point
cloud and obtain an intensity image. ORB feature descriptors are extracted from
the image and encoded into a bag-of-words vector. The vector, used to identify
the point cloud, is inserted into a database that is maintained by DBoW for
fast place recognition queries. The returned candidate is further validated by
matching visual feature descriptors. To reject matching outliers, we apply PnP
with RANSAC, minimizing the reprojection error between the visual features'
positions in Euclidean space and their correspondences in 2D image space.
Combining the advantages from both camera and lidar-based place recognition
approaches, our method is truly rotation-invariant and can tackle reverse
revisiting and upside-down revisiting. The proposed method is evaluated on
datasets gathered from a variety of platforms over different scales and
environments. Our implementation is available at
https://git.io/imaging-lidar-place-recognition
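For illustration, the pipeline above can be sketched in a few dozen lines of Python with OpenCV. This is a minimal sketch under stated assumptions, not the authors' implementation: the image resolution, vertical field of view, pinhole intrinsics K, and the names project_to_intensity_image, describe, and verify_candidate are all hypothetical, and the DBoW bag-of-words retrieval step (C++ in the paper) is only noted in a comment.
```python
import cv2
import numpy as np

H, W = 128, 1024                        # assumed lidar image resolution
FOV_UP, FOV_DOWN = np.radians(15.0), np.radians(-15.0)  # assumed vertical FOV

def project_to_intensity_image(points_xyzi):
    """Spherically project an (N, 4) x/y/z/intensity cloud to an 8-bit image."""
    x, y, z, inten = points_xyzi.T
    r = np.maximum(np.linalg.norm(points_xyzi[:, :3], axis=1), 1e-6)
    yaw = np.arctan2(y, x)                            # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))      # elevation angle
    u = np.clip(((yaw / np.pi + 1.0) * 0.5 * (W - 1)).astype(int), 0, W - 1)
    v = np.clip(((FOV_UP - pitch) / (FOV_UP - FOV_DOWN) * (H - 1)).astype(int),
                0, H - 1)
    img = np.zeros((H, W), dtype=np.uint8)
    img[v, u] = np.clip(inten, 0, 255).astype(np.uint8)
    return img

orb = cv2.ORB_create(nfeatures=500)

def describe(points_xyzi):
    """ORB keypoints and descriptors from one scan's intensity image.
    In the paper the descriptors are further encoded into a bag-of-words
    vector and inserted into / queried against a DBoW database; that
    retrieval step is assumed to supply the candidate scan used below."""
    img = project_to_intensity_image(points_xyzi)
    return orb.detectAndCompute(img, None)

def verify_candidate(query_kps, query_desc, cand_desc, cand_points3d, K,
                     min_inliers=20):
    """Validate a retrieved candidate: match binary descriptors, then use
    PnP + RANSAC to reject outliers by minimizing the reprojection error of
    the candidate's 3D feature positions against their 2D correspondences
    in the query image."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(query_desc, cand_desc)
    if len(matches) < 6:                  # PnP needs a handful of points
        return False
    obj = np.float32([cand_points3d[m.trainIdx] for m in matches])     # 3D
    img_pts = np.float32([query_kps[m.queryIdx].pt for m in matches])  # 2D
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img_pts, K, None,
        iterationsCount=100, reprojectionError=3.0)
    return bool(ok) and inliers is not None and len(inliers) >= min_inliers
```
Note that using a pinhole intrinsic matrix K on a spherically projected image is a simplification; in practice the projection model of the specific imaging lidar would be used when minimizing the reprojection error.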
Related papers
- FaVoR: Features via Voxel Rendering for Camera Relocalization [23.7893950095252]
Camera relocalization methods range from dense image alignment to direct camera pose regression from a query image.
We propose a novel approach that leverages a globally sparse yet locally dense 3D representation of 2D features.
By tracking and triangulating landmarks over a sequence of frames, we construct a sparse voxel map optimized to render image patch descriptors observed during tracking.
arXiv Detail & Related papers (2024-09-11T18:58:16Z)
- Breaking the Frame: Visual Place Recognition by Overlap Prediction [53.17564423756082]
We propose a novel visual place recognition approach based on overlap prediction, called VOP.
VOP processes co-visible image sections by obtaining patch-level embeddings using a Vision Transformer backbone.
Our approach uses a voting mechanism to assess overlap scores for potential database images.
arXiv Detail & Related papers (2024-06-23T20:00:20Z)
- CheckerPose: Progressive Dense Keypoint Localization for Object Pose Estimation with Graph Neural Network [66.24726878647543]
Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task.
Recent studies have shown the great potential of dense correspondence-based solutions.
We propose a novel pose estimation algorithm named CheckerPose, which improves on three main aspects.
arXiv Detail & Related papers (2023-03-29T17:30:53Z)
- BEVPlace: Learning LiDAR-based Place Recognition using Bird's Eye View Images [20.30997801125592]
We explore the potential of a different representation in place recognition, i.e. bird's eye view (BEV) images.
A simple VGGNet trained on BEV images achieves performance comparable to state-of-the-art place recognition methods in scenes with slight viewpoint changes.
We develop a method to estimate the position of the query cloud, extending the use of place recognition.
arXiv Detail & Related papers (2023-02-28T05:37:45Z)
- HPointLoc: Point-based Indoor Place Recognition using Synthetic RGB-D Images [58.720142291102135]
We present a novel dataset named HPointLoc, specifically designed for exploring the capabilities of visual place recognition in indoor environments.
The dataset is based on the popular Habitat simulator, which can generate indoor scenes from both its own sensor data and open datasets.
arXiv Detail & Related papers (2022-12-30T12:20:56Z)
- SSC: Semantic Scan Context for Large-Scale Place Recognition [13.228580954956342]
We explore the use of high-level features, namely semantics, to improve the representation ability of descriptors.
We propose a novel global descriptor, Semantic Scan Context, which explores semantic information to represent scenes more effectively.
Our approach outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-07-01T11:51:19Z)
- DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and Lidar.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
arXiv Detail & Related papers (2021-04-08T04:27:32Z)
- Unconstrained Matching of 2D and 3D Descriptors for 6-DOF Pose Estimation [44.66818851668686]
We generate a dataset of matching 2D and 3D points and their corresponding feature descriptors.
To localize the pose of an image at test time, we extract keypoints and feature descriptors from the query image.
The locations of the matched features are used in a robust pose estimation algorithm to predict the location and orientation of the query image.
arXiv Detail & Related papers (2020-05-29T11:17:32Z)
- Learning and Matching Multi-View Descriptors for Registration of Point Clouds [48.25586496457587]
We first propose a multi-view local descriptor, learned from images of multiple views, for describing 3D keypoints.
Then, we develop a robust matching approach that rejects outlier matches via efficient inference.
We demonstrate the improvements our approaches bring to registration on public scanning and multi-view stereo datasets.
arXiv Detail & Related papers (2018-07-16T01:58:27Z)