KDD-LOAM: Jointly Learned Keypoint Detector and Descriptors Assisted
LiDAR Odometry and Mapping
- URL: http://arxiv.org/abs/2309.15394v1
- Date: Wed, 27 Sep 2023 04:10:52 GMT
- Title: KDD-LOAM: Jointly Learned Keypoint Detector and Descriptors Assisted
LiDAR Odometry and Mapping
- Authors: Renlang Huang, Minglei Zhao, Jiming Chen, and Liang Li
- Abstract summary: We propose a tightly coupled keypoint detector and descriptor based on a multi-task fully convolutional network with a probabilistic detection loss.
Experiments on both indoor and outdoor datasets show that our TCKDD achieves state-of-the-art performance in point cloud registration.
We also design a keypoint detector and descriptors-assisted LiDAR odometry and mapping framework (KDD-LOAM), whose real-time odometry relies on keypoint descriptor matching-based RANSAC.
- Score: 9.609585217048664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse keypoint matching based on distinct 3D feature representations can
improve the efficiency and robustness of point cloud registration. Existing
learning-based 3D descriptors and keypoint detectors are either independent or
loosely coupled, so they cannot fully adapt to each other. In this work, we
propose a tightly coupled keypoint detector and descriptor (TCKDD) based on a
multi-task fully convolutional network with a probabilistic detection loss. In
particular, this self-supervised detection loss fully adapts the keypoint
detector to any jointly learned descriptors and benefits the self-supervised
learning of descriptors. Extensive experiments on both indoor and outdoor
datasets show that our TCKDD achieves state-of-the-art performance in point
cloud registration. Furthermore, we design a keypoint detector and
descriptors-assisted LiDAR odometry and mapping framework (KDD-LOAM), whose
real-time odometry relies on keypoint descriptor matching-based RANSAC. The
sparse keypoints are further used for efficient scan-to-map registration and
mapping. Experiments on the KITTI dataset demonstrate that KDD-LOAM significantly
surpasses LOAM and shows competitive performance in odometry.
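The real-time odometry described above builds on keypoint descriptor matching followed by RANSAC pose estimation. The following is a minimal, self-contained Python sketch of that generic matching-plus-RANSAC pipeline, not the authors' implementation: the function names, the mutual nearest-neighbour matching rule, and the thresholds are illustrative assumptions.

```python
import numpy as np

def match_descriptors(desc_src, desc_tgt):
    """Mutual nearest-neighbour matching of L2-normalized descriptors."""
    sim = desc_src @ desc_tgt.T                  # (N, M) cosine similarities
    nn_st = sim.argmax(axis=1)                   # best target index per source keypoint
    nn_ts = sim.argmax(axis=0)                   # best source index per target keypoint
    src_idx = np.arange(desc_src.shape[0])
    mutual = nn_ts[nn_st] == src_idx             # keep only mutually consistent pairs
    return np.stack([src_idx[mutual], nn_st[mutual]], axis=1)

def rigid_fit(P, Q):
    """Least-squares R, t such that Q ~= P @ R.T + t (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def ransac_register(kp_src, kp_tgt, matches, iters=1000, inlier_thresh=0.3, seed=0):
    """Estimate a rigid transform from putative keypoint matches with RANSAC.

    Assumes at least 3 putative matches; thresholds are illustrative."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(matches), dtype=bool)
    for _ in range(iters):
        sample = matches[rng.choice(len(matches), size=3, replace=False)]
        R, t = rigid_fit(kp_src[sample[:, 0]], kp_tgt[sample[:, 1]])
        resid = np.linalg.norm(kp_src[matches[:, 0]] @ R.T + t
                               - kp_tgt[matches[:, 1]], axis=1)
        inliers = resid < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine the pose on all inliers of the best hypothesis.
    R, t = rigid_fit(kp_src[matches[best_inliers, 0]], kp_tgt[matches[best_inliers, 1]])
    return R, t, best_inliers
```

In KDD-LOAM the keypoints and descriptors come from the learned TCKDD network; the sketch only shows the downstream registration structure, with an SVD-based (Kabsch) rigid fit re-estimated on the inliers of the best RANSAC hypothesis.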
Related papers
- D3Former: Jointly Learning Repeatable Dense Detectors and
Feature-enhanced Descriptors via Saliency-guided Transformer [14.056531181678467]
We introduce a saliency-guided transformer, referred to as D3Former, which entails the joint learning of repeatable Detectors and feature-enhanced Descriptors.
Our proposed method consistently outperforms state-of-the-art point cloud matching methods.
arXiv Detail & Related papers (2023-12-20T12:19:17Z)
- Improving the matching of deformable objects by learning to detect
keypoints [6.4587163310833855]
We propose a novel learned keypoint detection method to increase the number of correct matches for the task of non-rigid image correspondence.
We train an end-to-end convolutional neural network (CNN) to find keypoint locations that are more appropriate to the considered descriptor.
Experiments demonstrate that our method enhances the Mean Matching Accuracy of numerous descriptors when used in conjunction with our detection method.
We also apply our method to the complex real-world task of object retrieval, where our detector performs on par with the best keypoint detectors currently available for this task.
arXiv Detail & Related papers (2023-09-01T13:02:19Z)
- Learning Feature Matching via Matchable Keypoint-Assisted Graph Neural
Network [52.29330138835208]
Accurately matching local features between a pair of images is a challenging computer vision task.
Previous studies typically use attention-based graph neural networks (GNNs) with fully connected graphs over keypoints within and across images.
We propose MaKeGNN, a sparse attention-based GNN architecture which bypasses non-repeatable keypoints and leverages matchable ones to guide message passing.
arXiv Detail & Related papers (2023-07-04T02:50:44Z)
- 3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D
Point Clouds [95.54285993019843]
We propose a method for joint detection and tracking of multiple objects in 3D point clouds.
Our model exploits temporal information by employing multiple frames to detect objects and track them in a single network.
arXiv Detail & Related papers (2022-11-01T20:59:38Z)
- SASA: Semantics-Augmented Set Abstraction for Point-based 3D Object
Detection [78.90102636266276]
We propose a novel set abstraction method named Semantics-Augmented Set Abstraction (SASA).
Based on the estimated point-wise foreground scores, we then propose a semantics-guided point sampling algorithm that helps retain more important foreground points during down-sampling (see the illustrative sketch after this entry).
In practice, SASA proves effective in identifying valuable points related to foreground objects and in improving feature learning for point-based 3D detection.
arXiv Detail & Related papers (2022-01-06T08:54:47Z)
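As a rough illustration of semantics-guided down-sampling in the spirit of the SASA entry above (not the paper's exact algorithm), the sketch below biases farthest-point sampling by per-point foreground scores; the function name, seeding rule, and score weighting are assumptions made for illustration.

```python
import numpy as np

def score_weighted_fps(points, fg_scores, num_samples):
    """Farthest-point sampling biased by per-point foreground scores, so that
    likely object points survive down-sampling (illustrative only)."""
    n = points.shape[0]
    selected = np.empty(num_samples, dtype=int)
    min_sq_dist = np.full(n, np.inf)
    selected[0] = int(np.argmax(fg_scores))      # seed with the most confident foreground point
    for i in range(1, num_samples):
        diff = points - points[selected[i - 1]]
        # Track each point's squared distance to the nearest already-selected point.
        min_sq_dist = np.minimum(min_sq_dist, np.einsum('ij,ij->i', diff, diff))
        # Scale the farthest-point criterion by the foreground score.
        selected[i] = int(np.argmax(min_sq_dist * fg_scores))
    return selected
```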
- UPDesc: Unsupervised Point Descriptor Learning for Robust Registration [54.95201961399334]
UPDesc is an unsupervised method to learn point descriptors for robust point cloud registration.
We show that our learned descriptors yield superior performance over existing unsupervised methods.
arXiv Detail & Related papers (2021-08-05T17:11:08Z)
- HDD-Net: Hybrid Detector Descriptor with Mutual Interactive Learning [24.13425816781179]
Local feature extraction remains an active research area due to advances in fields such as SLAM, 3D reconstruction, and AR applications.
We propose a method that treats keypoint detection and description independently and focuses on their interaction in the learning process.
We show improvements over the state of the art in image matching on HPatches and in 3D reconstruction quality, while remaining on par on camera localisation tasks.
arXiv Detail & Related papers (2020-05-12T13:55:04Z)
- D3Feat: Joint Learning of Dense Detection and Description of 3D Local
Features [51.04841465193678]
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
arXiv Detail & Related papers (2020-03-06T12:51:09Z)
- 3D Object Detection From LiDAR Data Using Distance Dependent Feature
Extraction [7.04185696830272]
This work proposes an improvement for 3D object detectors by taking into account the properties of LiDAR point clouds over distance.
Results show that training separate networks for close-range and long-range objects boosts performance for all KITTI benchmark difficulties.
arXiv Detail & Related papers (2020-03-02T13:16:35Z)
- CAE-LO: LiDAR Odometry Leveraging Fully Unsupervised Convolutional
Auto-Encoder for Interest Point Detection and Feature Description [10.73965992177754]
We propose a fully unsupervised Convolutional Auto-Encoder based LiDAR Odometry (CAE-LO) that detects interest points from spherical ring data using a 2D CAE and extracts features from a multi-resolution voxel model using a 3D CAE.
We make several key contributions: 1) experiments based on the KITTI dataset show that our interest points can capture more local details to improve the matching success rate in unstructured scenarios, and our features outperform the state of the art by more than 50% in matching inlier ratio.
arXiv Detail & Related papers (2020-01-06T01:26:28Z)
- Learning and Matching Multi-View Descriptors for Registration of Point
Clouds [48.25586496457587]
We first propose a multi-view local descriptor, which is learned from the images of multiple views, for the description of 3D keypoints.
Then, we develop a robust matching approach that aims to reject outlier matches via efficient inference.
We have demonstrated that our approaches boost registration performance on public scanning and multi-view stereo datasets.
arXiv Detail & Related papers (2018-07-16T01:58:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.