Geometric Transformer for Fast and Robust Point Cloud Registration
- URL: http://arxiv.org/abs/2202.06688v1
- Date: Mon, 14 Feb 2022 13:26:09 GMT
- Title: Geometric Transformer for Fast and Robust Point Cloud Registration
- Authors: Zheng Qin, Hao Yu, Changjian Wang, Yulan Guo, Yuxing Peng and Kai Xu
- Abstract summary: We study the problem of extracting accurate correspondences for point cloud registration.
Recent keypoint-free methods bypass the detection of repeatable keypoints, which is difficult in low-overlap scenarios.
We propose Geometric Transformer to learn geometric features for robust superpoint matching.
- Score: 53.10568889775553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of extracting accurate correspondences for point cloud
registration. Recent keypoint-free methods bypass the detection of repeatable
keypoints, which is difficult in low-overlap scenarios, showing great potential
in registration. They seek correspondences over downsampled superpoints, which
are then propagated to dense points. Superpoints are matched based on whether
their neighboring patches overlap. Such sparse and loose matching requires
contextual features capturing the geometric structure of the point clouds. We
propose Geometric Transformer to learn geometric features for robust superpoint
matching. It encodes pair-wise distances and triplet-wise angles, making it
robust in low-overlap cases and invariant to rigid transformation. This
simple design attains surprisingly high matching accuracy, such that no
RANSAC is required to estimate the alignment transformation, leading to a
$100\times$ acceleration. Our method improves the inlier ratio by
17\%$\sim$30\% and the registration recall by over 7\% on the challenging
3DLoMatch benchmark. The code and models will be released at
\url{https://github.com/qinzheng93/GeoTransformer}.
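The invariance property is straightforward to verify: pair-wise distances and triplet-wise angles are scalars that any rigid transformation preserves. The sketch below is a minimal illustration, not the authors' released implementation; the neighbor count `k` is an arbitrary choice. It computes both cues for a set of superpoints and checks that they survive a random rotation and translation:

```python
import numpy as np

def geometric_structure_cues(points, k=3):
    """Compute rigid-invariant cues between all superpoint pairs:
    pair-wise distances and, for each pair (i, j), the angles between
    the segment i->j and the segments from i to its k nearest
    neighbors (the triplet-wise angles).
    points: (N, 3) coordinates. Returns (N, N) dists and (N, N, k) angles."""
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    knn = np.argsort(dists, axis=1)[:, 1:k + 1]  # skip self at index 0

    n = points.shape[0]
    angles = np.zeros((n, n, k))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            v_ij = points[j] - points[i]
            for t, x in enumerate(knn[i]):
                v_ix = points[x] - points[i]
                c = v_ij @ v_ix / (dists[i, j] * dists[i, x] + 1e-12)
                angles[i, j, t] = np.arccos(np.clip(c, -1.0, 1.0))
    return dists, angles

# Sanity check: both cues survive an arbitrary rotation + translation.
rng = np.random.default_rng(0)
pts = rng.random((8, 3))
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
d0, a0 = geometric_structure_cues(pts)
d1, a1 = geometric_structure_cues(pts @ R.T + np.array([1.0, -2.0, 0.5]))
assert np.allclose(d0, d1) and np.allclose(a0, a1)
```

In the paper these cues are encoded into the superpoint features used by the transformer; the sketch only computes the raw scalars and demonstrates the invariance the abstract highlights.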
Related papers
- 2D3D-MATR: 2D-3D Matching Transformer for Detection-free Registration between Images and Point Clouds [38.425876064671435]
We propose 2D3D-MATR, a detection-free method for accurate and robust registration between images and point clouds.
Our method adopts a coarse-to-fine pipeline where it first computes coarse correspondences between downsampled patches of the input image and the point cloud.
To resolve the scale ambiguity in patch matching, we construct a multi-scale pyramid for each image patch and learn to find for each point patch the best matching image patch at a proper resolution level.
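As a concrete reading of this multi-scale idea, the following sketch picks, for one point-patch descriptor, the best-matching image patch across pyramid levels by cosine similarity. The function name, shapes, and similarity measure are illustrative assumptions, not the paper's API:

```python
import numpy as np

def best_image_patch(point_feat, image_pyramid):
    """For one point-patch descriptor, score the image-patch descriptors
    at every pyramid level and keep the best (level, patch) pair, so the
    resolution level itself is chosen by the match quality.
    point_feat: (C,) descriptor; image_pyramid: list of (M_l, C) arrays."""
    best_level, best_idx, best_sim = -1, -1, -np.inf
    for level, feats in enumerate(image_pyramid):
        # cosine similarity against every image patch at this level
        sims = feats @ point_feat / (
            np.linalg.norm(feats, axis=1) * np.linalg.norm(point_feat) + 1e-12)
        idx = int(np.argmax(sims))
        if sims[idx] > best_sim:
            best_level, best_idx, best_sim = level, idx, float(sims[idx])
    return best_level, best_idx, best_sim
```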
arXiv Detail & Related papers (2023-08-10T16:10:54Z) - GeoTransformer: Fast and Robust Point Cloud Registration with Geometric
Transformer [63.85771838683657]
We study the problem of extracting accurate correspondences for point cloud registration.
Recent keypoint-free methods have shown great potential by bypassing the detection of repeatable keypoints.
We propose Geometric Transformer, or GeoTransformer for short, to learn geometric features for robust superpoint matching.
arXiv Detail & Related papers (2023-07-25T02:36:04Z) - Robust Point Cloud Registration Framework Based on Deep Graph
Matching(TPAMI Version) [13.286247750893681]
3D point cloud registration is a fundamental problem in computer vision and robotics.
We propose a novel deep graph matching-based framework for point cloud registration.
arXiv Detail & Related papers (2022-11-09T06:05:25Z) - Stratified Transformer for 3D Point Cloud Segmentation [89.9698499437732]
Stratified Transformer is able to capture long-range contexts and demonstrates strong generalization ability and high performance.
To combat the challenges posed by irregular point arrangements, we propose first-layer point embedding to aggregate local information.
Experiments demonstrate the effectiveness and superiority of our method on S3DIS, ScanNetv2 and ShapeNetPart datasets.
arXiv Detail & Related papers (2022-03-28T05:35:16Z) - Robust Partial-to-Partial Point Cloud Registration in a Full Range [12.86951061306046]
We propose Graph Matching Consensus Network (GMCNet), which estimates pose-invariant correspondences for full-range partial-to-partial point cloud registration (PPR).
GMCNet encodes point descriptors for each point cloud individually, without using cross-contextual information or ground-truth correspondences for training.
arXiv Detail & Related papers (2021-11-30T17:56:24Z) - DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and the LiDAR.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
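The reformulation can be sketched as follows: a classifier labels each 3D point as falling inside or outside the camera frustum, and the pose is then recovered by finding the transformation whose projection agrees with those labels. The cost function below is a minimal illustration; the Rodrigues parameterization, intrinsics `K`, and the 0/1 disagreement cost are assumptions, not the paper's exact objective:

```python
import numpy as np

def projection_consistency_cost(rvec, tvec, points, inside_labels, K, hw):
    """Score a candidate camera pose by how well projecting `points`
    agrees with per-point in/out-of-frustum labels from a classifier.
    rvec: (3,) axis-angle rotation; tvec: (3,) translation;
    points: (N, 3); inside_labels: (N,) bool; K: (3, 3) intrinsics
    with last row [0, 0, 1]; hw: (height, width) of the image."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        skew = np.array([[0, -k[2], k[1]],
                         [k[2], 0, -k[0]],
                         [-k[1], k[0], 0]])
        # Rodrigues formula: axis-angle -> rotation matrix
        R = np.eye(3) + np.sin(theta) * skew + (1 - np.cos(theta)) * (skew @ skew)

    cam = points @ R.T + tvec                      # points in camera frame
    z = cam[:, 2]
    uv = (cam @ K.T)[:, :2] / np.maximum(z[:, None], 1e-6)
    h, w = hw
    inside = (z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
                     & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    # fraction of points whose projection disagrees with the predicted label
    return float(np.mean(inside != inside_labels))
```

A pose estimate is then the minimizer of this cost over (rvec, tvec), e.g. via a derivative-free optimizer such as scipy.optimize.minimize with method="Nelder-Mead" on the concatenated 6-vector.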
arXiv Detail & Related papers (2021-04-08T04:27:32Z) - R-PointHop: A Green, Accurate and Unsupervised Point Cloud Registration
Method [64.86292006892093]
An unsupervised 3D point cloud registration method, called R-PointHop, is proposed in this work.
Experiments conducted on the ModelNet40 and Stanford Bunny datasets demonstrate the effectiveness of R-PointHop on the 3D point cloud registration task.
arXiv Detail & Related papers (2021-03-15T04:12:44Z) - Robust Point Cloud Registration Framework Based on Deep Graph Matching [5.865029600972316]
3D point cloud registration is a fundamental problem in computer vision and robotics.
We propose a novel deep graph matching-based framework for point cloud registration.
arXiv Detail & Related papers (2021-03-07T04:20:29Z) - RPM-Net: Robust Point Matching using Learned Features [79.52112840465558]
RPM-Net is a deep learning-based approach to rigid point cloud registration that is less sensitive to initialization and more robust than prior methods.
Unlike some existing methods, our RPM-Net handles missing correspondences and point clouds with partial visibility.
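A common way to tolerate missing correspondences, in the spirit of robust point matching, is a soft assignment with an explicit outlier bin. The sketch below is a hedged illustration, not RPM-Net's exact layer; the slack initialization, iteration count, and function name are arbitrary choices. A Sinkhorn-style normalization lets unmatched points push their mass into a slack row or column:

```python
import numpy as np
from scipy.special import logsumexp

def soft_assignment_with_slack(score, n_iters=20):
    """Sinkhorn-style soft matching between N source and M target points.
    The extra slack row/column acts as an 'outlier bin' so points with
    no true correspondence need not match anything real.
    score: (N, M) matching log-scores.
    Returns an (N+1, M+1) matrix of soft assignment probabilities."""
    n, m = score.shape
    log_a = np.zeros((n + 1, m + 1))
    log_a[:n, :m] = score  # slack entries start at log-score 0
    for _ in range(n_iters):
        log_a -= logsumexp(log_a, axis=1, keepdims=True)  # row normalize
        log_a -= logsumexp(log_a, axis=0, keepdims=True)  # column normalize
    return np.exp(log_a)

# Correspondences can then be read off as argmax over the real columns,
# discarding points whose mass ends up in the slack bin.
```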
arXiv Detail & Related papers (2020-03-30T13:45:27Z)