PointDSC: Robust Point Cloud Registration using Deep Spatial Consistency
- URL: http://arxiv.org/abs/2103.05465v1
- Date: Tue, 9 Mar 2021 14:56:08 GMT
- Title: PointDSC: Robust Point Cloud Registration using Deep Spatial Consistency
- Authors: Xuyang Bai, Zixin Luo, Lei Zhou, Hongkai Chen, Lei Li, Zeyu Hu, Hongbo
Fu, Chiew-Lan Tai
- Abstract summary: We present PointDSC, a novel deep neural network that explicitly incorporates spatial consistency for pruning outlier correspondences.
Our method outperforms the state-of-the-art hand-crafted and learning-based outlier rejection approaches on several real-world datasets.
- Score: 38.93610732090426
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Removing outlier correspondences is one of the critical steps for successful
feature-based point cloud registration. Despite the increasing popularity of
introducing deep learning methods in this field, spatial consistency, which is
essentially established by a Euclidean transformation between point clouds, has
received almost no individual attention in existing learning frameworks. In
this paper, we present PointDSC, a novel deep neural network that explicitly
incorporates spatial consistency for pruning outlier correspondences. First, we
propose a nonlocal feature aggregation module, weighted by both feature and
spatial coherence, for feature embedding of the input correspondences. Second,
we formulate a differentiable spectral matching module, supervised by pairwise
spatial compatibility, to estimate the inlier confidence of each correspondence
from the embedded features. With modest computation cost, our method
outperforms the state-of-the-art hand-crafted and learning-based outlier
rejection approaches on several real-world datasets by a significant margin. We
also show its wide applicability by combining PointDSC with different 3D local
descriptors.
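The differentiable spectral matching module described above builds on the classical spectral-matching idea: since a rigid (Euclidean) transform preserves pairwise distances, two correct correspondences must induce nearly equal distances in both point clouds, and the leading eigenvector of the resulting pairwise compatibility matrix scores each correspondence's inlier confidence. The following is a minimal NumPy sketch of that classical formulation, not the authors' network; the function names, the Gaussian-style compatibility kernel, and the `sigma` parameter are illustrative assumptions.

```python
import numpy as np

def spatial_compatibility(src, tgt, sigma=0.1):
    """Pairwise spatial compatibility of putative correspondences.

    src, tgt: (N, 3) arrays; correspondence i maps src[i] -> tgt[i].
    A rigid transform preserves distances, so for two inlier
    correspondences i, j the distances |src_i - src_j| and
    |tgt_i - tgt_j| must (nearly) agree.
    """
    d_src = np.linalg.norm(src[:, None] - src[None], axis=-1)
    d_tgt = np.linalg.norm(tgt[:, None] - tgt[None], axis=-1)
    diff = np.abs(d_src - d_tgt)
    # Soft compatibility: 1 for perfectly consistent pairs, 0 beyond sigma.
    M = np.clip(1.0 - diff ** 2 / sigma ** 2, 0.0, None)
    np.fill_diagonal(M, 0.0)
    return M

def inlier_confidence(M, iters=50):
    """Leading eigenvector of M via power iteration (spectral matching).

    Entries are large for correspondences belonging to the dominant
    mutually-consistent cluster, i.e. the inliers.
    """
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v) + 1e-12
    return v

# Toy example: 15 correspondences related by a rigid transform, 5 outliers.
rng = np.random.default_rng(0)
src = rng.uniform(-1.0, 1.0, (20, 3))
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tgt = src @ R.T + np.array([0.5, 0.2, -0.1])
tgt[15:] = rng.uniform(-1.0, 1.0, (5, 3))  # corrupt the last 5

conf = inlier_confidence(spatial_compatibility(src, tgt))
print(conf[:15].mean() > conf[15:].mean())  # inliers score higher
```

PointDSC makes this step differentiable and learns the feature embedding feeding it, but the hard-coded geometric check above is the spatial-consistency signal the network exploits.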
Related papers
- A Consistency-Aware Spot-Guided Transformer for Versatile and Hierarchical Point Cloud Registration [9.609585217048664]
We develop a consistency-aware spot-guided Transformer (CAST).
CAST incorporates a spot-guided cross-attention module to avoid interfering with irrelevant areas.
A lightweight fine matching module for both sparse keypoints and dense features can estimate the transformation accurately.
arXiv Detail & Related papers (2024-10-14T08:48:25Z) - D3Former: Jointly Learning Repeatable Dense Detectors and
Feature-enhanced Descriptors via Saliency-guided Transformer [14.056531181678467]
We introduce a saliency-guided transformer, referred to as D3Former, which entails the joint learning of repeatable Detectors and feature-enhanced Descriptors.
Our proposed method consistently outperforms state-of-the-art point cloud matching methods.
arXiv Detail & Related papers (2023-12-20T12:19:17Z) - PointCLM: A Contrastive Learning-based Framework for Multi-instance
Point Cloud Registration [4.969636478156443]
PointCLM is a contrastive learning-based framework for multi-instance point cloud registration.
Our method outperforms the state-of-the-art methods on both synthetic and real datasets by a large margin.
arXiv Detail & Related papers (2022-09-01T04:30:05Z) - Learning to Register Unbalanced Point Pairs [10.369750912567714]
Recent 3D registration methods can effectively handle large-scale or partially overlapping point pairs.
We present a novel 3D registration method, called UPPNet, for registering unbalanced point pairs.
arXiv Detail & Related papers (2022-07-09T08:03:59Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - DFC: Deep Feature Consistency for Robust Point Cloud Registration [0.4724825031148411]
We present a novel learning-based alignment network for complex alignment scenes.
We validate our approach on the 3DMatch dataset and the KITTI odometry dataset.
arXiv Detail & Related papers (2021-11-15T08:27:21Z) - Deep Hough Voting for Robust Global Registration [52.40611370293272]
We present an efficient framework for pairwise registration of real-world 3D scans, leveraging Hough voting in the 6D transformation parameter space.
Our method outperforms state-of-the-art methods on 3DMatch and 3DLoMatch benchmarks while achieving comparable performance on KITTI odometry dataset.
arXiv Detail & Related papers (2021-09-09T14:38:06Z) - UPDesc: Unsupervised Point Descriptor Learning for Robust Registration [54.95201961399334]
UPDesc is an unsupervised method to learn point descriptors for robust point cloud registration.
We show that our learned descriptors yield superior performance over existing unsupervised methods.
arXiv Detail & Related papers (2021-08-05T17:11:08Z) - SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine
Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z) - RPM-Net: Robust Point Matching using Learned Features [79.52112840465558]
RPM-Net is a less sensitive and more robust deep learning-based approach for rigid point cloud registration.
Unlike some existing methods, our RPM-Net handles missing correspondences and point clouds with partial visibility.
arXiv Detail & Related papers (2020-03-30T13:45:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.