Cross-PCR: A Robust Cross-Source Point Cloud Registration Framework
- URL: http://arxiv.org/abs/2412.18873v1
- Date: Wed, 25 Dec 2024 11:14:59 GMT
- Title: Cross-PCR: A Robust Cross-Source Point Cloud Registration Framework
- Authors: Guiyu Zhao, Zhentao Guo, Zewen Du, Hongbin Ma
- Abstract summary: We propose a density-robust feature extraction and matching scheme to achieve robust and accurate cross-source registration. On the challenging Kinect-LiDAR scene in the cross-source 3DCSR dataset, our method improves feature matching recall by 63.5 percentage points (pp) and registration recall by 57.6 pp.
- Score: 0.7499722271664147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the density inconsistency and distribution difference between cross-source point clouds, previous methods fail in cross-source point cloud registration. We propose a density-robust feature extraction and matching scheme to achieve robust and accurate cross-source registration. To address the density inconsistency between cross-source data, we introduce a density-robust encoder for extracting density-robust features. To tackle the issue of challenging feature matching and few correct correspondences, we adopt a loose-to-strict matching pipeline with a "loose generation, strict selection" idea. Under this pipeline, we employ a one-to-many strategy to loosely generate initial correspondences. Subsequently, high-quality correspondences are strictly selected to achieve robust registration through sparse matching and dense matching. On the challenging Kinect-LiDAR scene in the cross-source 3DCSR dataset, our method improves feature matching recall by 63.5 percentage points (pp) and registration recall by 57.6 pp. It also achieves the best performance on 3DMatch, while maintaining robustness under diverse downsampling densities.
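As a toy illustration of the "loose generation, strict selection" idea described above (a minimal sketch with illustrative names and thresholds, not the authors' actual pipeline), each source descriptor loosely proposes several nearest target descriptors, and only proposals that pass a mutual-nearest-neighbour and similarity check survive:

```python
import numpy as np

def loose_to_strict_match(feat_src, feat_tgt, k=3, ratio=0.9):
    """Sketch of a 'loose generation, strict selection' matching scheme.

    Loose stage: each source feature proposes its k nearest target
    features (one-to-many).  Strict stage: keep only proposals that
    are mutual nearest neighbours and pass a similarity cutoff.
    """
    # Cosine similarity between L2-normalised descriptors.
    fs = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    ft = feat_tgt / np.linalg.norm(feat_tgt, axis=1, keepdims=True)
    sim = fs @ ft.T                                  # (N, M)

    # Loose generation: top-k target candidates per source point.
    topk = np.argsort(-sim, axis=1)[:, :k]           # (N, k)

    # Strict selection: mutual nearest neighbour + similarity cutoff.
    best_src_for_tgt = np.argmax(sim, axis=0)        # (M,)
    matches = []
    for i, cands in enumerate(topk):
        for j in cands:
            if best_src_for_tgt[j] == i and sim[i, j] >= ratio * sim[i].max():
                matches.append((i, int(j)))
    return matches
```

The loose one-to-many stage keeps recall high when descriptors are degraded by density differences; the strict stage restores precision before pose estimation.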
Related papers
- Cross3DReg: Towards a Large-scale Real-world Cross-source Point Cloud Registration Benchmark [57.42211080221526]
Cross-source point cloud registration, which aims to align point cloud data from different sensors, is a fundamental task in 3D vision.
The lack of publicly available large-scale real-world datasets for training deep registration models, together with the inherent differences in point clouds captured by multiple sensors, poses challenges.
We construct Cross3DReg, currently the largest real-world multi-modal cross-source point cloud registration dataset.
A visual-geometric attention guided matching module is proposed to enhance the consistency of cross-source point cloud features.
arXiv Detail & Related papers (2025-09-08T09:01:13Z) - Fully-Geometric Cross-Attention for Point Cloud Registration [51.865371511201765]
Point cloud registration approaches often fail when the overlap between point clouds is low due to noisy point correspondences.
This work introduces a novel cross-attention mechanism tailored for Transformer-based architectures that tackles this problem.
We integrate the Gromov-Wasserstein distance into the cross-attention formulation to jointly compute distances between points across different point clouds.
At the point level, we also devise a self-attention mechanism that aggregates the local geometric structure information into point features for fine matching.
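For context, the Gromov-Wasserstein objective that such a formulation draws on compares intra-cloud distance structures under a soft coupling P. The sketch below (a generic numpy formulation, not the paper's code) evaluates that cost:

```python
import numpy as np

def gw_cost(D_src, D_tgt, P):
    """Gromov-Wasserstein objective for a soft coupling P:
    sum_{i,j,k,l} (D_src[i,k] - D_tgt[j,l])**2 * P[i,j] * P[k,l],
    expanded into three einsum terms to avoid building a 4-D tensor.
    """
    t1 = np.einsum('ik,ij,kl->', D_src**2, P, P)
    t2 = np.einsum('jl,ij,kl->', D_tgt**2, P, P)
    t3 = np.einsum('ik,jl,ij,kl->', D_src, D_tgt, P, P)
    return t1 + t2 - 2.0 * t3
```

A coupling that matches points with consistent intra-cloud geometry drives the cost toward zero, which is why this distance is a natural signal to inject into cross-attention between point clouds.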
arXiv Detail & Related papers (2025-02-12T10:44:36Z) - VRHCF: Cross-Source Point Cloud Registration via Voxel Representation and Hierarchical Correspondence Filtering [0.7499722271664147]
We present a novel framework for point cloud registration with broad applicability.
In cross-source point cloud registration, our method attains the best registration recall (RR) on the 3DCSR dataset, demonstrating a 9.3 percentage point improvement.
arXiv Detail & Related papers (2024-03-15T08:00:29Z) - HybridFusion: LiDAR and Vision Cross-Source Point Cloud Fusion [15.94976936555104]
We propose a cross-source point cloud fusion algorithm called HybridFusion.
It can register cross-source dense point clouds from different viewing angles in large outdoor scenes.
The proposed approach is evaluated comprehensively through qualitative and quantitative experiments.
arXiv Detail & Related papers (2023-04-10T10:54:54Z) - CrossLoc3D: Aerial-Ground Cross-Source 3D Place Recognition [45.16530801796705]
CrossLoc3D is a novel 3D place recognition method that solves a large-scale point matching problem in a cross-source setting.
We present CS-Campus3D, the first 3D aerial-ground cross-source dataset consisting of point cloud data from both aerial and ground LiDAR scans.
arXiv Detail & Related papers (2023-03-31T02:50:52Z) - Unsupervised Deep Probabilistic Approach for Partial Point Cloud Registration [74.53755415380171]
Deep point cloud registration methods struggle with partial overlaps and rely on labeled data.
We propose UDPReg, an unsupervised deep probabilistic registration framework for point clouds with partial overlaps.
Our UDPReg achieves competitive performance on the 3DMatch/3DLoMatch and ModelNet/ModelLoNet benchmarks.
arXiv Detail & Related papers (2023-03-23T14:18:06Z) - Reliability-Adaptive Consistency Regularization for Weakly-Supervised Point Cloud Segmentation [80.07161039753043]
Weakly-supervised point cloud segmentation with extremely limited labels is desirable to alleviate the expensive costs of collecting densely annotated 3D points.
This paper explores applying consistency regularization, commonly used in weakly-supervised learning, to point clouds with multiple data-specific augmentations.
We propose a novel Reliability-Adaptive Consistency Network (RAC-Net) to use both prediction confidence and model uncertainty to measure the reliability of pseudo labels.
arXiv Detail & Related papers (2023-03-09T10:41:57Z) - Overlap-guided Gaussian Mixture Models for Point Cloud Registration [61.250516170418784]
Probabilistic 3D point cloud registration methods have shown competitive performance in overcoming noise, outliers, and density variations.
This paper proposes a novel overlap-guided probabilistic registration approach that computes the optimal transformation from matched Gaussian Mixture Model (GMM) parameters.
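As background on this recipe (a standard weighted Kabsch solve on matched component means, not the paper's implementation), once GMM components are put in correspondence, the rigid transform follows in closed form:

```python
import numpy as np

def kabsch_from_matched_means(mu_src, mu_tgt, weights=None):
    """Closed-form rigid transform (R, t) aligning matched 3-D Gaussian
    means so that mu_tgt ≈ R @ mu_src + t (weighted Kabsch)."""
    w = np.ones(len(mu_src)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    c_src = w @ mu_src                      # weighted centroids
    c_tgt = w @ mu_tgt
    # Weighted cross-covariance of the centred means.
    H = (mu_src - c_src).T @ (w[:, None] * (mu_tgt - c_tgt))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_tgt - R @ c_src
    return R, t
```

In an overlap-guided setting, the weights would come from the estimated overlap of each matched component pair.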
arXiv Detail & Related papers (2022-10-17T08:02:33Z) - Learning to Register Unbalanced Point Pairs [10.369750912567714]
Recent 3D registration methods can effectively handle large-scale or partially overlapping point pairs.
We present a novel 3D registration method, called UPPNet, for unbalanced point pairs.
arXiv Detail & Related papers (2022-07-09T08:03:59Z) - CoFiNet: Reliable Coarse-to-fine Correspondences for Robust Point Cloud Registration [35.57761839361479]
CoFiNet - Coarse-to-Fine Network - extracts hierarchical correspondences from coarse to fine without keypoint detection.
Our model learns to match down-sampled nodes whose vicinity points share more overlap.
Point correspondences are then refined from the overlap areas of corresponding patches, by a density-adaptive matching module.
arXiv Detail & Related papers (2021-10-26T23:05:00Z) - Deep Hough Voting for Robust Global Registration [52.40611370293272]
We present an efficient framework for pairwise registration of real-world 3D scans, leveraging Hough voting in the 6D transformation parameter space.
Our method outperforms state-of-the-art methods on the 3DMatch and 3DLoMatch benchmarks while achieving comparable performance on the KITTI odometry dataset.
arXiv Detail & Related papers (2021-09-09T14:38:06Z) - Mapping of Sparse 3D Data using Alternating Projection [35.735398244213584]
We propose a novel technique to register sparse 3D scans in the absence of texture.
Existing methods such as KinectFusion heavily rely on dense point clouds.
We propose the use of a two-step alternating projection algorithm by formulating the registration as the simultaneous satisfaction of intersection and rigidity constraints.
arXiv Detail & Related papers (2020-10-04T17:40:30Z) - RPM-Net: Robust Point Matching using Learned Features [79.52112840465558]
RPM-Net is a deep learning-based approach for rigid point cloud registration that is less sensitive to initialization and more robust.
Unlike some existing methods, our RPM-Net handles missing correspondences and point clouds with partial visibility.
arXiv Detail & Related papers (2020-03-30T13:45:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.