RARE: Refine Any Registration of Pairwise Point Clouds via Zero-Shot Learning
- URL: http://arxiv.org/abs/2507.19950v1
- Date: Sat, 26 Jul 2025 13:34:39 GMT
- Title: RARE: Refine Any Registration of Pairwise Point Clouds via Zero-Shot Learning
- Authors: Chengyu Zheng, Jin Huang, Honghua Chen, Mingqiang Wei
- Abstract summary: Recent research has demonstrated the potential of using diffusion features to establish semantic correspondences in images. We propose a novel zero-shot method for refining point cloud registration algorithms.
- Score: 23.462795323028658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research leveraging large-scale pretrained diffusion models has demonstrated the potential of using diffusion features to establish semantic correspondences in images. Inspired by advancements in diffusion-based techniques, we propose a novel zero-shot method for refining point cloud registration algorithms. Our approach leverages correspondences derived from depth images to enhance point feature representations, eliminating the need for a dedicated training dataset. Specifically, we first project the point cloud into depth maps from multiple perspectives and extract implicit knowledge from a pretrained diffusion network as depth diffusion features. These features are then integrated with geometric features obtained from existing methods to establish more accurate correspondences between point clouds. By leveraging these refined correspondences, our approach achieves significantly improved registration accuracy. Extensive experiments demonstrate that our method not only enhances the performance of existing point cloud registration techniques but also exhibits robust generalization capabilities across diverse datasets. Codes are available at https://github.com/zhengcy-lambo/RARE.git.
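The abstract outlines a pipeline: render the point cloud to depth maps, extract features from those maps, fuse them with geometric features, and re-estimate correspondences and the rigid alignment. The sketch below illustrates that flow in NumPy under loose assumptions: the orthographic depth projection, the feature-fusion weighting, and all function names are hypothetical stand-ins, and the pretrained depth-diffusion network is replaced by whatever per-point features are already available. It is not the authors' implementation.

```python
import numpy as np

def project_to_depth(points, resolution=32):
    """Orthographically project a point cloud (N, 3) onto the XY plane,
    keeping the nearest depth per pixel. A crude stand-in for the paper's
    multi-view depth rendering."""
    xy = points[:, :2]
    lo, hi = xy.min(0), xy.max(0)
    pix = ((xy - lo) / (hi - lo + 1e-9) * (resolution - 1)).astype(int)
    depth = np.full((resolution, resolution), np.inf)
    for (u, v), z in zip(pix, points[:, 2]):
        depth[v, u] = min(depth[v, u], z)
    depth[np.isinf(depth)] = 0.0
    return depth, pix

def fuse_features(geo_feat, depth_feat, w=0.5):
    """Concatenate L2-normalized geometric and depth-derived per-point
    features; w balances the two modalities (an assumed fusion rule)."""
    def norm(f):
        return f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-9)
    return np.hstack([(1 - w) * norm(geo_feat), w * norm(depth_feat)])

def correspondences(feat_src, feat_tgt):
    """Mutual nearest neighbours in feature space."""
    d = np.linalg.norm(feat_src[:, None] - feat_tgt[None], axis=2)
    nn_st = d.argmin(1)          # source -> target
    nn_ts = d.argmin(0)          # target -> source
    keep = np.where(nn_ts[nn_st] == np.arange(len(feat_src)))[0]
    return keep, nn_st[keep]

def kabsch(src, tgt):
    """Least-squares rigid transform (R, t) with R @ src_i + t ~ tgt_i."""
    cs, ct = src.mean(0), tgt.mean(0)
    H = (src - cs).T @ (tgt - ct)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, ct - R @ cs
```

In this sketch the refined correspondences from `correspondences` would feed `kabsch` to re-estimate the transform; the actual method uses diffusion features from the rendered depth maps rather than raw geometry.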
Related papers
- Efficient Point Clouds Upsampling via Flow Matching [16.948354780275388]
Existing diffusion models struggle with inefficiencies as they map Gaussian noise to real point clouds. We propose PUFM, a flow matching approach that directly maps sparse point clouds to their high-fidelity dense counterparts. Our method delivers superior upsampling quality with fewer sampling steps.
arXiv Detail & Related papers (2025-01-25T17:50:53Z)
- LPRnet: A self-supervised registration network for LiDAR and photogrammetric point clouds [38.42527849407057]
LiDAR and photogrammetry are active and passive remote sensing techniques for point cloud acquisition, respectively. Due to the fundamental differences in sensing mechanisms, spatial distributions and coordinate systems, their point clouds exhibit significant discrepancies in density, precision, noise, and overlap. This paper proposes a self-supervised registration network based on a masked autoencoder, focusing on heterogeneous LiDAR and photogrammetric point clouds.
arXiv Detail & Related papers (2025-01-10T02:36:37Z)
- DV-Matcher: Deformation-based Non-Rigid Point Cloud Matching Guided by Pre-trained Visual Features [1.3030624795284795]
We present DV-Matcher, a learning-based framework for estimating dense correspondences between non-rigidly deformable point clouds. Experimental results show that our method achieves state-of-the-art results in matching non-rigid point clouds in both near-isometric and heterogeneous shape collections.
arXiv Detail & Related papers (2024-08-16T07:02:19Z)
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, boosting 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- ComPC: Completing a 3D Point Cloud with 2D Diffusion Priors [52.72867922938023]
3D point clouds directly collected from objects through sensors are often incomplete due to self-occlusion. We propose a test-time framework for completing partial point clouds across unseen categories without any requirement for training.
arXiv Detail & Related papers (2024-04-10T08:02:17Z)
- Cross-Modal Information-Guided Network using Contrastive Learning for Point Cloud Registration [17.420425069785946]
We present a novel Cross-Modal Information-Guided Network (CMIGNet) for point cloud registration.
We first incorporate the projected images from the point clouds and fuse the cross-modal features using the attention mechanism.
We employ two contrastive learning strategies, namely overlapping contrastive learning and cross-modal contrastive learning.
arXiv Detail & Related papers (2023-11-02T12:56:47Z)
- FreeReg: Image-to-Point Cloud Registration Leveraging Pretrained Diffusion Models and Monocular Depth Estimators [37.39693977657165]
Matching cross-modality features between images and point clouds is a fundamental problem for image-to-point cloud registration.
We propose to unify the modality between images and point clouds by pretrained large-scale models first.
We show that the intermediate features, called diffusion features, extracted by depth-to-image diffusion models are semantically consistent between images and point clouds.
arXiv Detail & Related papers (2023-10-05T09:57:23Z)
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations by differentiable neural encoders.
The learned point-cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Revisiting Point Cloud Simplification: A Learnable Feature Preserving Approach [57.67932970472768]
Mesh and Point Cloud simplification methods aim to reduce the complexity of 3D models while retaining visual quality and relevant salient features.
We propose a fast point cloud simplification method by learning to sample salient points.
The proposed method relies on a graph neural network architecture trained to select an arbitrary, user-defined number of points from the input space and to re-arrange their positions so as to minimize the visual perception error.
arXiv Detail & Related papers (2021-09-30T10:23:55Z)
- An Efficient Hypergraph Approach to Robust Point Cloud Resampling [57.49817398852218]
This work investigates point cloud resampling based on hypergraph signal processing (HGSP).
We design hypergraph spectral filters to capture multi-lateral interactions among the signal nodes of point clouds.
Our test results validate the high efficacy of hypergraph characterization of point clouds.
arXiv Detail & Related papers (2021-03-11T23:19:54Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.