PCR-CG: Point Cloud Registration via Deep Explicit Color and Geometry
- URL: http://arxiv.org/abs/2302.14418v2
- Date: Sat, 30 Dec 2023 03:27:01 GMT
- Title: PCR-CG: Point Cloud Registration via Deep Explicit Color and Geometry
- Authors: Yu Zhang, Junle Yu, Xiaolin Huang, Wenhui Zhou, Ji Hou
- Abstract summary: We introduce a novel 3D point cloud registration module explicitly embedding the color signals into the geometry representation.
Our key contribution is a 2D-3D cross-modality learning algorithm that embeds the deep features learned from color signals to the geometry representation.
Our study reveals a significant advantage of correlating explicit deep color features with the point cloud in the registration task.
- Score: 28.653015760036602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce PCR-CG: a novel 3D point cloud registration
module explicitly embedding the color signals into the geometry representation.
Different from previous methods that only use geometry representation, our
module is specifically designed to effectively correlate color into geometry
for the point cloud registration task. Our key contribution is a 2D-3D
cross-modality learning algorithm that embeds the deep features learned from
color signals to the geometry representation. With our designed 2D-3D
projection module, the pixel features in a square region centered at
correspondences perceived from images are effectively correlated with point
clouds. In this way, the overlapped regions can be inferred not only from point
cloud but also from the texture appearances. Adding color is non-trivial. We
compare against a variety of baselines designed for adding color to 3D, such as
exhaustively adding per-pixel features or RGB values in an implicit manner. We
leverage Predator [25] as the baseline method and incorporate our proposed
module onto it. To validate the effectiveness of 2D features, we ablate
different 2D pre-trained networks and show a positive correlation between the
pre-trained weights and the task performance. Our experimental results indicate
a significant improvement of 6.5% registration recall over the baseline method
on the 3DLoMatch benchmark. We additionally evaluate our approach on SOTA
methods and observe consistent improvements, such as an improvement of 2.4%
registration recall over GeoTransformer as well as 3.5% over CoFiNet. Our study
reveals a significant advantage of correlating explicit deep color features
with the point cloud in the registration task.
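The abstract describes a 2D-3D projection module that correlates pixel features in a square region, centered at correspondences perceived from images, with the point cloud. The paper gives no code here, so the following is a minimal sketch of one plausible reading of that step, assuming a pinhole camera model with intrinsics `K`, a CNN feature map `feat_map`, and average pooling over the square patch; all function names and the fusion-by-concatenation choice are assumptions, not the authors' implementation.

```python
import numpy as np

def project_points(points, K):
    """Project Nx3 camera-frame points to pixel coordinates via intrinsics K."""
    uv = points @ K.T
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

def sample_square_features(feat_map, uv, half=1):
    """Average-pool a (2*half+1)^2 pixel patch of an HxWxC feature map
    around each projected point; points falling outside the image get zeros."""
    H, W, C = feat_map.shape
    out = np.zeros((uv.shape[0], C), dtype=feat_map.dtype)
    for i, (u, v) in enumerate(np.round(uv).astype(int)):
        if half <= u < W - half and half <= v < H - half:
            patch = feat_map[v - half:v + half + 1, u - half:u + half + 1]
            out[i] = patch.mean(axis=(0, 1))
    return out

# Toy setup: a 32x32 image whose feature map is constant 2.0 in all 4 channels.
K = np.array([[100., 0., 16.],
              [0., 100., 16.],
              [0., 0., 1.]])
points = np.array([[0., 0., 1.],    # projects to pixel (16, 16), inside the image
                   [10., 0., 1.]])  # projects far outside the image bounds
feat_map = np.full((32, 32, 4), 2.0)  # stand-in for a 2D CNN feature map

uv = project_points(points, K)
color_feat = sample_square_features(feat_map, uv, half=1)
# Fuse per-point geometry (here raw xyz) with the sampled color features.
fused = np.concatenate([points, color_feat], axis=1)
```

In this toy run the first point receives the patch mean (2.0 per channel) while the out-of-view point gets a zero color feature, illustrating how only points with valid image correspondences contribute texture cues to the fused representation.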
Related papers
- Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration [107.61458720202984]
This paper introduces a novel self-supervised learning framework for enhancing 3D perception in autonomous driving scenes.
We propose the learnable transformation alignment to bridge the domain gap between image and point cloud data.
We establish dense 2D-3D correspondences to estimate the rigid pose.
arXiv Detail & Related papers (2024-01-23T02:41:06Z) - CheckerPose: Progressive Dense Keypoint Localization for Object Pose
Estimation with Graph Neural Network [66.24726878647543]
Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task.
Recent studies have shown the great potential of dense correspondence-based solutions.
We propose a novel pose estimation algorithm named CheckerPose, which improves on three main aspects.
arXiv Detail & Related papers (2023-03-29T17:30:53Z) - GQE-Net: A Graph-based Quality Enhancement Network for Point Cloud Color
Attribute [51.4803148196217]
We propose a graph-based quality enhancement network (GQE-Net) to reduce color distortion in point clouds.
GQE-Net uses geometry information as an auxiliary input and graph convolution blocks to extract local features efficiently.
Experimental results show that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-03-24T02:33:45Z) - Improving RGB-D Point Cloud Registration by Learning Multi-scale Local
Linear Transformation [38.64501645574878]
Point cloud registration aims at estimating the geometric transformation between two point cloud scans.
Recent point cloud registration methods have tried to apply RGB-D data to achieve more accurate correspondence.
We propose a new Geometry-Aware Visual Feature Extractor (GAVE) that employs multi-scale local linear transformation.
arXiv Detail & Related papers (2022-08-31T14:36:09Z) - CorrI2P: Deep Image-to-Point Cloud Registration via Dense Correspondence [51.91791056908387]
We propose the first feature-based dense correspondence framework for addressing the image-to-point cloud registration problem, dubbed CorrI2P.
Specifically, given a pair of a 2D image and a 3D point cloud, we first transform them into a high-dimensional feature space, then feed the features into a symmetric overlapping region detector to determine the region where the image and point cloud overlap.
arXiv Detail & Related papers (2022-07-12T11:49:31Z) - Learning Geometry-Disentangled Representation for Complementary
Understanding of 3D Object Point Cloud [50.56461318879761]
We propose Geometry-Disentangled Attention Network (GDANet) for 3D image processing.
GDANet disentangles point clouds into contour and flat part of 3D objects, respectively denoted by sharp and gentle variation components.
Experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves the state-of-the-arts with fewer parameters.
arXiv Detail & Related papers (2020-12-20T13:35:00Z) - ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named point geometry image (PGI)
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.