Quantity-Aware Coarse-to-Fine Correspondence for Image-to-Point Cloud
Registration
- URL: http://arxiv.org/abs/2307.07142v2
- Date: Thu, 18 Jan 2024 11:30:47 GMT
- Title: Quantity-Aware Coarse-to-Fine Correspondence for Image-to-Point Cloud
Registration
- Authors: Gongxin Yao, Yixin Xuan, Yiwei Chen and Yu Pan
- Abstract summary: Image-to-point cloud registration aims to determine the relative camera pose between an RGB image and a reference point cloud.
Matching individual points with pixels can be inherently ambiguous due to modality gaps.
We propose a framework to capture quantity-aware correspondences between local point sets and pixel patches.
- Score: 4.954184310509112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-to-point cloud registration aims to determine the relative camera pose
between an RGB image and a reference point cloud, serving as a general solution
for locating 3D objects from 2D observations. Matching individual points with
pixels can be inherently ambiguous due to modality gaps. To address this
challenge, we propose a framework to capture quantity-aware correspondences
between local point sets and pixel patches and refine the results at both the
point and pixel levels. This framework aligns the high-level semantics of point
sets and pixel patches to improve the matching accuracy. On a coarse scale, the
set-to-patch correspondence is expected to be influenced by the quantity of 3D
points. To achieve this, a novel supervision strategy is proposed to adaptively
quantify the degrees of correlation as continuous values. On a finer scale,
point-to-pixel correspondences are refined from a smaller search space through
a well-designed scheme, which incorporates both resampling and quantity-aware
priors. Particularly, a confidence sorting strategy is proposed to
proportionally select better correspondences at the final stage. Leveraging the
advantages of high-quality correspondences, the problem is successfully
resolved using an efficient Perspective-n-Point solver within the framework of
random sample consensus (RANSAC). Extensive experiments on the KITTI Odometry
and NuScenes datasets demonstrate the superiority of our method over the
state-of-the-art methods.
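The final stage described above (sort correspondences by confidence, keep a proportional subset, then solve pose) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the scoring values, the `keep_ratio` parameter, and the helper name `select_correspondences` are assumptions for the example.

```python
import numpy as np

def select_correspondences(pts3d, pts2d, conf, keep_ratio=0.5):
    """Proportionally keep the highest-confidence point-to-pixel matches.

    Hypothetical sketch of a confidence sorting step: the real method
    learns its confidence scores; here they are given as an input array.
    """
    order = np.argsort(-conf)                # highest confidence first
    k = max(4, int(len(conf) * keep_ratio))  # PnP needs at least 4 matches
    keep = order[:k]
    return pts3d[keep], pts2d[keep]

# Toy usage: 10 random 3D-2D matches with random confidences.
rng = np.random.default_rng(0)
p3, p2 = select_correspondences(rng.normal(size=(10, 3)),
                                rng.normal(size=(10, 2)),
                                rng.random(10))
```

The surviving matches would then be passed to a RANSAC-wrapped Perspective-n-Point solver (e.g. OpenCV's `cv2.solvePnPRansac`) to estimate the relative camera pose, as the abstract describes.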
Related papers
- Geometry-aware Feature Matching for Large-Scale Structure from Motion [10.645087195983201]
We introduce geometry cues in addition to color cues to fill gaps when there is less overlap in large-scale scenarios.
Our approach ensures that the denser correspondences from detector-free methods are geometrically consistent and more accurate.
It outperforms state-of-the-art feature matching methods on benchmark datasets.
arXiv Detail & Related papers (2024-09-03T21:41:35Z) - Multiway Point Cloud Mosaicking with Diffusion and Global Optimization [74.3802812773891]
We introduce a novel framework for multiway point cloud mosaicking (named Wednesday).
At the core of our approach is ODIN, a learned pairwise registration algorithm that identifies overlaps and refines attention scores.
Tested on four diverse, large-scale datasets, our method achieves state-of-the-art pairwise and rotation registration results by a large margin on all benchmarks.
arXiv Detail & Related papers (2024-03-30T17:29:13Z) - Differentiable Registration of Images and LiDAR Point Clouds with
VoxelPoint-to-Pixel Matching [58.10418136917358]
Cross-modality registration between 2D images from cameras and 3D point clouds from LiDARs is a crucial task in computer vision and robotics.
Previous methods estimate 2D-3D correspondences by matching point and pixel patterns learned by neural networks.
We learn a structured cross-modality matching solver to represent 3D features via a different latent pixel space.
arXiv Detail & Related papers (2023-12-07T05:46:10Z) - Grad-PU: Arbitrary-Scale Point Cloud Upsampling via Gradient Descent
with Learned Distance Functions [77.32043242988738]
We propose a new framework for accurate point cloud upsampling that supports arbitrary upsampling rates.
Our method first interpolates the low-res point cloud according to a given upsampling rate.
arXiv Detail & Related papers (2023-04-24T06:36:35Z) - CheckerPose: Progressive Dense Keypoint Localization for Object Pose
Estimation with Graph Neural Network [66.24726878647543]
Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task.
Recent studies have shown the great potential of dense correspondence-based solutions.
We propose a novel pose estimation algorithm named CheckerPose, which improves on three main aspects.
arXiv Detail & Related papers (2023-03-29T17:30:53Z) - ObjectMatch: Robust Registration using Canonical Object Correspondences [21.516657643120375]
We present ObjectMatch, a semantic and object-centric camera pose estimator for RGB-D SLAM pipelines.
In registering RGB-D sequences, our method outperforms cutting-edge SLAM baselines in challenging, low-frame-rate scenarios.
arXiv Detail & Related papers (2022-12-05T02:38:08Z) - DFC: Deep Feature Consistency for Robust Point Cloud Registration [0.4724825031148411]
We present a novel learning-based alignment network for point cloud registration in complex scenes.
We validate our approach on the 3DMatch dataset and the KITTI odometry dataset.
arXiv Detail & Related papers (2021-11-15T08:27:21Z) - PCAM: Product of Cross-Attention Matrices for Rigid Registration of
Point Clouds [79.99653758293277]
PCAM is a neural network whose key element is a pointwise product of cross-attention matrices.
We show that PCAM achieves state-of-the-art results among methods which, like us, solve steps (a) and (b) jointly via deepnets.
arXiv Detail & Related papers (2021-10-04T09:23:27Z) - DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and Lidar.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
arXiv Detail & Related papers (2021-04-08T04:27:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.