CorrI2P: Deep Image-to-Point Cloud Registration via Dense Correspondence
- URL: http://arxiv.org/abs/2207.05483v1
- Date: Tue, 12 Jul 2022 11:49:31 GMT
- Title: CorrI2P: Deep Image-to-Point Cloud Registration via Dense Correspondence
- Authors: Siyu Ren, Yiming Zeng, Junhui Hou and Xiaodong Chen
- Abstract summary: We propose the first feature-based dense correspondence framework for addressing the image-to-point cloud registration problem, dubbed CorrI2P.
Specifically, given a pair of a 2D image and a 3D point cloud, we first transform them into high-dimensional feature space and feed the features into a symmetric overlapping region detector to determine the region where the image and point cloud overlap.
- Score: 51.91791056908387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motivated by the intuition that the critical step of localizing a 2D image in
the corresponding 3D point cloud is establishing 2D-3D correspondence between
them, we propose the first feature-based dense correspondence framework for
addressing the image-to-point cloud registration problem, dubbed CorrI2P, which
consists of three modules, i.e., feature embedding, symmetric overlapping
region detection, and pose estimation through the established correspondence.
Specifically, given a pair of a 2D image and a 3D point cloud, we first
transform them into high-dimensional feature space and feed the resulting
features into a symmetric overlapping region detector to determine the region
where the image and point cloud overlap each other. Then we use the features of
the overlapping regions to establish the 2D-3D correspondence before running
EPnP within RANSAC to estimate the camera's pose. Experimental results on KITTI
and NuScenes datasets show that our CorrI2P outperforms state-of-the-art
image-to-point cloud registration methods significantly. We will make the code
publicly available.
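To make the final stage of that pipeline concrete, the minimal Python sketch below follows the generic recipe the abstract describes: match features from the overlapping regions into 2D-3D correspondences, then estimate the camera pose with EPnP inside RANSAC. This is not the authors' released code; the mutual-nearest-neighbour matcher is a simple stand-in for CorrI2P's learned dense correspondences, pose recovery uses OpenCV's off-the-shelf cv2.solvePnPRansac with the EPnP flag, and the names (match_overlap_features, estimate_pose, img_feats, pc_feats, pixels, points, K) are assumptions made for illustration.

    import numpy as np
    import cv2

    def match_overlap_features(img_feats, pc_feats):
        # Mutual nearest-neighbour matching in feature space: a simple
        # stand-in for CorrI2P's learned dense correspondence step.
        a = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
        b = pc_feats / np.linalg.norm(pc_feats, axis=1, keepdims=True)
        sim = a @ b.T                       # cosine similarity, (num_pixels, num_points)
        nn_i2p = sim.argmax(axis=1)         # best point for each pixel
        nn_p2i = sim.argmax(axis=0)         # best pixel for each point
        mutual = nn_p2i[nn_i2p] == np.arange(a.shape[0])
        pix_idx = np.nonzero(mutual)[0]
        return pix_idx, nn_i2p[pix_idx]     # matched pixel / point indices

    def estimate_pose(pixels, points, K):
        # EPnP inside a RANSAC loop: a generic counterpart of the
        # pose-estimation stage described in the abstract.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            objectPoints=points.astype(np.float64),   # (N, 3) 3D points
            imagePoints=pixels.astype(np.float64),    # (N, 2) pixel coordinates
            cameraMatrix=K,
            distCoeffs=np.zeros(4),                   # assume a rectified image
            iterationsCount=1000,
            reprojectionError=3.0,                    # inlier threshold in pixels
            flags=cv2.SOLVEPNP_EPNP,
        )
        if not ok:
            raise RuntimeError("RANSAC did not find a consistent pose")
        R, _ = cv2.Rodrigues(rvec)                    # rotation vector -> 3x3 matrix
        return R, tvec, inliers

Given dense per-pixel and per-point features from the two branches, the call chain would be pix_idx, pt_idx = match_overlap_features(img_feats, pc_feats) followed by estimate_pose(pixels[pix_idx], points[pt_idx], K).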
Related papers
- Robust 3D Point Clouds Classification based on Declarative Defenders [18.51700931775295]
3D point clouds are unstructured and sparse, while 2D images are structured and dense.
In this paper, we explore three distinct algorithms for mapping 3D point clouds into 2D images.
The proposed approaches demonstrate superior accuracy and robustness against adversarial attacks.
arXiv Detail & Related papers (2024-10-13T01:32:38Z) - Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration [107.61458720202984]
This paper introduces a novel self-supervised learning framework for enhancing 3D perception in autonomous driving scenes.
We propose the learnable transformation alignment to bridge the domain gap between image and point cloud data.
We establish dense 2D-3D correspondences to estimate the rigid pose.
arXiv Detail & Related papers (2024-01-23T02:41:06Z) - EP2P-Loc: End-to-End 3D Point to 2D Pixel Localization for Large-Scale
Visual Localization [44.05930316729542]
We propose EP2P-Loc, a novel large-scale visual localization method for 3D point clouds.
To increase the number of inliers, we propose a simple algorithm to remove invisible 3D points in the image (see the projection sketch after this list).
For the first time in this task, we employ a differentiable PnP for end-to-end training.
arXiv Detail & Related papers (2023-09-14T07:06:36Z) - End-to-end 2D-3D Registration between Image and LiDAR Point Cloud for
Vehicle Localization [45.81385500855306]
We present I2PNet, a novel end-to-end 2D-3D registration network.
I2PNet directly registers the raw 3D point cloud with the 2D RGB image using differentiable modules with a unique target.
We conduct extensive localization experiments on the KITTI Odometry and nuScenes datasets.
arXiv Detail & Related papers (2023-06-20T07:28:40Z) - CheckerPose: Progressive Dense Keypoint Localization for Object Pose
Estimation with Graph Neural Network [66.24726878647543]
Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task.
Recent studies have shown the great potential of dense correspondence-based solutions.
We propose a novel pose estimation algorithm named CheckerPose, which improves on three main aspects.
arXiv Detail & Related papers (2023-03-29T17:30:53Z) - PCR-CG: Point Cloud Registration via Deep Explicit Color and Geometry [28.653015760036602]
We introduce a novel 3D point cloud registration module explicitly embedding the color signals into the geometry representation.
Our key contribution is a 2D-3D cross-modality learning algorithm that embeds the deep features learned from color signals to the geometry representation.
Our study reveals the significant advantage of correlating explicit deep color features with the point cloud in the registration task.
arXiv Detail & Related papers (2023-02-28T08:50:17Z) - DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and Lidar.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
arXiv Detail & Related papers (2021-04-08T04:27:32Z) - P2-Net: Joint Description and Detection of Local Features for Pixel and
Point Matching [78.18641868402901]
This work takes the initiative to establish fine-grained correspondences between 2D images and 3D point clouds.
An ultra-wide reception mechanism in combination with a novel loss function are designed to mitigate the intrinsic information variations between pixel and point local regions.
arXiv Detail & Related papers (2021-03-01T14:59:40Z) - Learning 2D-3D Correspondences To Solve The Blind Perspective-n-Point
Problem [98.92148855291363]
This paper proposes a deep CNN model which simultaneously solves for both the 6-DoF absolute camera pose and 2D-3D correspondences.
Tests on both real and simulated data have shown that our method substantially outperforms existing approaches.
arXiv Detail & Related papers (2020-03-15T04:17:30Z)