DeepI2P: Image-to-Point Cloud Registration via Deep Classification
- URL: http://arxiv.org/abs/2104.03501v1
- Date: Thu, 8 Apr 2021 04:27:32 GMT
- Title: DeepI2P: Image-to-Point Cloud Registration via Deep Classification
- Authors: Jiaxin Li, Gim Hee Lee
- Abstract summary: DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and Lidar.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
- Score: 71.3121124994105
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents DeepI2P: a novel approach for cross-modality registration
between an image and a point cloud. Given an image (e.g. from an RGB camera) and
a general point cloud (e.g. from a 3D Lidar scanner) captured at different
locations in the same scene, our method estimates the relative rigid
transformation between the coordinate frames of the camera and Lidar. Learning
common feature descriptors to establish correspondences for the registration is
inherently challenging due to the lack of appearance and geometric correlations
across the two modalities. We circumvent the difficulty by converting the
registration problem into a classification and inverse camera projection
optimization problem. A classification neural network is designed to label
whether the projection of each point in the point cloud is within or beyond the
camera frustum. These labeled points are subsequently passed into a novel
inverse camera projection solver to estimate the relative pose. Extensive
experimental results on the Oxford RobotCar and KITTI datasets demonstrate the
feasibility of our approach. Our source code is available at
https://github.com/lijx10/DeepI2P
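As a concrete illustration of the classification idea (a minimal sketch under our own assumptions, not the authors' implementation), the in/beyond-frustum label of each Lidar point is a purely geometric function of a candidate pose, the camera intrinsics, and the image size:

```python
import numpy as np

def frustum_labels(points, R, t, K, width, height):
    """Label each 3D point: 1 if it projects inside the image, else 0.

    points: (N, 3) Lidar points; R: (3, 3) rotation and t: (3,) translation
    mapping Lidar to camera coordinates; K: (3, 3) intrinsics; width/height:
    image size in pixels. All names here are illustrative placeholders.
    """
    cam = points @ R.T + t                      # Lidar frame -> camera frame
    in_front = cam[:, 2] > 0                    # discard points behind the camera
    uv = cam @ K.T                              # pinhole projection (homogeneous)
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-9, None)
    in_bounds = ((uv[:, 0] >= 0) & (uv[:, 0] < width) &
                 (uv[:, 1] >= 0) & (uv[:, 1] < height))
    return (in_front & in_bounds).astype(np.int32)
```

At test time the network predicts such labels without access to the true pose; the inverse camera projection solver then searches for the rotation and translation under which the geometric labels computed as above best agree with the network's predictions.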
Related papers
- Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration [107.61458720202984]
This paper introduces a novel self-supervised learning framework for enhancing 3D perception in autonomous driving scenes.
We propose the learnable transformation alignment to bridge the domain gap between image and point cloud data.
We establish dense 2D-3D correspondences to estimate the rigid pose.
arXiv Detail & Related papers (2024-01-23T02:41:06Z)
- 2D3D-MATR: 2D-3D Matching Transformer for Detection-free Registration between Images and Point Clouds [38.425876064671435]
We propose 2D3D-MATR, a detection-free method for accurate and robust registration between images and point clouds.
Our method adopts a coarse-to-fine pipeline where it first computes coarse correspondences between downsampled patches of the input image and the point cloud.
To resolve the scale ambiguity in patch matching, we construct a multi-scale pyramid for each image patch and learn to find for each point patch the best matching image patch at a proper resolution level.
arXiv Detail & Related papers (2023-08-10T16:10:54Z)
- Quantity-Aware Coarse-to-Fine Correspondence for Image-to-Point Cloud Registration [4.954184310509112]
Image-to-point cloud registration aims to determine the relative camera pose between an RGB image and a reference point cloud.
Matching individual points with pixels can be inherently ambiguous due to modality gaps.
We propose a framework to capture quantity-aware correspondences between local point sets and pixel patches.
arXiv Detail & Related papers (2023-07-14T03:55:54Z)
- CheckerPose: Progressive Dense Keypoint Localization for Object Pose Estimation with Graph Neural Network [66.24726878647543]
Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task.
Recent studies have shown the great potential of dense correspondence-based solutions.
We propose a novel pose estimation algorithm named CheckerPose, which improves on prior dense correspondence methods in three main aspects.
arXiv Detail & Related papers (2023-03-29T17:30:53Z)
- CorrI2P: Deep Image-to-Point Cloud Registration via Dense Correspondence [51.91791056908387]
We propose the first feature-based dense correspondence framework for addressing the image-to-point cloud registration problem, dubbed CorrI2P.
Specifically, given a pair of a 2D image and a 3D point cloud, we first transform them into high-dimensional feature space, then feed the features into a symmetric overlapping region detector to determine the region where the image and point cloud overlap.
arXiv Detail & Related papers (2022-07-12T11:49:31Z)
- PCAM: Product of Cross-Attention Matrices for Rigid Registration of Point Clouds [79.99653758293277]
PCAM is a neural network whose key element is a pointwise product of cross-attention matrices.
We show that PCAM achieves state-of-the-art results among methods that, like ours, solve correspondence finding and pose estimation jointly via deep networks.
arXiv Detail & Related papers (2021-10-04T09:23:27Z)
- Robust Place Recognition using an Imaging Lidar [45.37172889338924]
We propose a methodology for robust, real-time place recognition using an imaging lidar.
Our method is truly invariant and can handle both reverse revisiting and upside-down revisiting.
arXiv Detail & Related papers (2021-03-03T01:08:31Z)
- Unconstrained Matching of 2D and 3D Descriptors for 6-DOF Pose Estimation [44.66818851668686]
We generate a dataset of matching 2D and 3D points and their corresponding feature descriptors.
To localize the pose of an image at test time, we extract keypoints and feature descriptors from the query image.
The locations of the matched features are used in a robust pose estimation algorithm to predict the location and orientation of the query image.
arXiv Detail & Related papers (2020-05-29T11:17:32Z)
- Learning 2D-3D Correspondences To Solve The Blind Perspective-n-Point Problem [98.92148855291363]
This paper proposes a deep CNN model which simultaneously solves for both the 6-DoF absolute camera pose and the 2D-3D correspondences.
Tests on both real and simulated data have shown that our method substantially outperforms existing approaches.
arXiv Detail & Related papers (2020-03-15T04:17:30Z)
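For context on the pose-estimation step that several entries above build on: when 2D-3D correspondences are already known (the classical, non-blind setting), the 6-DoF camera pose can be recovered robustly with PnP inside a RANSAC loop. A minimal sketch with OpenCV follows; the correspondences and intrinsics are illustrative placeholders, not data from any of the papers:

```python
import cv2
import numpy as np

# Placeholder 2D-3D correspondences; in practice these come from
# descriptor matching between the query image and the point cloud.
points_3d = np.random.rand(100, 3).astype(np.float32)
points_2d = np.random.rand(100, 2).astype(np.float32)
K = np.array([[718.0,   0.0, 607.0],
              [  0.0, 718.0, 185.0],
              [  0.0,   0.0,   1.0]], dtype=np.float32)  # assumed intrinsics

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    points_3d, points_2d, K, distCoeffs=None,
    iterationsCount=1000, reprojectionError=3.0)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # axis-angle -> 3x3 rotation matrix
    # (R, tvec) maps 3D points into the camera frame: x_cam = R @ X + tvec
```

The blind variant tackled in the last entry above is harder precisely because the correspondences passed to such a solver are unknown and must be estimated jointly with the pose.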
This list is automatically generated from the titles and abstracts of the papers on this site.