Unconstrained Matching of 2D and 3D Descriptors for 6-DOF Pose
Estimation
- URL: http://arxiv.org/abs/2005.14502v1
- Date: Fri, 29 May 2020 11:17:32 GMT
- Title: Unconstrained Matching of 2D and 3D Descriptors for 6-DOF Pose
Estimation
- Authors: Uzair Nadeem, Mohammed Bennamoun, Roberto Togneri, Ferdous Sohel
- Abstract summary: We generate a dataset of matching 2D and 3D points and their corresponding feature descriptors.
To localize the pose of an image at test time, we extract keypoints and feature descriptors from the query image.
The locations of the matched features are used in a robust pose estimation algorithm to predict the location and orientation of the query image.
- Score: 44.66818851668686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel concept to directly match feature descriptors
extracted from 2D images with feature descriptors extracted from 3D point
clouds. We use this concept to directly localize images in a 3D point cloud. We
generate a dataset of matching 2D and 3D points and their corresponding feature
descriptors, which is used to learn a Descriptor-Matcher classifier. To
localize the pose of an image at test time, we extract keypoints and feature
descriptors from the query image. The trained Descriptor-Matcher is then used
to match the features from the image and the point cloud. The locations of the
matched features are used in a robust pose estimation algorithm to predict the
location and orientation of the query image. We carried out an extensive
evaluation of the proposed method for indoor and outdoor scenarios and with
different types of point clouds to verify the feasibility of our approach.
Experimental results demonstrate that direct matching of feature descriptors
from images and point clouds is not only a viable idea but can also be reliably
used to estimate the 6-DOF poses of query cameras in any type of 3D point cloud
in an unconstrained manner with high precision.
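The test-time pipeline described above — match 2D features against 3D points, then feed the matched locations to a robust pose estimator — can be sketched as follows. This is a minimal numpy illustration on synthetic data, not the paper's implementation: the paper learns a Descriptor-Matcher classifier, whereas here mismatched correspondences are simply simulated, and the robust solver is a basic DLT-plus-RANSAC stand-in for the pose estimation stage. The camera intrinsics, pose, point counts, and inlier threshold are all made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic ground truth (hypothetical values, for illustration only) ---
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 4.0])

# 3D points in front of the camera and their exact 2D projections
pts3d = rng.uniform(-1.0, 1.0, size=(60, 3))
cam = (R_true @ pts3d.T).T + t_true
proj = (K @ cam.T).T
pts2d = proj[:, :2] / proj[:, 2:3]

# Simulate mismatches from the descriptor-matching stage: corrupt 15 of them
pts2d_noisy = pts2d.copy()
pts2d_noisy[:15] += rng.uniform(50.0, 100.0, size=(15, 2))

def dlt_pose(p3d, p2d, K):
    """Estimate (R, t) from >= 6 2D-3D correspondences via the DLT."""
    n = len(p3d)
    A = np.zeros((2 * n, 12))
    for i, ((X, Y, Z), (u, v)) in enumerate(zip(p3d, p2d)):
        A[2 * i]     = [X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u]
        A[2 * i + 1] = [0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v]
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)
    # Resolve the sign so depths are positive, then factor out K and the scale
    if (P[2] @ np.append(p3d[0], 1.0)) < 0:
        P = -P
    M = np.linalg.inv(K) @ P
    s = np.linalg.norm(M[2, :3])
    return M[:, :3] / s, M[:, 3] / s

def reproj_error(R, t, p3d, p2d, K):
    cam = (R @ p3d.T).T + t
    proj = (K @ cam.T).T
    return np.linalg.norm(proj[:, :2] / proj[:, 2:3] - p2d, axis=1)

# Simple RANSAC loop over minimal 6-point samples
best_inliers = None
for _ in range(200):
    idx = rng.choice(len(pts3d), size=6, replace=False)
    R, t = dlt_pose(pts3d[idx], pts2d_noisy[idx], K)
    inliers = reproj_error(R, t, pts3d, pts2d_noisy, K) < 2.0
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers = inliers

# Refit on the consensus set for the final pose
R_est, t_est = dlt_pose(pts3d[best_inliers], pts2d_noisy[best_inliers], K)
print(int(best_inliers.sum()), "inliers")
```

With noiseless inliers the consensus refit recovers the ground-truth pose to numerical precision; real descriptor matches would also carry pixel noise, which this sketch omits.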
Related papers
- Robust 3D Point Clouds Classification based on Declarative Defenders [18.51700931775295]
3D point clouds are unstructured and sparse, while 2D images are structured and dense.
In this paper, we explore three distinct algorithms for mapping 3D point clouds into 2D images.
The proposed approaches demonstrate superior accuracy and robustness against adversarial attacks.
arXiv Detail & Related papers (2024-10-13T01:32:38Z)
- Neural Correspondence Field for Object Pose Estimation [67.96767010122633]
We propose a method for estimating the 6DoF pose of a rigid object with an available 3D model from a single RGB image.
Unlike classical correspondence-based methods which predict 3D object coordinates at pixels of the input image, the proposed method predicts 3D object coordinates at 3D query points sampled in the camera frustum.
arXiv Detail & Related papers (2022-07-30T01:48:23Z)
- DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and Lidar.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
arXiv Detail & Related papers (2021-04-08T04:27:32Z)
- Robust Place Recognition using an Imaging Lidar [45.37172889338924]
We propose a methodology for robust, real-time place recognition using an imaging lidar.
Our method is truly invariant and can handle both reverse revisiting and upside-down revisiting.
arXiv Detail & Related papers (2021-03-03T01:08:31Z)
- P2-Net: Joint Description and Detection of Local Features for Pixel and Point Matching [78.18641868402901]
This work takes the initiative to establish fine-grained correspondences between 2D images and 3D point clouds.
An ultra-wide reception mechanism, combined with a novel loss function, is designed to mitigate the intrinsic information variations between pixel and point local regions.
arXiv Detail & Related papers (2021-03-01T14:59:40Z)
- DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization [56.15308829924527]
We propose a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points.
For detecting 3D keypoints we predict the discriminativeness of the local descriptors in an unsupervised manner.
Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration.
arXiv Detail & Related papers (2020-07-17T20:21:22Z)
- 3D Object Detection Method Based on YOLO and K-Means for Image and Point Clouds [1.9458156037869139]
Lidar-based 3D object detection and classification tasks are essential for autonomous driving.
This paper proposes a 3D object detection method based on point clouds and images.
arXiv Detail & Related papers (2020-04-21T04:32:36Z)
- Learning 2D-3D Correspondences To Solve The Blind Perspective-n-Point Problem [98.92148855291363]
This paper proposes a deep CNN model which simultaneously solves for both the 6-DoF absolute camera pose and the 2D--3D correspondences.
Tests on both real and simulated data have shown that our method substantially outperforms existing approaches.
arXiv Detail & Related papers (2020-03-15T04:17:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.