DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF
Relocalization
- URL: http://arxiv.org/abs/2007.09217v1
- Date: Fri, 17 Jul 2020 20:21:22 GMT
- Authors: Juan Du, Rui Wang, Daniel Cremers
- Abstract summary: We propose a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points.
For detecting 3D keypoints we predict the discriminativeness of the local descriptors in an unsupervised manner.
Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For relocalization in large-scale point clouds, we propose the first approach
that unifies global place recognition and local 6DoF pose refinement. To this
end, we design a Siamese network that jointly learns 3D local feature detection
and description directly from raw 3D points. It integrates FlexConv and
Squeeze-and-Excitation (SE) to assure that the learned local descriptor
captures multi-level geometric information and channel-wise relations. For
detecting 3D keypoints we predict the discriminativeness of the local
descriptors in an unsupervised manner. We generate the global descriptor by
directly aggregating the learned local descriptors with an effective attention
mechanism. In this way, local and global 3D descriptors are inferred in a
single forward pass. Experiments on various benchmarks demonstrate that our
method achieves competitive results for both global point cloud retrieval and
local point cloud registration in comparison to state-of-the-art approaches. To
validate the generalizability and robustness of our 3D keypoints, we
demonstrate that our method also performs favorably without fine-tuning on the
registration of point clouds that were generated by a visual SLAM system. Code
and related materials are available at
https://vision.in.tum.de/research/vslam/dh3d.
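The abstract describes generating the global descriptor by attention-weighted aggregation of the learned local descriptors. The sketch below illustrates that general idea in NumPy: softmax attention weights over N local descriptors, summed into one L2-normalised global vector. It is a hypothetical, minimal illustration, not DH3D's actual attention module; the function name and inputs are invented for this example.

```python
import numpy as np

def attention_aggregate(local_desc, attn_logits):
    """Aggregate N local descriptors (N x D) into one global descriptor (D,).

    Sketch of attention-weighted pooling: softmax over per-point logits,
    weighted sum, then L2 normalisation (common for retrieval descriptors).
    """
    w = np.exp(attn_logits - attn_logits.max())   # numerically stable softmax
    w = w / w.sum()                               # attention weights sum to 1
    g = (w[:, None] * local_desc).sum(axis=0)     # weighted sum over points
    return g / (np.linalg.norm(g) + 1e-12)        # unit-norm global descriptor

# toy usage: 5 local descriptors of dimension 4
rng = np.random.default_rng(0)
desc = rng.standard_normal((5, 4))
logits = rng.standard_normal(5)
g = attention_aggregate(desc, logits)
```

Because the aggregation is a differentiable function of the local descriptors, both descriptor levels can indeed be produced in a single forward pass, as the abstract states.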
Related papers
- OpenGaussian: Towards Point-Level 3D Gaussian-based Open Vocabulary Understanding [54.981605111365056]
This paper introduces OpenGaussian, a method based on 3D Gaussian Splatting (3DGS) capable of 3D point-level open-vocabulary understanding.
Our primary motivation stems from observing that existing 3DGS-based open-vocabulary methods mainly focus on 2D pixel-level parsing.
arXiv Detail & Related papers (2024-06-04T07:42:33Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds [55.44204039410225]
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D.
Our method first generates high-quality 3D proposals by leveraging a class-aware local grouping strategy on object surface voxels.
To recover the features of voxels missed due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module.
arXiv Detail & Related papers (2022-10-09T13:38:48Z)
- EgoNN: Egocentric Neural Network for Point Cloud Based 6DoF Relocalization at the City Scale [15.662820454886202]
The paper presents a deep neural network-based method for extracting global and local descriptors from a point cloud acquired by a rotating 3D LiDAR.
Our method has a simple, fully convolutional architecture based on a sparse voxelized representation.
Our code and pretrained models are publicly available on the project website.
arXiv Detail & Related papers (2021-10-24T16:46:57Z)
- UPDesc: Unsupervised Point Descriptor Learning for Robust Registration [54.95201961399334]
UPDesc is an unsupervised method for learning point descriptors for robust point cloud registration.
We show that our learned descriptors yield superior performance over existing unsupervised methods.
arXiv Detail & Related papers (2021-08-05T17:11:08Z)
- MinkLoc3D: Point Cloud Based Large-Scale Place Recognition [1.116812194101501]
The paper presents a learning-based method for computing a discriminative 3D point cloud descriptor for place recognition.
We present MinkLoc3D, which computes a discriminative 3D point cloud descriptor based on a sparse voxelized point cloud representation and sparse 3D convolutions.
arXiv Detail & Related papers (2020-11-09T16:11:52Z)
- Distinctive 3D local deep descriptors [2.512827436728378]
Point cloud patches are extracted, canonicalised with respect to their estimated local reference frame, and encoded by a PointNet-based deep neural network.
We evaluate and compare DIPs against alternative hand-crafted and deep descriptors on several datasets consisting of point clouds reconstructed using different sensors.
arXiv Detail & Related papers (2020-09-01T06:25:06Z)
- D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features [51.04841465193678]
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
arXiv Detail & Related papers (2020-03-06T12:51:09Z)
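Several of the papers above (D3Feat, and DH3D's unsupervised discriminativeness prediction) share the pattern of dense detection: the network assigns one score per 3D point, and keypoints are then selected as the highest-scoring points. A minimal NumPy sketch of that selection step, with invented names and toy data, not any paper's actual pipeline:

```python
import numpy as np

def select_keypoints(points, scores, k):
    """Pick the k points with the highest predicted detection score.

    `points` is an (N, 3) array, `scores` a per-point (N,) score vector,
    as produced by a dense detection network. Hypothetical illustration.
    """
    idx = np.argsort(-scores)[:k]   # indices of the k largest scores
    return points[idx], idx

# toy usage: 4 points, keep the 2 most discriminative ones
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
sc = np.array([0.1, 0.9, 0.5, 0.7])
kp, idx = select_keypoints(pts, sc, 2)
# idx is [1, 3]: the two highest-scoring points
```

In practice, methods typically add non-maximum suppression or spatial subsampling so that keypoints are not clustered, but plain top-k conveys the core idea.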
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.