MinkLoc3D: Point Cloud Based Large-Scale Place Recognition
- URL: http://arxiv.org/abs/2011.04530v1
- Date: Mon, 9 Nov 2020 16:11:52 GMT
- Title: MinkLoc3D: Point Cloud Based Large-Scale Place Recognition
- Authors: Jacek Komorowski
- Abstract summary: The paper presents a learning-based method for computing a discriminative 3D point cloud descriptor for place recognition purposes.
We present MinkLoc3D, which computes a discriminative 3D point cloud descriptor from a sparse voxelized point cloud representation using sparse 3D convolutions.
- Score: 1.116812194101501
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The paper presents a learning-based method for computing a discriminative 3D
point cloud descriptor for place recognition purposes. Existing methods, such
as PointNetVLAD, are based on unordered point cloud representation. They use
PointNet as the first processing step to extract local features, which are
later aggregated into a global descriptor. The PointNet architecture is not
well suited to capturing local geometric structures. Thus, state-of-the-art
methods enhance the vanilla PointNet architecture with additional mechanisms to
capture local contextual information, such as graph convolutional networks or
hand-crafted features. We present an alternative approach, dubbed MinkLoc3D,
which computes a discriminative 3D point cloud descriptor from a sparse
voxelized point cloud representation using sparse 3D convolutions. The proposed
method has a simple and efficient architecture. Evaluation on standard
benchmarks shows that MinkLoc3D outperforms the current state of the art. Our code
is publicly available on the project website:
https://github.com/jac99/MinkLoc3D
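To make the pipeline described in the abstract concrete, the sketch below quantizes a point cloud into a sparse voxel grid, runs a small stack of sparse 3D convolutions over the occupied voxels only, and pools the per-voxel features into a fixed-size global descriptor. It assumes the MinkowskiEngine library; the layer widths, voxel size, and max-pooling aggregation are illustrative choices, not the exact MinkLoc3D architecture (see the linked repository for that).

```python
# Illustrative sketch only, not the authors' exact network: a tiny sparse-convolutional
# model that turns a voxelized point cloud into a global place-recognition descriptor.
import torch
import MinkowskiEngine as ME


class SparseGlobalDescriptor(torch.nn.Module):
    def __init__(self, out_dim=256):
        super().__init__()
        # Sparse 3D convolutions operate on occupied voxels only.
        self.backbone = torch.nn.Sequential(
            ME.MinkowskiConvolution(1, 32, kernel_size=3, stride=1, dimension=3),
            ME.MinkowskiBatchNorm(32),
            ME.MinkowskiReLU(),
            ME.MinkowskiConvolution(32, 64, kernel_size=3, stride=2, dimension=3),
            ME.MinkowskiBatchNorm(64),
            ME.MinkowskiReLU(),
            ME.MinkowskiConvolution(64, out_dim, kernel_size=3, stride=2, dimension=3),
        )
        # Aggregate per-voxel features into one descriptor per point cloud.
        self.pool = ME.MinkowskiGlobalMaxPooling()

    def forward(self, sparse_input):
        return self.pool(self.backbone(sparse_input)).F  # shape: (batch, out_dim)


def cloud_to_sparse_tensor(points, voxel_size=0.01):
    """Quantize an (N, 3) point cloud into a sparse voxel grid with occupancy features."""
    coords = ME.utils.sparse_quantize(points, quantization_size=voxel_size)
    coords = ME.utils.batched_coordinates([coords])                 # prepend batch index
    feats = torch.ones((coords.shape[0], 1), dtype=torch.float32)   # constant occupancy feature
    return ME.SparseTensor(features=feats, coordinates=coords)


model = SparseGlobalDescriptor().eval()
cloud = torch.rand(4096, 3)                        # dummy point cloud in metric units
descriptor = model(cloud_to_sparse_tensor(cloud))
print(descriptor.shape)                            # torch.Size([1, 256])
```

For place recognition, descriptors produced this way are compared by nearest-neighbour search in descriptor space, so that a query scan retrieves the database scan captured at the closest location.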
Related papers
- Mini-PointNetPlus: a local feature descriptor in deep learning model for 3d environment perception [7.304195370862869]
We propose a novel local feature descriptor, mini-PointNetPlus, as a plug-and-play alternative to PointNet.
Our basic idea is to separately project the data points onto the individual features considered, each projection yielding a permutation-invariant representation.
Because the proposed descriptor fully utilizes these features, our experiments demonstrate a considerable performance improvement for 3D perception.
arXiv Detail & Related papers (2023-07-25T07:30:28Z)
- Dynamic Clustering Transformer Network for Point Cloud Segmentation [23.149220817575195]
We propose a novel 3D point cloud representation network, called Dynamic Clustering Transformer Network (DCTNet).
It has an encoder-decoder architecture, allowing for both local and global feature learning.
Our method was evaluated on an object-based dataset (ShapeNet), an urban navigation dataset (Toronto-3D), and a multispectral LiDAR dataset.
arXiv Detail & Related papers (2023-05-30T01:11:05Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds [55.44204039410225]
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D.
Our proposed method first generates some high-quality 3D proposals by leveraging the class-aware local group strategy on the object surface voxels.
To recover the features of missed voxels due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module.
arXiv Detail & Related papers (2022-10-09T13:38:48Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNN search.
The proposed framework, namely PointAttN, is simple, neat and effective, which can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- EgoNN: Egocentric Neural Network for Point Cloud Based 6DoF Relocalization at the City Scale [15.662820454886202]
The paper presents a deep neural network-based method for global and local descriptors extraction from a point cloud acquired by a rotating 3D LiDAR.
Our method has a simple, fully convolutional architecture based on a sparse voxelized representation.
Our code and pretrained models are publicly available on the project website.
arXiv Detail & Related papers (2021-10-24T16:46:57Z)
- TreeGCN-ED: Encoding Point Cloud using a Tree-Structured Graph Network [24.299931323012757]
This work proposes an autoencoder-based framework to generate robust embeddings for point clouds.
We demonstrate the applicability of the proposed framework in applications such as 3D point cloud completion and single-image-based 3D reconstruction.
arXiv Detail & Related papers (2021-10-07T03:52:56Z)
- UPDesc: Unsupervised Point Descriptor Learning for Robust Registration [54.95201961399334]
UPDesc is an unsupervised method to learn point descriptors for robust point cloud registration.
We show that our learned descriptors yield superior performance over existing unsupervised methods.
arXiv Detail & Related papers (2021-08-05T17:11:08Z)
- Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline [111.3236030935478]
We find that auxiliary factors like different evaluation schemes, data augmentation strategies, and loss functions make a large difference in performance.
A projection-based method, which we refer to as SimpleView, performs surprisingly well.
It achieves on par or better results than sophisticated state-of-the-art methods on ModelNet40 while being half the size of PointNet++.
arXiv Detail & Related papers (2021-06-09T18:01:11Z)
- DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization [56.15308829924527]
We propose a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points.
For detecting 3D keypoints we predict the discriminativeness of the local descriptors in an unsupervised manner.
Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration.
arXiv Detail & Related papers (2020-07-17T20:21:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.