RIGA: Rotation-Invariant and Globally-Aware Descriptors for Point Cloud
Registration
- URL: http://arxiv.org/abs/2209.13252v1
- Date: Tue, 27 Sep 2022 08:45:56 GMT
- Title: RIGA: Rotation-Invariant and Globally-Aware Descriptors for Point Cloud
Registration
- Authors: Hao Yu, Ji Hou, Zheng Qin, Mahdi Saleh, Ivan Shugurov, Kai Wang,
Benjamin Busam, Slobodan Ilic
- Abstract summary: We introduce RIGA to learn descriptors that are Rotation-Invariant by design and Globally-Aware.
RIGA surpasses the state-of-the-art methods by a margin of 8° in terms of the Relative Rotation Error on ModelNet40 and improves the Feature Matching Recall by at least 5 percentage points on 3DLoMatch.
- Score: 44.23935553097983
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Successful point cloud registration relies on accurate correspondences
established upon powerful descriptors. However, existing neural descriptors
either leverage a rotation-variant backbone whose performance declines under
large rotations, or encode local geometry that is less distinctive. To address
this issue, we introduce RIGA to learn descriptors that are Rotation-Invariant
by design and Globally-Aware. From the Point Pair Features (PPFs) of sparse
local regions, rotation-invariant local geometry is encoded into geometric
descriptors. Global awareness of 3D structures and geometric context is
subsequently incorporated, both in a rotation-invariant fashion. More
specifically, 3D structures of the whole frame are first represented by our
global PPF signatures, from which structural descriptors are learned to help
geometric descriptors sense the 3D world beyond local regions. Geometric
context from the whole scene is then globally aggregated into descriptors.
Finally, the description of sparse regions is interpolated to dense point
descriptors, from which correspondences are extracted for registration. To
validate our approach, we conduct extensive experiments on both object- and
scene-level data. With large rotations, RIGA surpasses the state-of-the-art
methods by a margin of 8° in terms of the Relative Rotation Error on
ModelNet40 and improves the Feature Matching Recall by at least 5 percentage
points on 3DLoMatch.
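The Point Pair Features (PPFs) from which RIGA builds its local geometric descriptors and global PPF signatures are a classic rotation-invariant primitive: for two oriented points, the pairwise distance and the three angles among the two normals and their difference vector are unchanged under any rigid motion. Below is a minimal NumPy sketch of this primitive with an invariance check; the function and variable names are illustrative, not taken from the paper's code.

```python
# Minimal sketch of the 4D Point Pair Feature (PPF); illustrative names,
# not the paper's implementation.
import numpy as np

def _angle(u, v, eps=1e-8):
    """Unsigned angle in [0, pi] between two 3D vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    """PPF = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)), d = p2 - p1.

    Distances and unsigned angles are preserved by rigid motions, so the
    feature is rotation-invariant by construction.
    """
    d = p2 - p1
    return np.array([np.linalg.norm(d),
                     _angle(n1, d), _angle(n2, d), _angle(n1, n2)])

# Sanity check: the same oriented pair under a random rigid motion
# produces an identical feature.
rng = np.random.default_rng(0)
p1, p2 = rng.normal(size=3), rng.normal(size=3)
n1 = rng.normal(size=3); n1 /= np.linalg.norm(n1)
n2 = rng.normal(size=3); n2 /= np.linalg.norm(n2)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
if np.linalg.det(R) < 0:
    R[:, 0] = -R[:, 0]                        # force a proper rotation
t = rng.normal(size=3)
assert np.allclose(point_pair_feature(p1, n1, p2, n2),
                   point_pair_feature(R @ p1 + t, R @ n1, R @ p2 + t, R @ n2))
```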
Related papers
- Rethinking Rotation Invariance with Point Cloud Registration [18.829454172955202]
We propose an effective framework for rotation invariance learning via three sequential stages, namely rotation-invariant shape encoding, aligned feature integration, and deep feature registration.
Experimental results on 3D shape classification, part segmentation, and retrieval tasks demonstrate the feasibility of our approach.
arXiv Detail & Related papers (2022-12-31T08:17:09Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Robust Kernel-based Feature Representation for 3D Point Cloud Analysis via Circular Graph Convolutional Network [2.42919716430661]
We present a new local feature description method that is robust to rotation, density, and scale variations.
To improve representations of the local descriptors, we propose a global aggregation method.
Our method shows superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-12-22T18:02:57Z)
- Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud [50.56461318879761]
We propose the Geometry-Disentangled Attention Network (GDANet) for 3D point cloud processing.
GDANet disentangles point clouds into the contour and flat parts of 3D objects, denoted by sharp and gentle variation components, respectively.
Experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art results with fewer parameters.
arXiv Detail & Related papers (2020-12-20T13:35:00Z)
- SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration [57.28608414782315]
We introduce a new, yet conceptually simple, neural architecture, termed SpinNet, to extract local features.
Experiments on both indoor and outdoor datasets demonstrate that SpinNet outperforms existing state-of-the-art techniques.
arXiv Detail & Related papers (2020-11-24T15:00:56Z)
- Rotation-Invariant Local-to-Global Representation Learning for 3D Point Cloud [42.86112554931754]
We propose a local-to-global representation learning algorithm for 3D point cloud data.
Our model takes advantage of multi-level abstraction based on graph convolutional neural networks.
The proposed algorithm achieves state-of-the-art performance on rotation-augmented 3D object recognition and segmentation benchmarks.
arXiv Detail & Related papers (2020-10-07T10:30:20Z)
- DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization [56.15308829924527]
We propose a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points.
To detect 3D keypoints, we predict the discriminativeness of the local descriptors in an unsupervised manner.
Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration.
arXiv Detail & Related papers (2020-07-17T20:21:22Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
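Every registration pipeline in this list ultimately converts matched descriptors into a rigid transform, and the Relative Rotation Error reported above is the standard way to score the result. As a reference point, here is a sketch of the textbook SVD-based (Kabsch) pose estimation from putative correspondences together with the RRE in degrees; this is generic machinery, not RIGA's actual estimator.

```python
# Textbook SVD-based (Kabsch) rigid alignment from correspondences, plus
# the Relative Rotation Error (RRE). Generic reference code, not RIGA's.
import numpy as np

def kabsch(src, tgt):
    """Least-squares (R, t) such that R @ src[i] + t ~= tgt[i].

    src, tgt: (N, 3) arrays of corresponding 3D points.
    """
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # enforce det(R) = +1
    t = tgt_c - R @ src_c
    return R, t

def relative_rotation_error(R_est, R_gt):
    """Geodesic distance between two rotations, in degrees."""
    cos = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

In practice the alignment step is typically wrapped in a robust loop such as RANSAC, since descriptor matching on low-overlap benchmarks like 3DLoMatch produces many outlier correspondences.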