Robust Kernel-based Feature Representation for 3D Point Cloud Analysis
via Circular Graph Convolutional Network
- URL: http://arxiv.org/abs/2012.12215v4
- Date: Thu, 14 Jan 2021 02:37:19 GMT
- Title: Robust Kernel-based Feature Representation for 3D Point Cloud Analysis
via Circular Graph Convolutional Network
- Authors: Seung Hwan Jung, Minyoung Chung, and Yeong-Gil Shin
- Abstract summary: We present a new local feature description method that is robust to rotation, density, and scale variations.
To improve representations of the local descriptors, we propose a global aggregation method.
Our method shows superior performance compared to the state-of-the-art methods.
- Score: 2.42919716430661
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature descriptors of point clouds are used in several applications, such as
registration and part segmentation of 3D point clouds. Learning discriminative
representations of local geometric features is unquestionably the most
important task for accurate point cloud analyses. However, it is challenging to
develop rotation or scale-invariant descriptors. Most previous studies have
either ignored rotations or empirically studied optimal scale parameters, which
hinders the applicability of the methods for real-world datasets. In this
paper, we present a new local feature description method that is robust to
rotation, density, and scale variations. Moreover, to improve representations
of the local descriptors, we propose a global aggregation method. First, we
place kernel points around each point, aligned with the normal direction. To avoid the
sign problem of the normal vector, we use a symmetric kernel point distribution
in the tangential plane. From each kernel point, we first project the points
from the spatial space to a feature space that is robust to multiple scales
and rotations, based on angles and distances. Subsequently, we perform graph
convolutions by considering local kernel point structures and long-range global
context, obtained by a global aggregation method. We experimented with our
proposed descriptors on benchmark datasets (i.e., ModelNet40 and ShapeNetPart)
to evaluate the performance of registration, classification, and part
segmentation on 3D point clouds. Our method showed superior performance
compared to the state-of-the-art methods, reducing the rotation and
translation errors by 70$\%$ in the registration task. Our method also showed comparable
performance in the classification and part-segmentation tasks with simple and
low-dimensional architectures.
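The abstract above describes the construction only in prose. As a rough illustration, the following minimal Python/NumPy sketch shows two of the ingredients it mentions: a symmetric ring of kernel points placed in the tangential plane of a point's normal (so the sign ambiguity of the normal does not matter), and local features built purely from distances and angles (so a rigid rotation of the whole neighborhood leaves them unchanged, while an in-plane rotation only shifts the per-kernel distance signal circularly, which is what a circular graph convolution can absorb). The function names, the kernel count, and the exact feature set are illustrative assumptions, not the authors' implementation.

import numpy as np


def tangential_basis(normal):
    """Return an orthonormal basis (t1, t2, n) with n parallel to `normal`."""
    n = normal / np.linalg.norm(normal)
    # Any vector not parallel to n works as a helper for the cross products.
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, helper)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    return t1, t2, n


def symmetric_kernel_points(center, normal, radius=0.1, k=8):
    """Place k kernel points on a circle in the tangential plane around `center`.

    With an even k, flipping the sign of the normal maps the kernel set onto
    itself, which sidesteps the sign ambiguity of estimated normals.
    """
    t1, t2, _ = tangential_basis(normal)
    angles = 2.0 * np.pi * np.arange(k) / k
    return center + radius * (np.outer(np.cos(angles), t1) + np.outer(np.sin(angles), t2))


def angle_distance_features(center, normal, neighbors, kernels):
    """Describe a neighborhood by distance/angle cues instead of raw coordinates.

    Returns
    -------
    invariant : (N, 2) array of [distance to center, |cos| of angle to normal];
        unchanged by any rigid rotation applied jointly to points and normal.
    ring : (N, K) array of distances from each neighbor to each kernel point;
        an in-plane rotation of the neighborhood only shifts this signal
        circularly over the kernel index.
    """
    n = normal / np.linalg.norm(normal)
    diff = neighbors - center                               # (N, 3)
    d_center = np.linalg.norm(diff, axis=1)                 # (N,)
    cos_n = np.abs(diff @ n) / np.maximum(d_center, 1e-12)  # abs() ignores the normal's sign
    invariant = np.stack([d_center, cos_n], axis=1)
    ring = np.linalg.norm(neighbors[:, None, :] - kernels[None, :, :], axis=2)
    return invariant, ring


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    center = np.zeros(3)
    normal = np.array([0.0, 0.0, 1.0])
    neighbors = rng.normal(scale=0.05, size=(16, 3))
    kernels = symmetric_kernel_points(center, normal, k=8)

    inv0, ring0 = angle_distance_features(center, normal, neighbors, kernels)

    # Rotate the neighborhood about the normal by one kernel step (2*pi/8):
    # the distance/angle cues are identical and the ring signal is a circular shift.
    theta = 2.0 * np.pi / 8
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    inv1, ring1 = angle_distance_features(center, normal, neighbors @ R.T, kernels)
    print(np.allclose(inv0, inv1))                          # True
    print(np.allclose(ring1, np.roll(ring0, 1, axis=1)))    # True

Running this sketch prints True twice: once for the invariance of the distance/angle cues under the rotation, and once for the circular shift of the kernel-distance signal when the neighborhood is rotated about the normal by one kernel step.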
Related papers
- Quadric Representations for LiDAR Odometry, Mapping and Localization [93.24140840537912]
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory utility while achieving competitive, and even superior, accuracy.
arXiv Detail & Related papers (2023-04-27T13:52:01Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- RIGA: Rotation-Invariant and Globally-Aware Descriptors for Point Cloud Registration [44.23935553097983]
We introduce RIGA to learn descriptors that are Rotation-Invariant by design and Globally-Aware.
RIGA surpasses the state-of-the-art methods by a margin of 8 degrees in terms of the Relative Rotation Error on ModelNet40 and improves the Feature Matching Recall by at least 5 percentage points on 3DLoMatch.
arXiv Detail & Related papers (2022-09-27T08:45:56Z)
- Stratified Transformer for 3D Point Cloud Segmentation [89.9698499437732]
Stratified Transformer is able to capture long-range contexts and demonstrates strong generalization ability and high performance.
To combat the challenges posed by irregular point arrangements, we propose first-layer point embedding to aggregate local information.
Experiments demonstrate the effectiveness and superiority of our method on S3DIS, ScanNetv2 and ShapeNetPart datasets.
arXiv Detail & Related papers (2022-03-28T05:35:16Z)
- RIConv++: Effective Rotation Invariant Convolutions for 3D Point Clouds Deep Learning [32.18566879365623]
Deep learning on 3D point clouds is a promising field of research that allows a neural network to learn features of point clouds directly.
We propose a simple yet effective convolution operator that enhances feature distinction by designing powerful rotation invariant features from the local regions.
Our network architecture can capture both local and global context by simply tuning the neighborhood size in each convolution layer.
arXiv Detail & Related papers (2022-02-26T08:32:44Z)
- UPDesc: Unsupervised Point Descriptor Learning for Robust Registration [54.95201961399334]
UPDesc is an unsupervised method to learn point descriptors for robust point cloud registration.
We show that our learned descriptors yield superior performance over existing unsupervised methods.
arXiv Detail & Related papers (2021-08-05T17:11:08Z)
- PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features [91.2054994193218]
We propose a point-set learning framework, PRIN, focusing on rotation-invariant feature extraction in point cloud analysis.
In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds.
Results show that, on the dataset with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation.
arXiv Detail & Related papers (2021-02-24T06:44:09Z)
- ODFNet: Using orientation distribution functions to characterize 3D point clouds [0.0]
We leverage point orientation distributions around a point to obtain an expressive local neighborhood representation for point clouds.
The new ODFNet model achieves state-of-the-art accuracy for object classification on the ModelNet40 and ScanObjectNN datasets.
arXiv Detail & Related papers (2020-12-08T19:54:20Z)
- Global Context Aware Convolutions for 3D Point Cloud Understanding [32.953907994511376]
We propose a novel convolution operator that enhances feature distinction by integrating global context information from the input point cloud into the convolution.
A convolution can then be performed to transform the points and anchor features into final rotation-invariant features.
arXiv Detail & Related papers (2020-08-07T04:33:27Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.