Global Context Aware Convolutions for 3D Point Cloud Understanding
- URL: http://arxiv.org/abs/2008.02986v1
- Date: Fri, 7 Aug 2020 04:33:27 GMT
- Title: Global Context Aware Convolutions for 3D Point Cloud Understanding
- Authors: Zhiyuan Zhang, Binh-Son Hua, Wei Chen, Yibin Tian, Sai-Kit Yeung
- Abstract summary: We propose a novel convolution operator that enhances feature distinction by integrating global context information from the input point cloud into the convolution. Anchor points derived from a globally weighted local reference frame capture global shape features, and a convolution then transforms the points and anchor features into final rotation-invariant features.
- Score: 32.953907994511376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep learning for 3D point clouds have shown great promise in scene understanding tasks thanks to the introduction of convolution operators that consume 3D point clouds directly in a neural network. Point cloud data, however, can have arbitrary rotations, especially data acquired from 3D scanning. Recent works show that it is possible to design point cloud convolutions with a rotation-invariance property, but such methods generally do not perform as well as translation-invariant-only convolutions. We found that a key reason is that, compared to point coordinates, the rotation-invariant features consumed by point cloud convolution are not as distinctive. To address this problem, we propose a novel convolution operator that enhances feature distinction by integrating global context information from the input point cloud into the convolution. To this end, a globally weighted local reference frame is constructed in each point neighborhood, in which the local point set is decomposed into bins. Anchor points are generated in each bin to represent global shape features. A convolution can then be performed to transform the points and anchor features into final rotation-invariant features. We conduct several experiments on point cloud classification, part segmentation, shape retrieval, and normal estimation to evaluate our convolution, which achieves state-of-the-art accuracy under challenging rotations.
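To make the pipeline described in the abstract more concrete, below is a minimal NumPy sketch of the general idea: a neighborhood is expressed in a local reference frame whose construction uses global information, the local point set is decomposed into bins, and each bin contributes an anchor-based rotation-invariant feature. The weighting scheme, binning rule, anchor definition, and all function names (knn, local_reference_frame, binned_anchor_features) are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def knn(points, center, k):
    """Indices of the k nearest neighbors of `center` within `points`."""
    d = np.linalg.norm(points - center, axis=1)
    return np.argsort(d)[:k]

def local_reference_frame(neighborhood, weights, global_centroid):
    """Weighted-PCA local reference frame (columns are the axes).

    The per-axis sign ambiguity of PCA is resolved with a globally defined
    direction (toward the cloud centroid), so the frame rotates consistently
    with the cloud and coordinates expressed in it become rotation invariant.
    """
    mean = np.average(neighborhood, axis=0, weights=weights)
    centered = neighborhood - mean
    cov = (weights[:, None] * centered).T @ centered
    _, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    axes = eigvecs[:, ::-1]               # principal axis first
    signs = np.sign(axes.T @ (global_centroid - mean))
    signs[signs == 0] = 1.0
    return mean, axes * signs

def binned_anchor_features(cloud, center, k=16, n_bins=4):
    """Rotation-invariant per-bin features for one point's neighborhood.

    Illustrative pipeline: weight neighbors by distance to the global centroid
    (a crude stand-in for "global context"), build a weighted local frame,
    split the neighbors into azimuthal bins in that frame, and describe each
    bin by distances from its mean ("anchor") point.
    """
    centroid = cloud.mean(axis=0)
    nbr = cloud[knn(cloud, center, k)]

    # Hypothetical global weighting: neighbors far from the centroid count more.
    w = np.linalg.norm(nbr - centroid, axis=1)
    w = w / (w.sum() + 1e-9)

    origin, frame = local_reference_frame(nbr, w, centroid)
    local = (nbr - origin) @ frame                     # coordinates in the frame

    # Decompose the local point set into azimuthal bins around the main axis.
    azimuth = np.arctan2(local[:, 1], local[:, 0])
    bin_idx = np.clip(((azimuth + np.pi) / (2 * np.pi) * n_bins).astype(int),
                      0, n_bins - 1)

    feats = np.zeros((n_bins, 3))
    for b in range(n_bins):
        pts = nbr[bin_idx == b]
        if len(pts) == 0:
            continue
        anchor = pts.mean(axis=0)                      # anchor point of this bin
        feats[b] = [np.linalg.norm(anchor - center),   # anchor-to-center distance
                    np.linalg.norm(anchor - centroid), # anchor-to-centroid distance
                    len(pts) / k]                      # bin occupancy
    return feats.ravel()                               # distances only -> invariant

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(256, 3))
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))       # random orthonormal frame
    if np.linalg.det(R) < 0:
        R[:, 0] *= -1                                  # make it a proper rotation
    f_orig = binned_anchor_features(cloud, cloud[0])
    f_rot = binned_anchor_features(cloud @ R.T, cloud[0] @ R.T)
    print(np.allclose(f_orig, f_rot, atol=1e-6))       # expect True
```

Running the script prints True, i.e. the binned features are unchanged when the same cloud and query point are rotated; in the paper itself, the per-bin points and anchor features are fed to a learned convolution rather than summarized by hand-picked distances.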
Related papers
- ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding [105.98609765389895]
Transformers have recently been explored for 3D point cloud understanding.
A large number of points, over 0.1 million, makes global self-attention infeasible for point cloud data.
In this paper, we develop a new transformer block, named ConDaFormer.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
- Rethinking Rotation Invariance with Point Cloud Registration [18.829454172955202]
We propose an effective framework for rotation invariance learning via three sequential stages, namely rotation-invariant shape encoding, aligned feature integration, and deep feature registration.
Experimental results on 3D shape classification, part segmentation, and retrieval tasks demonstrate the feasibility of our approach.
arXiv Detail & Related papers (2022-12-31T08:17:09Z)
- RIConv++: Effective Rotation Invariant Convolutions for 3D Point Clouds Deep Learning [32.18566879365623]
Deep learning on 3D point clouds is a promising field of research that allows a neural network to learn features of point clouds directly.
We propose a simple yet effective convolution operator that enhances feature distinction by designing powerful rotation invariant features from the local regions.
Our network architecture can capture both local and global context by simply tuning the neighborhood size in each convolution layer.
arXiv Detail & Related papers (2022-02-26T08:32:44Z)
- SCTN: Sparse Convolution-Transformer Network for Scene Flow Estimation [71.2856098776959]
Estimating 3D motions for point clouds is challenging, since a point cloud is unordered and its density is significantly non-uniform.
We propose a novel architecture named Sparse Convolution-Transformer Network (SCTN) that equips the sparse convolution with the transformer.
We show that the learned relation-based contextual information is rich and helpful for matching corresponding points, benefiting scene flow estimation.
arXiv Detail & Related papers (2021-05-10T15:16:14Z)
- PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features [91.2054994193218]
We propose a point-set learning framework, PRIN, focusing on rotation-invariant feature extraction in point cloud analysis.
In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds.
Results show that, on the dataset with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation.
arXiv Detail & Related papers (2021-02-24T06:44:09Z)
- Robust Kernel-based Feature Representation for 3D Point Cloud Analysis via Circular Graph Convolutional Network [2.42919716430661]
We present a new local feature description method that is robust to rotation, density, and scale variations.
To improve representations of the local descriptors, we propose a global aggregation method.
Our method shows superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-12-22T18:02:57Z)
- Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis [107.9979381402172]
We propose a rotation-invariant deep network for point cloud analysis.
The network is hierarchical and relies on two modules: a positional feature embedding block and a relational feature embedding block.
Experiments show state-of-the-art classification and segmentation performances on benchmark datasets.
arXiv Detail & Related papers (2020-11-18T04:16:51Z)
- DV-ConvNet: Fully Convolutional Deep Learning on Point Clouds with Dynamic Voxelization and 3D Group Convolution [0.7340017786387767]
3D point cloud interpretation is a challenging task due to the randomness and sparsity of the component points.
In this work, we draw attention back to standard 3D convolutions for efficient 3D point cloud interpretation.
Our network runs and converges considerably fast, while yielding on-par or even better performance compared with state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2020-09-07T07:45:05Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
- Quaternion Equivariant Capsule Networks for 3D Point Clouds [58.566467950463306]
We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations.
We connect dynamic routing between capsules to the well-known Weiszfeld algorithm.
Based on our operator, we build a capsule network that disentangles geometry from pose.
arXiv Detail & Related papers (2019-12-27T13:51:17Z)
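The last entry above connects capsule routing to the Weiszfeld algorithm. As generic background (not that paper's capsule implementation), here is a minimal NumPy sketch of the classic procedure: the geometric median is found by iteratively re-averaging the points with weights inversely proportional to their distance from the current estimate, and the result is equivariant to rotations and translations of the input.

```python
import numpy as np

def weiszfeld(points, iters=100, eps=1e-9, tol=1e-8):
    """Geometric median of a point set via Weiszfeld's fixed-point iteration.

    Repeatedly re-averages the points with weights 1/distance to the current
    estimate; the fixed point minimizes the sum of Euclidean distances.
    """
    y = points.mean(axis=0)                       # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(d, eps)              # guard against zero distance
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.normal(size=(64, 3))
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthonormal frame
    m_orig = weiszfeld(pts)
    m_rot = weiszfeld(pts @ R.T)
    print(np.allclose(m_orig @ R.T, m_rot, atol=1e-5))  # equivariance: expect True
```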