PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features
- URL: http://arxiv.org/abs/2102.12093v1
- Date: Wed, 24 Feb 2021 06:44:09 GMT
- Title: PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features
- Authors: Yang You, Yujing Lou, Ruoxi Shi, Qi Liu, Yu-Wing Tai, Lizhuang Ma,
Weiming Wang, Cewu Lu
- Abstract summary: We propose a point-set learning framework PRIN, focusing on rotation invariant feature extraction in point cloud analysis.
In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds.
Results show that, on the dataset with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation.
- Score: 91.2054994193218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud analysis without pose priors is very challenging in real
applications, as the orientations of point clouds are often unknown. In this
paper, we propose a brand new point-set learning framework PRIN, namely,
Point-wise Rotation Invariant Network, focusing on rotation invariant feature
extraction in point cloud analysis. We construct spherical signals by Density
Aware Adaptive Sampling to deal with distorted point distributions in spherical
space. Spherical Voxel Convolution and Point Re-sampling are proposed to
extract rotation invariant features for each point. In addition, we extend PRIN
to a sparse version called SPRIN, which directly operates on sparse point
clouds. Both PRIN and SPRIN can be applied to tasks ranging from object
classification, part segmentation, to 3D feature matching and label alignment.
Results show that, on the dataset with randomly rotated point clouds, SPRIN
demonstrates better performance than state-of-the-art methods without any data
augmentation. We also provide thorough theoretical proof and analysis for
point-wise rotation invariance achieved by our methods. Our code is available
on https://github.com/qq456cvb/SPRIN.
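The abstract's central claim is that each point receives a feature that is unchanged when the whole cloud is rotated. The following is a minimal sketch of that property, not PRIN's actual spherical-voxel method: the feature construction here (distance to centroid plus k nearest-neighbor distances) is our own illustrative choice, chosen only because pairwise distances are preserved by any rigid rotation.

```python
import numpy as np

def random_rotation(rng):
    # Random 3D rotation via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))      # make the decomposition unique
    if np.linalg.det(q) < 0:      # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q

def pointwise_invariant_features(points, k=4):
    # Per-point features built only from distances, which any
    # rotation (applied to the whole cloud) leaves unchanged.
    centroid = points.mean(axis=0)
    d_centroid = np.linalg.norm(points - centroid, axis=1)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(dists, axis=1)[:, 1:k + 1]  # k nearest-neighbor distances
    return np.column_stack([d_centroid, knn])

rng = np.random.default_rng(0)
pts = rng.standard_normal((128, 3))
R = random_rotation(rng)
f0 = pointwise_invariant_features(pts)
f1 = pointwise_invariant_features(pts @ R.T)
print(np.allclose(f0, f1))  # features survive an arbitrary rotation
```

Such hand-crafted distance features discard directional information; the point of PRIN/SPRIN is to obtain invariance while letting the network learn richer features than these.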
Related papers
- CRIN: Rotation-Invariant Point Cloud Analysis and Rotation Estimation
via Centrifugal Reference Frame [60.24797081117877]
We propose the CRIN, namely Centrifugal Rotation-Invariant Network.
CRIN directly takes the coordinates of points as input and transforms local points into rotation-invariant representations.
A continuous distribution for 3D rotations based on points is introduced.
arXiv Detail & Related papers (2023-03-06T13:14:10Z)
- General Rotation Invariance Learning for Point Clouds via Weight-Feature
Alignment [40.421478916432676]
We propose Weight-Feature Alignment (WFA) to construct a local Invariant Reference Frame (IRF)
Our WFA algorithm provides a general solution for the point clouds of all scenes.
arXiv Detail & Related papers (2023-02-20T11:08:07Z)
- RIConv++: Effective Rotation Invariant Convolutions for 3D Point Clouds
Deep Learning [32.18566879365623]
Deep learning on 3D point clouds is a promising field of research that allows a neural network to learn features of point clouds directly.
We propose a simple yet effective convolution operator that enhances feature distinction by designing powerful rotation invariant features from the local regions.
Our network architecture can capture both local and global context by simply tuning the neighborhood size in each convolution layer.
arXiv Detail & Related papers (2022-02-26T08:32:44Z)
- PU-Flow: a Point Cloud Upsampling Network with Normalizing Flows [58.96306192736593]
We present PU-Flow, which incorporates normalizing flows and feature techniques to produce dense points uniformly distributed on the underlying surface.
Specifically, we formulate the upsampling process as point interpolation in a latent space, where the interpolation weights are adaptively learned from local geometric context.
We show that our method outperforms state-of-the-art deep learning-based approaches in terms of reconstruction quality, proximity-to-surface accuracy, and computation efficiency.
arXiv Detail & Related papers (2021-07-13T07:45:48Z)
- Robust Kernel-based Feature Representation for 3D Point Cloud Analysis
via Circular Graph Convolutional Network [2.42919716430661]
We present a new local feature description method that is robust to rotation, density, and scale variations.
To improve representations of the local descriptors, we propose a global aggregation method.
Our method shows superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-12-22T18:02:57Z)
- Deep Positional and Relational Feature Learning for Rotation-Invariant
Point Cloud Analysis [107.9979381402172]
We propose a rotation-invariant deep network for point cloud analysis.
The network is hierarchical and relies on two modules: a positional feature embedding block and a relational feature embedding block.
Experiments show state-of-the-art classification and segmentation performances on benchmark datasets.
arXiv Detail & Related papers (2020-11-18T04:16:51Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
- Quaternion Equivariant Capsule Networks for 3D Point Clouds [58.566467950463306]
We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations.
We connect dynamic routing between capsules to the well-known Weiszfeld algorithm.
Based on our operator, we build a capsule network that disentangles geometry from pose.
arXiv Detail & Related papers (2019-12-27T13:51:17Z)
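The Weiszfeld algorithm mentioned in the last entry is a classical iteratively re-weighted mean that converges to the geometric median of a point set. A short self-contained sketch of the classical iteration (independent of the capsule-routing context in which the paper uses it):

```python
import numpy as np

def weiszfeld(points, iters=200, eps=1e-9):
    # Geometric median via Weiszfeld's iteratively re-weighted mean.
    y = points.mean(axis=0)                      # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(d, eps)             # guard against zero distance
        y = (w[:, None] * points).sum(axis=0) / w.sum()
    return y

# Collinear case: the geometric median reduces to the 1D median,
# i.e. the middle point (1, 0, 0), not the mean (11/3, 0, 0).
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
med = weiszfeld(pts)
```

Each iteration weights points by the inverse of their distance to the current estimate, so far-away outliers pull the estimate far less than they pull the mean.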
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.