Equivariant Point Cloud Analysis via Learning Orientations for Message
Passing
- URL: http://arxiv.org/abs/2203.14486v1
- Date: Mon, 28 Mar 2022 04:10:13 GMT
- Title: Equivariant Point Cloud Analysis via Learning Orientations for Message
Passing
- Authors: Shitong Luo, Jiahan Li, Jiaqi Guan, Yufeng Su, Chaoran Cheng, Jian
Peng, Jianzhu Ma
- Abstract summary: We propose a novel framework to achieve equivariance for point cloud analysis based on the message passing (graph neural network) scheme.
We find that the equivariant property can be obtained by introducing an orientation for each point, which decouples each point's relative positions from the global pose of the entire point cloud.
Before aggregating information from the neighbors of a point, the network transforms the neighbors' coordinates based on the point's learned orientation.
- Score: 17.049105822164865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Equivariance has been a long-standing concern in various fields ranging from
computer vision to physical modeling. Most previous methods struggle with
generality, simplicity, and expressiveness -- some are designed ad hoc for
specific data types, some are too complex to be accessible, and some sacrifice
flexible transformations. In this work, we propose a novel and simple framework
to achieve equivariance for point cloud analysis based on the message passing
(graph neural network) scheme. We find that the equivariant property can be
obtained by introducing an orientation for each point, which decouples each
point's relative positions from the global pose of the entire point cloud.
Therefore, we extend current message passing networks with a module that learns
an orientation for each point. Before aggregating information from the
neighbors of a point, the network transforms the neighbors' coordinates based
on the point's learned orientation. We provide formal proofs to show the
equivariance
of the proposed framework. Empirically, we demonstrate that our proposed method
is competitive on both point cloud analysis and physical modeling tasks. Code
is available at https://github.com/luost26/Equivariant-OrientedMP .
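To make the core idea concrete, here is a minimal, hedged sketch in plain NumPy (not the code released at the repository above): attach an orientation, i.e. a local rotation frame, to each point, express its neighbors' relative coordinates in that frame, and only then aggregate. For simplicity the frame below is constructed geometrically rather than learned, and the names `local_frame` and `oriented_message_passing` are illustrative assumptions.

```python
# Minimal sketch of oriented message passing (NumPy only): per-point frames
# make aggregated neighbor features independent of the cloud's global pose.
import numpy as np

def local_frame(x_i, neighbors):
    """Build a 3x3 orthonormal frame for point x_i from its neighborhood.

    The paper *learns* this orientation; here it is derived geometrically
    (mean offset + Gram-Schmidt) just to keep the sketch self-contained.
    """
    offsets = neighbors - x_i                         # (k, 3) relative positions
    a = offsets.mean(axis=0)
    a /= np.linalg.norm(a) + 1e-8                     # first axis
    b = offsets[np.argmax(np.linalg.norm(offsets, axis=1))]
    b = b - (b @ a) * a                               # remove the component along a
    b /= np.linalg.norm(b) + 1e-8                     # second axis
    return np.stack([a, b, np.cross(a, b)])           # rows are the frame axes

def oriented_message_passing(points, k=8):
    """Express each point's neighbor offsets in its own frame, then pool them."""
    feats = []
    for i in range(len(points)):
        d = np.linalg.norm(points - points[i], axis=1)
        idx = np.argsort(d)[1:k + 1]                  # k nearest neighbors
        frame = local_frame(points[i], points[idx])
        local = (points[idx] - points[i]) @ frame.T   # coordinates in the local frame
        feats.append(local.mean(axis=0))              # permutation-invariant pooling
    return np.array(feats)

# Sanity check: the pooled features do not change under a random global rotation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(32, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:                              # make Q a proper rotation
    Q[:, 0] *= -1
print(np.allclose(oriented_message_passing(pts),
                  oriented_message_passing(pts @ Q.T)))  # expected: True
```

Because the neighbor offsets and the frame rotate together, the per-point local coordinates (and anything pooled from them) are unchanged by a global rotation or translation, which is the decoupling of relative positions from global pose described above; an equivariant output can then be recovered by mapping predictions back through each point's frame.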
Related papers
- GeoMAE: Masked Geometric Target Prediction for Self-supervised Point
Cloud Pre-Training [16.825524577372473]
We introduce a point cloud representation learning framework, based on geometric feature reconstruction.
We identify three self-supervised learning objectives peculiar to point clouds, namely centroid prediction, normal estimation, and curvature prediction.
Our pipeline is conceptually simple and consists of two major steps: it first randomly masks out groups of points, and then applies a Transformer-based point cloud encoder.
arXiv Detail & Related papers (2023-05-15T17:14:55Z)
- A Simple Strategy to Provable Invariance via Orbit Mapping [14.127786615513978]
We propose a method to make network architectures provably invariant with respect to group actions.
In a nutshell, we intend to 'undo' any possible transformation before feeding the data into the actual network.
arXiv Detail & Related papers (2022-09-24T03:40:42Z)
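The orbit-mapping entry above boils down to canonicalization: map every input to a fixed representative of its transformation orbit, so that any network applied afterwards is invariant by construction. Below is a hedged sketch of that general idea using PCA alignment, not the paper's own construction; the name `canonicalize` and the skewness-based sign rule are illustrative assumptions.

```python
# Illustrative canonicalization sketch: "undo" rotations/translations up front.
import numpy as np

def canonicalize(points):
    """Map a point cloud to a canonical pose within its rotation/translation orbit.

    Centering removes the translation; aligning with the principal axes removes
    the rotation, and a skewness-based rule fixes the arbitrary eigenvector signs.
    """
    centered = points - points.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centered.T @ centered)
    eigvecs = eigvecs[:, ::-1]                          # largest variance first
    proj = centered @ eigvecs                           # coordinates along the axes
    s = np.sign((proj ** 3).sum(axis=0))                # resolve the sign of each axis
    s[s == 0] = 1.0
    return proj * s

# Sanity check: a rotated and shifted copy canonicalizes to the same pose.
rng = np.random.default_rng(1)
pts = rng.normal(size=(64, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:                                # keep Q a proper rotation
    Q[:, 0] *= -1
print(np.allclose(canonicalize(pts), canonicalize(pts @ Q.T + 0.5)))  # expected: True
```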
- GFNet: Geometric Flow Network for 3D Point Cloud Semantic Segmentation [91.15865862160088]
We introduce a geometric flow network (GFNet) to explore the geometric correspondence between different views in an align-before-fuse manner.
Specifically, we devise a novel geometric flow module (GFM) to bidirectionally align and propagate the complementary information across different views.
arXiv Detail & Related papers (2022-07-06T11:48:08Z)
- SemAffiNet: Semantic-Affine Transformation for Point Cloud Segmentation [94.11915008006483]
We propose SemAffiNet for point cloud semantic segmentation.
We conduct extensive experiments on the ScanNetV2 and NYUv2 datasets.
arXiv Detail & Related papers (2022-05-26T17:00:23Z)
- SE(3)-Equivariant Attention Networks for Shape Reconstruction in Function Space [50.14426188851305]
We propose the first SE(3)-equivariant coordinate-based network for learning occupancy fields from point clouds.
In contrast to previous shape reconstruction methods that align the input to a regular grid, we operate directly on the irregular, unoriented point cloud.
We show that our method outperforms previous SO(3)-equivariant methods, as well as non-equivariant methods trained on SO(3)-augmented datasets.
arXiv Detail & Related papers (2022-04-05T17:59:15Z)
- PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features [91.2054994193218]
We propose a point-set learning framework, PRIN, focusing on rotation-invariant feature extraction in point cloud analysis.
In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds.
Results show that, on the dataset with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation.
arXiv Detail & Related papers (2021-02-24T06:44:09Z)
- Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis [107.9979381402172]
We propose a rotation-invariant deep network for point cloud analysis.
The network is hierarchical and relies on two modules: a positional feature embedding block and a relational feature embedding block.
Experiments show state-of-the-art classification and segmentation performances on benchmark datasets.
arXiv Detail & Related papers (2020-11-18T04:16:51Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
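As a hedged illustration of the last entry's premise, replacing Cartesian inputs with a purely rotation-invariant representation, the sketch below builds input features from distances and angles measured relative to the cloud's centroid; the specific feature choice and the name `rotation_invariant_inputs` are assumptions for illustration, not the paper's actual representation.

```python
# Illustrative rotation-invariant inputs: distances and angles instead of raw XYZ.
import numpy as np

def rotation_invariant_inputs(points, k=8):
    """For each point p and neighbor q, emit [|p-q|, |p-c|, |q-c|, angle(p-c, q-c)],
    where c is the centroid. All four values survive any global rotation/translation."""
    c = points.mean(axis=0)
    feats = []
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        for q in points[np.argsort(d)[1:k + 1]]:       # k nearest neighbors of p
            u, v = p - c, q - c
            cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
            feats.append([np.linalg.norm(p - q), np.linalg.norm(u),
                          np.linalg.norm(v), np.arccos(np.clip(cos, -1.0, 1.0))])
    return np.array(feats)                             # shape (n * k, 4)

# Sanity check: a rotated and shifted copy yields identical input features.
rng = np.random.default_rng(2)
pts = rng.normal(size=(32, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))           # any orthogonal map works here
print(np.allclose(rotation_invariant_inputs(pts),
                  rotation_invariant_inputs(pts @ Q.T + 1.0)))  # expected: True
```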