SPE-Net: Boosting Point Cloud Analysis via Rotation Robustness
Enhancement
- URL: http://arxiv.org/abs/2211.08250v1
- Date: Tue, 15 Nov 2022 15:59:09 GMT
- Title: SPE-Net: Boosting Point Cloud Analysis via Rotation Robustness
Enhancement
- Authors: Zhaofan Qiu and Yehao Li and Yu Wang and Yingwei Pan and Ting Yao and
Tao Mei
- Abstract summary: We propose a novel deep architecture tailored for 3D point cloud applications, named SPE-Net.
The embedded ``Selective Position Encoding (SPE)'' procedure relies on an attention mechanism that can effectively attend to the underlying rotation condition of the input.
We demonstrate the merits of SPE-Net and the associated hypothesis on four benchmarks, showing clear improvements on both rotated and unrotated test data over SOTA methods.
- Score: 118.20816888815658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel deep architecture tailored for 3D point
cloud applications, named SPE-Net. The embedded ``Selective Position
Encoding (SPE)'' procedure relies on an attention mechanism that can
effectively attend to the underlying rotation condition of the input. The
encoded rotation condition then determines which part of the network parameters
to focus on, and is shown to efficiently help reduce the degrees of freedom
of the optimization during training. This mechanism can therefore better
leverage rotation augmentations through reduced training difficulty,
making SPE-Net robust against rotated data during both training and testing.
The new findings in our paper also urge us to rethink the relationship between
the extracted rotation information and the actual test accuracy. Intriguingly,
we reveal evidence that, by locally encoding the rotation information through
SPE-Net, rotation-invariant features are still of critical importance in
benefiting test samples without any actual global rotation. We empirically
demonstrate the merits of SPE-Net and the associated hypothesis on four
benchmarks, showing clear improvements on both rotated and unrotated test
data over SOTA methods. Source code is available at
https://github.com/ZhaofanQiu/SPE-Net.
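To make the mechanism concrete, a minimal sketch of the selective-position-encoding idea follows: several position-encoding branches are mixed by attention weights predicted from the input itself, so the network can emphasize different encodings depending on the inferred rotation condition. The branch count, layer sizes, and mean pooling are illustrative assumptions, not the released SPE-Net implementation (see the repository above for the actual code).

    # Hypothetical sketch of selective position encoding: branch encodings
    # mixed by input-dependent attention weights. Not the authors' code.
    import torch
    import torch.nn as nn

    class SelectivePositionEncoding(nn.Module):
        def __init__(self, dim=64, num_branches=3):
            super().__init__()
            # Each branch encodes relative positions differently (e.g. raw
            # offsets vs. rotation-invariant scalars in the actual model).
            self.branches = nn.ModuleList(
                nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
                for _ in range(num_branches))
            # Attention head predicting per-cloud weights over the branches.
            self.gate = nn.Sequential(nn.Linear(3, dim), nn.ReLU(),
                                      nn.Linear(dim, num_branches))

        def forward(self, rel_pos):                      # rel_pos: (B, N, 3)
            weights = torch.softmax(self.gate(rel_pos.mean(dim=1)), dim=-1)
            encodings = torch.stack([b(rel_pos) for b in self.branches], dim=-1)
            return (encodings * weights[:, None, None, :]).sum(dim=-1)  # (B, N, dim)

    spe = SelectivePositionEncoding()
    print(spe(torch.randn(2, 1024, 3)).shape)            # torch.Size([2, 1024, 64])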
Related papers
- Rotation Perturbation Robustness in Point Cloud Analysis: A Perspective of Manifold Distillation [10.14368825342757]
This paper remodels the point cloud from a manifold perspective and designs a manifold distillation method to achieve robustness to rotation perturbation.
Experiments carried out on four different datasets verify the effectiveness of our method.
arXiv Detail & Related papers (2024-11-04T02:13:41Z)
- RIDE: Boosting 3D Object Detection for LiDAR Point Clouds via Rotation-Invariant Analysis [15.42293045246587]
RIDE is a pioneering exploration of Rotation-Invariance for the 3D LiDAR-point-based object DEtector.
We design a bi-feature extractor that extracts (i) object-aware features, which are sensitive to rotation but preserve geometry well, and (ii) rotation-invariant features, which lose geometric information to a certain extent but are robust to rotation.
RIDE is compatible with and easy to plug into existing one-stage and two-stage 3D detectors, boosting both detection performance and rotation robustness.
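As a rough illustration of this bi-feature split, the sketch below pairs raw, rotation-sensitive coordinates with a few rotation-invariant scalars; the specific invariants chosen here are an assumption, not RIDE's published extractor.

    # Hypothetical bi-feature split: rotation-sensitive raw geometry plus
    # rotation-invariant scalars (norms), as a toy stand-in for RIDE's design.
    import torch

    def bi_features(points):                     # points: (B, N, 3)
        sensitive = points                       # keeps full geometry, not rotation-robust
        offsets = points - points.mean(dim=1, keepdim=True)
        invariant = torch.stack([offsets.norm(dim=-1),   # distance to centroid
                                 points.norm(dim=-1)],   # distance to origin
                                dim=-1)          # (B, N, 2), unchanged by rotations about the origin
        return sensitive, invariant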
arXiv Detail & Related papers (2024-08-28T08:53:33Z)
- PARE-Net: Position-Aware Rotation-Equivariant Networks for Robust Point Cloud Registration [8.668461141536383]
Learning rotation-invariant distinctive features is a fundamental requirement for point cloud registration.
Existing methods often use rotation-sensitive networks to extract features, relying on rotation augmentation to crudely learn an approximately invariant mapping.
We propose a novel position-aware rotation-equivariant network for efficient, lightweight, and robust registration.
arXiv Detail & Related papers (2024-07-14T10:26:38Z)
- ART-Point: Improving Rotation Robustness of Point Cloud Classifiers via Adversarial Rotation [89.47574181669903]
In this study, we show that the rotation robustness of point cloud classifiers can also be acquired via adversarial training.
Specifically, our proposed framework named ART-Point regards the rotation of the point cloud as an attack.
We propose a fast one-step optimization to efficiently reach the final robust model.
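A minimal sketch of that idea follows: treat a rotation angle as the attack variable, take one gradient-ascent step on it, and train on the resulting rotated cloud. Restricting the attack to a z-axis angle and using an FGSM-style sign step are simplifying assumptions, not ART-Point's exact procedure.

    # Hypothetical one-step adversarial rotation: maximize the loss w.r.t. a
    # single rotation angle, then return the worst-case rotated input.
    import torch
    import torch.nn.functional as F

    def rotate_z(points, theta):                 # points: (B, N, 3), theta: 0-dim tensor
        c, s = torch.cos(theta), torch.sin(theta)
        zero, one = torch.zeros_like(c), torch.ones_like(c)
        R = torch.stack([torch.stack([c, -s, zero]),
                         torch.stack([s, c, zero]),
                         torch.stack([zero, zero, one])])   # (3, 3)
        return points @ R.T

    def adversarial_rotation(model, points, labels, step=0.1):
        theta = torch.zeros((), requires_grad=True)
        loss = F.cross_entropy(model(rotate_z(points, theta)), labels)
        grad, = torch.autograd.grad(loss, theta)
        theta_adv = (theta + step * grad.sign()).detach()   # one ascent step
        return rotate_z(points, theta_adv)                  # rotated cloud for adversarial training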
arXiv Detail & Related papers (2022-03-08T07:20:16Z)
- Functional Regularization for Reinforcement Learning via Learned Fourier Features [98.90474131452588]
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis.
We show that it improves the sample efficiency of both state-based and image-based RL.
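A minimal version of such a layer is sketched below: the input is projected through a trainable frequency matrix and mapped to sines and cosines. The initialization scale and feature count are assumptions, not the paper's settings.

    # Learned Fourier feature embedding: trainable frequencies and phases.
    import torch
    import torch.nn as nn

    class LearnedFourierFeatures(nn.Module):
        def __init__(self, in_dim, num_features, init_scale=1.0):
            super().__init__()
            self.freq = nn.Parameter(init_scale * torch.randn(in_dim, num_features))
            self.phase = nn.Parameter(torch.zeros(num_features))

        def forward(self, x):                    # x: (B, in_dim)
            proj = x @ self.freq + self.phase    # (B, num_features)
            return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

    lff = LearnedFourierFeatures(in_dim=8, num_features=64)
    print(lff(torch.randn(4, 8)).shape)          # torch.Size([4, 128])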
arXiv Detail & Related papers (2021-12-06T18:59:52Z)
- PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features [91.2054994193218]
We propose a point-set learning framework, PRIN, focusing on rotation-invariant feature extraction in point cloud analysis.
In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds.
Results show that, on the dataset with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation.
arXiv Detail & Related papers (2021-02-24T06:44:09Z)
- Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module which can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks.
We will release our code and pre-trained models for further research.
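As a sketch of learned canonicalization in this spirit, the module below predicts a rotation from the cloud via the common 6D-plus-Gram-Schmidt parameterization (an assumption here, not necessarily ART's design) and re-expresses the points in that predicted frame.

    # Hypothetical alignment module: predict a rotation and express the cloud
    # in the predicted canonical frame before the downstream network.
    import torch
    import torch.nn as nn

    class LearnedAlignment(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.head = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 6))

        def forward(self, points):               # points: (B, N, 3)
            a, b = self.head(points.mean(dim=1)).split(3, dim=-1)
            u = nn.functional.normalize(a, dim=-1)
            v = nn.functional.normalize(b - (u * b).sum(-1, keepdim=True) * u, dim=-1)
            w = torch.cross(u, v, dim=-1)        # Gram-Schmidt -> orthonormal frame
            R = torch.stack([u, v, w], dim=-1)   # (B, 3, 3), det = +1
            return points @ R                    # cloud in the predicted canonical frame

    print(LearnedAlignment()(torch.randn(2, 256, 3)).shape)  # torch.Size([2, 256, 3])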
arXiv Detail & Related papers (2021-02-01T20:58:45Z)
- Learning Rotation-Invariant Representations of Point Clouds Using Aligned Edge Convolutional Neural Networks [29.3830445533532]
Point cloud analysis is an area of increasing interest due to the development of 3D sensors that can rapidly and accurately measure the depth of scenes.
Applying deep learning techniques to point cloud analysis is non-trivial because these methods struggle to generalize to unseen rotations.
To address this limitation, one usually has to augment the training data, which incurs extra computation and requires larger model capacity.
This paper proposes a new neural network called the Aligned Edge Convolutional Neural Network (AECNN) that learns a feature representation of point clouds relative to Local Reference Frames (LRFs).
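The entry does not spell out how the LRF is built; a common construction, shown below purely as an illustration rather than AECNN's exact recipe, takes the principal axes of a point's neighborhood so that coordinates expressed in that frame are unchanged (up to axis-sign ambiguity) under a global rotation.

    # Generic PCA-based Local Reference Frame (illustrative, not AECNN's exact LRF).
    import numpy as np

    def coords_in_lrf(neighbors):                # neighbors: (K, 3) around a query point
        centered = neighbors - neighbors.mean(axis=0)
        cov = centered.T @ centered / len(neighbors)
        _, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
        frame = eigvecs[:, ::-1]                 # principal axes as columns
        return centered @ frame                  # coordinates relative to the LRF

    pts = np.random.randn(32, 3)
    R, _ = np.linalg.qr(np.random.randn(3, 3))   # random orthogonal transform
    a, b = coords_in_lrf(pts), coords_in_lrf(pts @ R.T)
    print(np.allclose(np.abs(a), np.abs(b), atol=1e-5))  # True, up to per-axis sign flips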
arXiv Detail & Related papers (2021-01-02T17:36:00Z)
- A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty [33.627068152037815]
We present a novel symmetric matrix representation of the 3D rotation group, SO(3), with two important properties that make it particularly suitable for learned models.
We empirically validate the benefits of our formulation by training deep neural rotation regressors on two data modalities.
This capability to express uncertainty is key for safety-critical applications, where detecting novel inputs can prevent catastrophic failure of learned models.
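To our reading, a rotation can be decoded from such a symmetric-matrix representation by taking the unit eigenvector associated with its smallest eigenvalue as a quaternion; the sketch below shows that recovery step, with the ten-parameter head being an assumption. See the paper for the actual formulation and its uncertainty interpretation.

    # Sketch: symmetric 4x4 matrix -> quaternion via the smallest-eigenvalue eigenvector.
    import torch

    def symmetric_from_params(theta):            # theta: (B, 10) free parameters
        A = theta.new_zeros(theta.shape[0], 4, 4)
        idx = torch.triu_indices(4, 4)
        A[:, idx[0], idx[1]] = theta
        return A + A.transpose(1, 2) - torch.diag_embed(A.diagonal(dim1=1, dim2=2))

    def quaternion_from_symmetric(A):            # A: (B, 4, 4) symmetric
        _, eigvecs = torch.linalg.eigh(A)        # ascending eigenvalues
        q = eigvecs[..., 0]                      # eigenvector of the smallest eigenvalue
        return q / q.norm(dim=-1, keepdim=True)  # unit quaternion (sign ambiguity remains)

    A = symmetric_from_params(torch.randn(2, 10))
    print(quaternion_from_symmetric(A).shape)    # torch.Size([2, 4])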
arXiv Detail & Related papers (2020-06-01T15:57:45Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
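One way to build such a purely rotation-invariant low-level input is from distances and angles among a point, a neighbor, and a reference such as the cloud centroid; the particular scalars below are an illustrative assumption, not the paper's exact representation.

    # Illustrative rotation-invariant replacement for raw xyz inputs.
    import numpy as np

    def invariant_pair_features(p, q, centroid):
        dp, dq, dpq = p - centroid, q - centroid, q - p
        cos_angle = dp @ dq / (np.linalg.norm(dp) * np.linalg.norm(dq) + 1e-9)
        return np.array([np.linalg.norm(dp),     # point-to-centroid distance
                         np.linalg.norm(dq),     # neighbor-to-centroid distance
                         np.linalg.norm(dpq),    # point-to-neighbor distance
                         cos_angle])             # angle at the centroid

    pts = np.random.randn(100, 3)
    R, _ = np.linalg.qr(np.random.randn(3, 3))   # random orthogonal transform
    f1 = invariant_pair_features(pts[0], pts[1], pts.mean(axis=0))
    f2 = invariant_pair_features(pts[0] @ R.T, pts[1] @ R.T, (pts @ R.T).mean(axis=0))
    print(np.allclose(f1, f2))                   # True: the features ignore the rotation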
arXiv Detail & Related papers (2020-03-16T14:04:45Z)