Attentive Rotation Invariant Convolution for Point Cloud-based Large
Scale Place Recognition
- URL: http://arxiv.org/abs/2108.12790v1
- Date: Sun, 29 Aug 2021 09:10:56 GMT
- Title: Attentive Rotation Invariant Convolution for Point Cloud-based Large
Scale Place Recognition
- Authors: Zhaoxin Fan, Zhenbo Song, Wenping Zhang, Hongyan Liu, Jun He, and
Xiaoyong Du
- Abstract summary: We propose an Attentive Rotation Invariant Convolution (ARIConv) in this paper.
We experimentally demonstrate that our model achieves state-of-the-art performance on the large scale place recognition task when the point cloud scans are rotated.
- Score: 11.433270318356675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous Driving and Simultaneous Localization and Mapping (SLAM) are becoming increasingly important in the real world, and point cloud-based large scale place recognition is a cornerstone of both. Previous place recognition methods have achieved acceptable performance by treating the task as a point cloud retrieval problem. However, all of them suffer from a common defect: they cannot handle the situation when the point clouds are rotated, which is common, e.g., when viewpoints or motorcycle types are changed. To tackle this issue, we propose an Attentive Rotation Invariant Convolution (ARIConv) in this paper. ARIConv adopts three kinds of Rotation Invariant Features (RIFs) in its structure: Spherical Signals (SS), Individual-Local Rotation Invariant Features (ILRIF) and Group-Local Rotation Invariant Features (GLRIF). From these it learns rotation invariant convolutional kernels, which are robust for learning rotation invariant point cloud features. Moreover, to highlight pivotal RIFs, we inject an attentive module into ARIConv that assigns different importance to different RIFs when learning kernels. Finally, using ARIConv, we build a DenseNet-like network architecture to learn rotation-insensitive global descriptors used for retrieval. We experimentally demonstrate that our model achieves state-of-the-art performance on the large scale place recognition task when the point cloud scans are rotated, and achieves results comparable to most existing methods on the original non-rotated datasets.
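To make the invariance idea concrete, here is a minimal Python sketch, not the paper's ARIConv implementation: it builds two simple rotation-invariant features for a local neighborhood (distance to the centroid and angle to the farthest neighbor) and reweights feature channels with a small learned attention vector, loosely mirroring how the attentive module weights SS, ILRIF and GLRIF. All function and class names are hypothetical.

```python
import torch
import torch.nn.functional as F

def rotation_invariant_features(neighborhood):
    """Two simple rotation-invariant features (RIFs) for a local
    neighborhood of shape (k, 3): distance to the centroid, and the
    cosine of the angle to the farthest neighbor. Distances and angles
    are unchanged by any rigid rotation of the whole neighborhood.
    Illustrative stand-in, not the paper's SS/ILRIF/GLRIF."""
    centroid = neighborhood.mean(dim=0)
    offsets = neighborhood - centroid                   # (k, 3)
    dists = offsets.norm(dim=1)                         # |p_i - c| is invariant
    ref = offsets[dists.argmax()]                       # direction to farthest point
    cosines = F.cosine_similarity(offsets, ref.expand_as(offsets), dim=1)
    return torch.stack([dists, cosines], dim=1)         # (k, 2)

class AttentiveRIF(torch.nn.Module):
    """Reweights RIF channels with a learned attention vector, loosely
    mirroring ARIConv's attentive module (hypothetical sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.score = torch.nn.Linear(channels, channels)

    def forward(self, rifs):                            # (k, C)
        weights = torch.softmax(self.score(rifs.mean(dim=0)), dim=0)
        return rifs * weights                           # emphasize pivotal RIFs

# Sanity check: the features match before and after a random rotation.
pts = torch.randn(16, 3)
q, _ = torch.linalg.qr(torch.randn(3, 3))               # random orthogonal matrix
rot = q * torch.sign(torch.linalg.det(q))               # force det = +1 (3x3 case)
f_orig = rotation_invariant_features(pts)
f_rot = rotation_invariant_features(pts @ rot.T)
print(torch.allclose(f_orig, f_rot, atol=1e-5))         # True
```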
Related papers
- PARE-Net: Position-Aware Rotation-Equivariant Networks for Robust Point Cloud Registration [8.668461141536383]
Learning rotation-invariant distinctive features is a fundamental requirement for point cloud registration.
Existing methods often use rotation-sensitive networks to extract features, while employing rotation augmentation to learn an approximate invariant mapping crudely.
We propose a novel position-aware rotation-equivariant network for efficient, lightweight, and robust registration.
arXiv Detail & Related papers (2024-07-14T10:26:38Z)
- Rotation-Invariant Transformer for Point Cloud Matching [42.5714375149213]
We introduce RoITr, a Rotation-Invariant Transformer to cope with the pose variations in the point cloud matching task.
We propose a global transformer with rotation-invariant cross-frame spatial awareness learned by the self-attention mechanism.
RoITr surpasses existing methods by at least 13 and 5 percentage points in Inlier Ratio and Registration Recall, respectively.
arXiv Detail & Related papers (2023-03-14T20:55:27Z)
- Adaptive Rotated Convolution for Rotated Object Detection [96.94590550217718]
We present an Adaptive Rotated Convolution (ARC) module to handle the rotated object detection problem.
In our ARC module, the convolution kernels rotate adaptively to extract object features with varying orientations in different images; see the kernel-rotation sketch after this list.
The proposed approach achieves state-of-the-art performance on the DOTA dataset with 81.77% mAP.
arXiv Detail & Related papers (2023-03-14T11:53:12Z)
- CRIN: Rotation-Invariant Point Cloud Analysis and Rotation Estimation via Centrifugal Reference Frame [60.24797081117877]
We propose CRIN, the Centrifugal Rotation-Invariant Network.
CRIN directly takes the coordinates of points as input and transforms local points into rotation-invariant representations.
A continuous distribution for 3D rotations based on points is introduced.
arXiv Detail & Related papers (2023-03-06T13:14:10Z)
- Rethinking Rotation Invariance with Point Cloud Registration [18.829454172955202]
We propose an effective framework for rotation invariance learning via three sequential stages, namely rotation-invariant shape encoding, aligned feature integration, and deep feature registration.
Experimental results on 3D shape classification, part segmentation, and retrieval tasks prove the feasibility of our work.
arXiv Detail & Related papers (2022-12-31T08:17:09Z)
- ReF -- Rotation Equivariant Features for Local Feature Matching [30.459559206664427]
We propose an alternative, complementary approach that centers on inducing bias in the model architecture itself to generate 'rotation-specific' features.
We demonstrate that this high-performance, rotation-specific coverage from steerable CNNs can be expanded to all rotation angles.
We present a detailed analysis of the performance effects of ensembling, robust estimation, network architecture variations, and the use of rotation priors.
arXiv Detail & Related papers (2022-03-10T07:36:09Z)
- ART-Point: Improving Rotation Robustness of Point Cloud Classifiers via Adversarial Rotation [89.47574181669903]
In this study, we show that the rotation robustness of point cloud classifiers can also be acquired via adversarial training.
Specifically, our proposed framework named ART-Point regards the rotation of the point cloud as an attack.
We propose a fast one-step optimization to efficiently reach the final robust model; a generic adversarial-rotation training sketch appears after this list.
arXiv Detail & Related papers (2022-03-08T07:20:16Z)
- RIConv++: Effective Rotation Invariant Convolutions for 3D Point Clouds Deep Learning [32.18566879365623]
Deep learning on 3D point clouds is a promising field of research that allows a neural network to learn features of point clouds directly.
We propose a simple yet effective convolution operator that enhances feature distinction by designing powerful rotation invariant features from the local regions.
Our network architecture can capture both local and global context by simply tuning the neighborhood size in each convolution layer; see the point-convolution sketch after this list.
arXiv Detail & Related papers (2022-02-26T08:32:44Z)
- PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features [91.2054994193218]
We propose a point-set learning framework, PRIN, focusing on rotation invariant feature extraction in point cloud analysis.
In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds.
Results show that, on the dataset with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation.
arXiv Detail & Related papers (2021-02-24T06:44:09Z)
- Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis [107.9979381402172]
We propose a rotation-invariant deep network for point cloud analysis.
The network is hierarchical and relies on two modules: a positional feature embedding block and a relational feature embedding block.
Experiments show state-of-the-art classification and segmentation performances on benchmark datasets.
arXiv Detail & Related papers (2020-11-18T04:16:51Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
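As a rough illustration of the kernel-rotation idea in the Adaptive Rotated Convolution entry above (not the authors' ARC module), the sketch below rotates a 2D convolution kernel by a given angle with a bilinear resampling grid; in the paper the angle is predicted from the input, so the fixed value here is a placeholder.

```python
import torch
import torch.nn.functional as F

def rotate_kernel(kernel, angle):
    """Rotate a conv kernel of shape (out_c, in_c, k, k) by `angle`
    radians around its center via bilinear resampling. A generic sketch
    of kernel rotation, not the ARC paper's module."""
    out_c, in_c, k, _ = kernel.shape
    cos, sin = torch.cos(angle), torch.sin(angle)
    zero = torch.zeros(())
    theta = torch.stack([torch.stack([cos, -sin, zero]),
                         torch.stack([sin,  cos, zero])])   # (2, 3) rotation
    theta = theta.unsqueeze(0).expand(out_c, 2, 3)
    grid = F.affine_grid(theta, (out_c, in_c, k, k), align_corners=False)
    return F.grid_sample(kernel, grid, align_corners=False)

# Hypothetical usage: the angle would be predicted per image by a small
# routing network; here it is a fixed placeholder value.
x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)                                 # (out_c, in_c, 3, 3)
angle = torch.tensor(0.4)
y = F.conv2d(x, rotate_kernel(w, angle), padding=1)
print(y.shape)                                              # [1, 8, 32, 32]
```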
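The adversarial-rotation recipe from the ART-Point entry can be sketched generically: among a set of candidate rotations, pick the one that maximizes the classifier's loss, then train on that rotated cloud. This is a hedged illustration of the general idea, not ART-Point's fast one-step optimization; the model and data below are placeholders.

```python
import torch
import torch.nn.functional as F

def rotation_z(angle):
    """3x3 rotation matrix about the z-axis."""
    c, s = torch.cos(angle), torch.sin(angle)
    zero, one = torch.zeros(()), torch.ones(())
    return torch.stack([torch.stack([c, -s, zero]),
                        torch.stack([s,  c, zero]),
                        torch.stack([zero, zero, one])])

def adversarial_rotation_step(model, points, labels, optimizer, num_angles=8):
    """One training step: rotate the cloud by each candidate angle, keep
    the rotation that maximizes the loss (the 'attack'), then train on it.
    A generic adversarial-training recipe, not ART-Point's fast one-step
    optimization. points: (B, N, 3), labels: (B,)."""
    angles = torch.linspace(0.0, 2 * torch.pi, num_angles + 1)[:-1]
    with torch.no_grad():
        losses = torch.stack([F.cross_entropy(model(points @ rotation_z(a).T),
                                              labels) for a in angles])
    worst = angles[losses.argmax()]                      # the strongest attack
    optimizer.zero_grad()
    loss = F.cross_entropy(model(points @ rotation_z(worst).T), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with a toy classifier (placeholder for a point network).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 3, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
print(adversarial_rotation_step(model, torch.randn(2, 64, 3),
                                torch.randint(0, 10, (2,)), opt))
```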
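Finally, the neighborhood-size knob mentioned in the RIConv++ entry is easy to picture with a generic point convolution (not the RIConv++ operator): gather each point's k nearest neighbors, apply a shared MLP per neighbor, and pool over the neighborhood; growing k widens the captured context from local toward global.

```python
import torch

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point. points: (N, 3)."""
    dists = torch.cdist(points, points)           # (N, N) pairwise distances
    return dists.topk(k, largest=False).indices   # (N, k), includes self

class PointConv(torch.nn.Module):
    """Generic point convolution: gather each point's k-neighborhood,
    apply a shared MLP per neighbor, and max-pool over the neighborhood.
    The neighborhood size k moves the captured context from local
    (small k) toward global (k near N). Sketch only, not RIConv++."""
    def __init__(self, in_dim, out_dim, k):
        super().__init__()
        self.k = k
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(in_dim, out_dim), torch.nn.ReLU(),
            torch.nn.Linear(out_dim, out_dim))

    def forward(self, points, feats):             # (N, 3), (N, C)
        idx = knn_indices(points, self.k)         # (N, k)
        neighbors = feats[idx]                    # (N, k, C) gathered features
        return self.mlp(neighbors).max(dim=1).values  # (N, out_dim)

# Hypothetical usage: raw coordinates as input features; a rotation
# invariant variant would feed RIFs here instead of xyz.
pts = torch.randn(128, 3)
conv = PointConv(in_dim=3, out_dim=32, k=16)      # tune k for receptive field
print(conv(pts, pts).shape)                       # torch.Size([128, 32])
```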