FA-KPConv: Introducing Euclidean Symmetries to KPConv via Frame Averaging
- URL: http://arxiv.org/abs/2505.04485v2
- Date: Thu, 08 May 2025 06:43:49 GMT
- Title: FA-KPConv: Introducing Euclidean Symmetries to KPConv via Frame Averaging
- Authors: Ali Alawieh, Alexandru P. Condurache
- Abstract summary: We present Frame-Averaging Kernel-Point Convolution (FA-KPConv), a neural network architecture built on top of the well-known KPConv. FA-KPConv embeds geometrical prior knowledge into it while preserving the number of learnable parameters and without compromising any input information.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Frame-Averaging Kernel-Point Convolution (FA-KPConv), a neural network architecture built on top of the well-known KPConv, a widely adopted backbone for 3D point cloud analysis. Even though invariance and/or equivariance to Euclidean transformations is required for many common tasks, KPConv-based networks can achieve such properties only approximately, even when trained on large datasets or with significant data augmentation. Using Frame Averaging, we can flexibly customize point cloud neural networks built with KPConv layers, making them exactly invariant and/or equivariant to translations, rotations and/or reflections of the input point clouds. By simply wrapping around an existing KPConv-based network, FA-KPConv embeds geometrical prior knowledge into it while preserving the number of learnable parameters and without compromising any input information. We showcase the benefit of this introduced bias for point cloud classification and point cloud registration, especially in challenging cases such as scarce training data or randomly rotated test data.
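The wrapping idea from the abstract can be illustrated with a minimal Frame Averaging sketch: build a small set of candidate frames from the input (here via PCA, the common choice for Euclidean symmetries), canonicalize the input with each frame, and average the backbone's predictions. The function names `pca_frames` and `invariant_forward` are illustrative, not the paper's API, and this sketch assumes non-degenerate principal axes.

```python
import numpy as np

def pca_frames(points):
    """Candidate frames for a point cloud via PCA.
    The sign ambiguity of eigenvectors gives 2^3 = 8 sign flips;
    keeping only determinant +1 leaves 4 rotations in SO(3)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, eigvecs = np.linalg.eigh(centered.T @ centered)  # columns = principal axes
    frames = []
    for sx in (1.0, -1.0):
        for sy in (1.0, -1.0):
            for sz in (1.0, -1.0):
                R = eigvecs * np.array([sx, sy, sz])  # flip column signs
                if np.linalg.det(R) > 0:              # restrict to rotations
                    frames.append(R)
    return centroid, frames

def invariant_forward(f, points):
    """Wrap an arbitrary backbone f so its output is exactly invariant
    to rotations and translations of the input: canonicalize the cloud
    with every frame and average the predictions."""
    centroid, frames = pca_frames(points)
    outputs = [f((points - centroid) @ R) for R in frames]
    return np.mean(outputs, axis=0)
```

Because the frame set transforms along with the input, the average over frames is exactly invariant, without touching the backbone's parameters, which matches the paper's claim of preserving the number of learnable parameters.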
Related papers
- Fully-Geometric Cross-Attention for Point Cloud Registration [51.865371511201765]
Point cloud registration approaches often fail when the overlap between point clouds is low due to noisy point correspondences. This work introduces a novel cross-attention mechanism tailored for Transformer-based architectures that tackles this problem. We integrate the Gromov-Wasserstein distance into the cross-attention formulation to jointly compute distances between points across different point clouds. At the point level, we also devise a self-attention mechanism that aggregates the local geometric structure information into point features for fine matching.
arXiv Detail & Related papers (2025-02-12T10:44:36Z) - PointRWKV: Efficient RWKV-Like Model for Hierarchical Point Cloud Learning [56.14518823931901]
We present PointRWKV, a model of linear complexity derived from the RWKV model in the NLP field.
We first propose to explore the global processing capabilities within PointRWKV blocks using modified multi-headed matrix-valued states.
To extract local geometric features simultaneously, we design a parallel branch to encode the point cloud efficiently in a fixed radius near-neighbors graph with a graph stabilizer.
arXiv Detail & Related papers (2024-05-24T05:02:51Z) - SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation [65.4396959244269]
The paper tackles the challenge by designing a general framework to construct 3D learning architectures.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a good trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z) - PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating k-nearest-neighbor (kNN) searches.
The proposed framework, namely PointAttN, is simple, neat and effective, which can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z) - Training or Architecture? How to Incorporate Invariance in Neural Networks [14.162739081163444]
We propose a method for provably invariant network architectures with respect to group actions.
In a nutshell, we intend to 'undo' any possible transformation before feeding the data into the actual network.
We analyze properties of such approaches, extend them to equivariant networks, and demonstrate their advantages in terms of robustness as well as computational efficiency in several numerical examples.
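The "undo any possible transformation before feeding the data into the actual network" idea can be sketched as a simple canonicalization step: centering removes translation, aligning to PCA axes with a deterministic sign convention removes rotation, so any downstream network becomes invariant by construction. This is a minimal illustration of the general approach, not the cited paper's exact method; `canonicalize` is a hypothetical helper.

```python
import numpy as np

def canonicalize(points):
    """Map a point cloud to a canonical pose before the backbone sees it.
    Assumes distinct principal-axis variances so the PCA basis is
    well-defined up to signs."""
    centered = points - points.mean(axis=0)          # undo translation
    _, eigvecs = np.linalg.eigh(centered.T @ centered)
    aligned = centered @ eigvecs                     # undo rotation (up to signs)
    # Resolve the per-axis sign ambiguity deterministically, e.g. by
    # requiring a positive third moment along each axis.
    signs = np.sign(np.sum(aligned**3, axis=0))
    signs[signs == 0] = 1.0
    return aligned * signs
```

Compared with averaging over several frames, canonicalization runs the backbone only once, which is the computational-efficiency advantage the summary above alludes to, at the cost of relying on the canonical pose being stable.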
arXiv Detail & Related papers (2021-06-18T10:31:00Z) - PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds [33.41204351513122]
PAConv is a generic convolution operation for 3D point cloud processing.
The kernel is built in a data-driven manner, endowing PAConv with more flexibility than 2D convolutions.
Even built on simple networks, our method still approaches or even surpasses the state-of-the-art models.
arXiv Detail & Related papers (2021-03-26T17:52:38Z) - The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions [15.997907568429177]
This paper investigates different variants of PointConv, a convolution network on point clouds, to examine their robustness to input scale and rotation changes.
We derive a novel viewpoint-invariant descriptor by utilizing 3D geometric properties as the input to PointConv.
Experiments are conducted on the 2D MNIST & CIFAR-10 datasets as well as the 3D SemanticKITTI & ScanNet datasets.
arXiv Detail & Related papers (2021-01-19T19:32:38Z) - Learning Rotation-Invariant Representations of Point Clouds Using Aligned Edge Convolutional Neural Networks [29.3830445533532]
Point cloud analysis is an area of increasing interest due to the development of 3D sensors that can rapidly and accurately measure scene depth.
Applying deep learning techniques to perform point cloud analysis is non-trivial due to the inability of these methods to generalize to unseen rotations.
To address this limitation, one usually has to augment the training data, which adds computation and requires greater model capacity.
This paper proposes a new neural network called the Aligned Edge Convolutional Neural Network (AECNN) that learns a feature representation of point clouds relative to Local Reference Frames (LRFs).
arXiv Detail & Related papers (2021-01-02T17:36:00Z) - Permutation Matters: Anisotropic Convolutional Layer for Learning on Point Clouds [145.79324955896845]
We propose a permutable anisotropic convolutional operation (PAI-Conv) that calculates soft-permutation matrices for each point.
Experiments on point clouds demonstrate that PAI-Conv produces competitive results in classification and semantic segmentation tasks.
arXiv Detail & Related papers (2020-05-27T02:42:29Z) - Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets [6.09170287691728]
We introduce blueprint separable convolutions (BSConv) as highly efficient building blocks for CNNs.
They are motivated by quantitative analyses of kernel properties from trained models.
Our approach provides a thorough theoretical derivation, interpretation, and justification for the application of depthwise separable convolutions.
arXiv Detail & Related papers (2020-03-30T15:23:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.