Design equivariant neural networks for 3D point cloud
- URL: http://arxiv.org/abs/2205.00630v1
- Date: Mon, 2 May 2022 02:57:13 GMT
- Title: Design equivariant neural networks for 3D point cloud
- Authors: Thuan N.A. Trang, Thieu N. Vo, Khuong D. Nguyen
- Abstract summary: This work seeks to improve the generalization and robustness of existing neural networks for 3D point clouds.
The main challenge when designing equivariant models for point clouds is how to trade off model performance against complexity.
The proposed procedure is general and forms a fundamental approach to group equivariant neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work seeks to improve the generalization and robustness of existing
neural networks for 3D point clouds by inducing group equivariance under
general group transformations. The main challenge when designing equivariant
models for point clouds is how to trade off model performance against
complexity. Existing equivariant models are either too complicated to
implement or have very high complexity. The main aim of this study is to build
a general procedure for introducing the group equivariance property into SOTA
models for 3D point clouds. The group equivariant models built from our
procedure are simple to implement, have lower complexity than existing ones,
and preserve the strengths of the original SOTA backbone. Experiments on
object classification show that our methods are superior to other group
equivariant models in both performance and complexity. Moreover, our method
also improves the mIoU of semantic segmentation models. Overall, by combining
equivariance under only a finite rotation group with data augmentation, our
models can outperform existing full $SO(3)$-equivariance models at much lower
complexity and GPU memory cost. The proposed procedure is general and forms
a fundamental approach to group equivariant neural networks. We believe that it
can be easily adapted to other SOTA models in the future.
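The abstract does not spell out the construction itself, so the following is only a
minimal, hypothetical sketch of one standard way to obtain exact invariance under a
finite rotation group: pool the backbone's class logits over the group orbit of the
input cloud. The backbone, the wrapper class, and the choice of a cyclic group of
rotations about the z-axis are illustrative assumptions, not the authors' actual
procedure.

```python
# Illustrative sketch only: not the paper's actual construction.
import math
import torch
import torch.nn as nn


def z_rotations(n: int) -> torch.Tensor:
    """Rotation matrices of the cyclic group C_n of rotations about the z-axis."""
    mats = []
    for k in range(n):
        a = 2.0 * math.pi * k / n
        c, s = math.cos(a), math.sin(a)
        mats.append(torch.tensor([[c, -s, 0.0],
                                  [s,  c, 0.0],
                                  [0.0, 0.0, 1.0]]))
    return torch.stack(mats)  # shape (n, 3, 3)


class FiniteRotationInvariantClassifier(nn.Module):
    """Wraps any point-cloud backbone f: (B, N, 3) -> (B, num_classes) and
    averages its logits over a finite rotation group, which makes the wrapped
    classifier exactly invariant to that group."""

    def __init__(self, backbone: nn.Module, group: torch.Tensor):
        super().__init__()
        self.backbone = backbone
        self.register_buffer("group", group)  # (|G|, 3, 3) rotation matrices

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # Evaluate the backbone on every rotated copy of the input cloud
        # and average the logits (group averaging / orbit pooling).
        logits = [self.backbone(points @ R.T) for R in self.group]
        return torch.stack(logits).mean(dim=0)


# Hypothetical usage with an existing point-cloud backbone:
# model = FiniteRotationInvariantClassifier(backbone, z_rotations(4))
```

At training time, random $SO(3)$ rotations can additionally be sampled as data
augmentation, which is the kind of finite-rotation-equivariance-plus-augmentation
combination the abstract refers to.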
Related papers
- Equi-GSPR: Equivariant SE(3) Graph Network Model for Sparse Point Cloud Registration [2.814748676983944]
We propose a graph neural network model embedded with a local Spherical Euclidean 3D equivariance property through SE(3) message passing based propagation.
Our model is composed mainly of a descriptor module, equivariant graph layers, match similarity, and the final regression layers.
Experiments conducted on the 3DMatch and KITTI datasets exhibit the compelling and robust performance of our model compared to state-of-the-art approaches.
arXiv Detail & Related papers (2024-10-08T06:48:01Z) - Approximately Equivariant Neural Processes [47.14384085714576]
We consider the use of approximately equivariant architectures in neural processes.
We demonstrate the effectiveness of our approach on a number of synthetic and real-world regression experiments.
arXiv Detail & Related papers (2024-06-19T12:17:14Z) - Efficient Model-Agnostic Multi-Group Equivariant Networks [18.986283562123713]
We provide efficient model-agnostic equivariant designs for two related problems.
One is a network with multiple inputs each with potentially different groups acting on them, and another is a single input but the group acting on it is a large product group.
We find equivariant models are robust to such transformations and perform competitively otherwise.
arXiv Detail & Related papers (2023-10-14T22:24:26Z) - Generalizing Neural Human Fitting to Unseen Poses With Articulated SE(3) Equivariance [48.39751410262664]
ArtEq is a part-based SE(3)-equivariant neural architecture for SMPL model estimation from point clouds.
Experimental results show that ArtEq generalizes to poses not seen during training, outperforming state-of-the-art methods by 44% in terms of body reconstruction accuracy.
arXiv Detail & Related papers (2023-04-20T17:58:26Z) - Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models [56.88106830869487]
We introduce equi-tuning, a novel fine-tuning method that transforms (potentially non-equivariant) pretrained models into group equivariant models (a minimal sketch of the underlying group-averaging idea appears after this list).
We provide applications of equi-tuning on three different tasks: image classification, compositional generalization in language, and fairness in natural language generation.
arXiv Detail & Related papers (2022-10-13T08:45:23Z) - Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and classify the segmented objects.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z) - Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
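To make the equi-tuning entry above more concrete: one common realization of turning
a pretrained model into a group equivariant one, for invariant tasks such as image
classification, is to average the pretrained model's logits over a finite group and
then fine-tune the wrapped model. The sketch below illustrates that idea for
90-degree image rotations; the class and variable names, and the restriction to the
invariant case, are assumptions for illustration rather than the exact equi-tuning
recipe.

```python
# Illustrative sketch only: the group-averaging idea behind equi-tuning, shown
# for the invariant case (image classification) and the C4 group of 90-degree
# rotations. Class and variable names are placeholders.
import torch
import torch.nn as nn


class EquiTunedClassifier(nn.Module):
    """Wraps a pretrained image classifier and averages its logits over all
    90-degree rotations of the input, making the wrapped model exactly
    C4-invariant; the wrapper is then fine-tuned on the downstream task."""

    def __init__(self, pretrained: nn.Module):
        super().__init__()
        self.pretrained = pretrained

    def forward(self, images: torch.Tensor) -> torch.Tensor:  # (B, C, H, W)
        logits = [self.pretrained(torch.rot90(images, k, dims=(2, 3)))
                  for k in range(4)]
        return torch.stack(logits).mean(dim=0)
```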