Vector Neurons: A General Framework for SO(3)-Equivariant Networks
- URL: http://arxiv.org/abs/2104.12229v1
- Date: Sun, 25 Apr 2021 18:48:15 GMT
- Title: Vector Neurons: A General Framework for SO(3)-Equivariant Networks
- Authors: Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea
Tagliasacchi, Leonidas Guibas
- Abstract summary: In this paper, we introduce a general framework built on top of what we call Vector Neuron representations.
Our vector neurons enable a simple mapping of SO(3) actions to latent spaces.
We also show for the first time a rotation equivariant reconstruction network.
- Score: 32.81671803104126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Invariance and equivariance to the rotation group have been widely discussed
in the 3D deep learning community for pointclouds. Yet most proposed methods
either use complex mathematical tools that may limit their accessibility, or
are tied to specific input data types and network architectures. In this paper,
we introduce a general framework built on top of what we call Vector Neuron
representations for creating SO(3)-equivariant neural networks for pointcloud
processing. Extending neurons from 1D scalars to 3D vectors, our vector neurons
enable a simple mapping of SO(3) actions to latent spaces thereby providing a
framework for building equivariance in common neural operations -- including
linear layers, non-linearities, pooling, and normalizations. Due to their
simplicity, vector neurons are versatile and, as we demonstrate, can be
incorporated into diverse network architecture backbones, allowing them to
process geometry inputs in arbitrary poses. Despite its simplicity, our method
performs comparably well in accuracy and generalization with other more complex
and specialized state-of-the-art methods on classification and segmentation
tasks. We also show for the first time a rotation equivariant reconstruction
network.
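The abstract's central claim is that lifting neurons from 1D scalars to 3D vectors lets common layers commute with rotations: a vector-neuron linear layer mixes channels of 3D vector features without touching the 3D axis, so applying a rotation before or after the layer gives the same result. A minimal NumPy sketch of this idea (illustrative only; function and variable names are not from the paper):

```python
import numpy as np

def vn_linear(V, W):
    """Vector-neuron linear layer: mixes channels, leaves the 3D axis alone.
    V: (C_in, 3) array of 3D vector features; W: (C_out, C_in) learned weights."""
    return W @ V  # (C_out, 3)

def rotation_z(theta):
    """Rotation about the z-axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
V = rng.standard_normal((4, 3))   # 4 vector neurons
W = rng.standard_normal((2, 4))   # channel-mixing weights
R = rotation_z(0.3)

# Equivariance: rotating the inputs rotates the outputs by the same R,
# because W @ (V @ R.T) == (W @ V) @ R.T by associativity.
lhs = vn_linear(V @ R.T, W)
rhs = vn_linear(V, W) @ R.T
print(np.allclose(lhs, rhs))  # True
```

The same associativity argument covers mean pooling over points or channels; non-linearities need more care, which the paper addresses with a vector generalization of ReLU.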
Related papers
- Leveraging SO(3)-steerable convolutions for pose-robust semantic segmentation in 3D medical data [2.207533492015563]
We present a new family of segmentation networks that use equivariant voxel convolutions based on spherical harmonics.
These networks are robust to data poses not seen during training, and do not require rotation-based data augmentation during training.
We demonstrate improved segmentation performance in MRI brain tumor and healthy brain structure segmentation tasks.
arXiv Detail & Related papers (2023-03-01T09:27:08Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation [65.4396959244269]
The paper tackles this challenge by designing a general framework for constructing binarized, SO(3)-equivariant 3D learning architectures.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a strong trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z)
- e3nn: Euclidean Neural Networks [3.231986804142223]
e3nn is a framework for creating E(3) equivariant trainable functions, also known as Euclidean neural networks.
e3nn naturally operates on geometry and geometric tensors that describe systems in 3D and transform predictably under a change of coordinate system.
These core operations of e3nn can be used to efficiently articulate Field Networks, 3D Steerable CNNs, Clebsch-Gordan Networks, SE(3) Transformers and other E(3) equivariant networks.
arXiv Detail & Related papers (2022-07-18T21:19:40Z)
- VNT-Net: Rotational Invariant Vector Neuron Transformers [3.04585143845864]
We introduce a rotation-invariant neural network by combining recently introduced vector neurons with self-attention layers.
Experiments demonstrate that our network efficiently handles 3D point cloud objects in arbitrary poses.
arXiv Detail & Related papers (2022-05-19T16:51:56Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
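In its simplest setting, when the symmetry group is finite, the frame can be taken to be the whole group and Frame Averaging reduces to group averaging: symmetrizing a backbone over the group makes it invariant. A toy sketch of that special case (hypothetical names; not the paper's Euclidean-motion construction):

```python
import numpy as np

def make_invariant(f, group):
    """Symmetrize f over a finite group of matrices; the average is invariant
    because applying a group element only reindexes the terms of the sum."""
    def f_inv(x):
        return np.mean([f(g @ x) for g in group], axis=0)
    return f_inv

# Toy symmetry group: sign flips along each axis in 2D (order 4)
group = [np.diag(d).astype(float)
         for d in ([1, 1], [1, -1], [-1, 1], [-1, -1])]

# An arbitrary (non-invariant) backbone function
f = lambda x: np.array([x[0] ** 2 + np.sin(x[1]), x[0] * x[1]])
f_inv = make_invariant(f, group)

x = np.array([0.7, -1.2])
print(np.allclose(f_inv(x), f_inv(group[1] @ x)))  # True: flipped input, same output
```

The paper's contribution is choosing a small, input-dependent frame instead of the whole (possibly infinite) group, which keeps this averaging tractable for symmetries like Euclidean motions while preserving the backbone's expressive power.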
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can make two classes of data linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration [57.28608414782315]
We introduce a new, yet conceptually simple, neural architecture, termed SpinNet, to extract local features.
Experiments on both indoor and outdoor datasets demonstrate that SpinNet outperforms existing state-of-the-art techniques.
arXiv Detail & Related papers (2020-11-24T15:00:56Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.