e3nn: Euclidean Neural Networks
- URL: http://arxiv.org/abs/2207.09453v1
- Date: Mon, 18 Jul 2022 21:19:40 GMT
- Title: e3nn: Euclidean Neural Networks
- Authors: Mario Geiger and Tess Smidt
- Abstract summary: e3nn is a framework for creating E(3) equivariant trainable functions, also known as Euclidean neural networks.
e3nn naturally operates on geometry and geometric tensors that describe systems in 3D and transform predictably under a change of coordinate system.
These core operations of e3nn can be used to efficiently articulate Tensor Field Networks, 3D Steerable CNNs, Clebsch-Gordan Networks, SE(3) Transformers and other E(3) equivariant networks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present e3nn, a generalized framework for creating E(3) equivariant
trainable functions, also known as Euclidean neural networks. e3nn naturally
operates on geometry and geometric tensors that describe systems in 3D and
transform predictably under a change of coordinate system. At the core of e3nn are
equivariant operations such as the TensorProduct class or the spherical
harmonics functions that can be composed to create more complex modules such as
convolutions and attention mechanisms. These core operations of e3nn can be
used to efficiently articulate Tensor Field Networks, 3D Steerable CNNs,
Clebsch-Gordan Networks, SE(3) Transformers and other E(3) equivariant
networks.
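The defining property of the E(3) equivariant functions described above is that rotating the input is equivalent to rotating the output: f(Rx) = D(R) f(x). The sketch below is not e3nn code; it is a minimal stdlib-only Python illustration of this identity for a toy function of the form g(|x|)·x, which is SO(3)-equivariant because |x| is rotation-invariant (the function name `f` and the test vector are illustrative choices, not part of the paper).

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(M, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def f(x):
    """Toy equivariant map: scale the point by a function of its
    rotation-invariant norm.  Any map of the form g(|x|) * x
    satisfies f(R x) = R f(x)."""
    r = math.sqrt(sum(c * c for c in x))
    return [math.exp(-r) * c for c in x]

x = [0.3, -1.2, 0.7]
R = rot_z(0.8)

lhs = f(matvec(R, x))   # rotate the input, then apply the map
rhs = matvec(R, f(x))   # apply the map, then rotate the output

assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

In e3nn this property holds by construction for every module, because each module declares the irreducible representations (irreps) of its inputs and outputs and only composes operations, such as tensor products and spherical harmonics, that are themselves equivariant.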
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs)
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - A Hitchhiker's Guide to Geometric GNNs for 3D Atomic Systems [87.30652640973317]
Recent advances in computational modelling of atomic systems represent them as geometric graphs with atoms embedded as nodes in 3D Euclidean space.
Geometric Graph Neural Networks have emerged as the preferred machine learning architecture powering applications ranging from protein structure prediction to molecular simulations and material generation.
This paper provides a comprehensive and self-contained overview of the field of Geometric GNNs for 3D atomic systems.
arXiv Detail & Related papers (2023-12-12T18:44:19Z) - SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation [65.4396959244269]
The paper tackles this challenge by designing a general framework for constructing SO(3) equivariant 3D learning architectures.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a good trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z) - PDO-s3DCNNs: Partial Differential Operator Based Steerable 3D CNNs [69.85869748832127]
In this work, we employ partial differential operators (PDOs) to model 3D filters, and derive general steerable 3D CNNs called PDO-s3DCNNs.
We prove that the equivariant filters are subject to linear constraints, which can be solved efficiently under various conditions.
arXiv Detail & Related papers (2022-08-07T13:37:29Z) - VNT-Net: Rotational Invariant Vector Neuron Transformers [3.04585143845864]
We introduce a rotational invariant neural network by combining recently introduced vector neurons with self-attention layers.
Experiments demonstrate that our network efficiently handles 3D point cloud objects in arbitrary poses.
arXiv Detail & Related papers (2022-05-19T16:51:56Z) - Vector Neurons: A General Framework for SO(3)-Equivariant Networks [32.81671803104126]
In this paper, we introduce a general framework built on top of what we call Vector Neuron representations.
Our vector neurons enable a simple mapping of SO(3) actions to latent spaces.
We also show for the first time a rotation equivariant reconstruction network.
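The key observation behind vector neurons is that a linear layer acting on lists of 3D vectors, mixing channels rather than spatial coordinates, commutes with any rotation. The stdlib-only sketch below checks this commutation numerically; it is a conceptual illustration under that assumption, not the paper's implementation (the names `vn_linear`, `W`, and `V` are hypothetical).

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(M, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def vn_linear(W, V):
    """Vector-neuron-style linear layer: each output channel is a
    weighted sum of the input 3D vector channels.  Because it mixes
    channels, not coordinates, it commutes with any rotation R."""
    return [[sum(W[o][i] * V[i][k] for i in range(len(V))) for k in range(3)]
            for o in range(len(W))]

V = [[1.0, 0.0, 2.0], [0.5, -1.0, 0.3]]    # 2 channels of 3D vectors
W = [[0.7, -0.2], [1.1, 0.4], [0.0, 0.9]]  # weights: 2 -> 3 channels
R = rot_z(1.1)

lhs = vn_linear(W, [matvec(R, v) for v in V])   # rotate inputs, then map
rhs = [matvec(R, v) for v in vn_linear(W, V)]   # map, then rotate outputs
assert all(abs(a - b) < 1e-12 for rl, rr in zip(lhs, rhs)
           for a, b in zip(rl, rr))
```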
arXiv Detail & Related papers (2021-04-25T18:48:15Z) - Equivariant Point Network for 3D Point Cloud Analysis [17.689949017410836]
We propose an effective and practical SE(3) (3D translation and rotation) equivariant network for point cloud analysis.
First, we present SE(3) separable point convolution, a novel framework that breaks down the 6D convolution into two separable convolutional operators.
Second, we introduce an attention layer to effectively harness the expressiveness of the equivariant features.
arXiv Detail & Related papers (2021-03-25T21:57:10Z) - Spherical Transformer: Adapting Spherical Signal to CNNs [53.18482213611481]
Spherical Transformer can transform spherical signals into vectors that can be directly processed by standard CNNs.
We evaluate our approach on the tasks of spherical MNIST recognition, 3D object classification and omnidirectional image semantic segmentation.
arXiv Detail & Related papers (2021-01-11T12:33:16Z) - Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv)
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.