Frame Averaging for Invariant and Equivariant Network Design
- URL: http://arxiv.org/abs/2110.03336v1
- Date: Thu, 7 Oct 2021 11:05:23 GMT
- Title: Frame Averaging for Invariant and Equivariant Network Design
- Authors: Omri Puny, Matan Atzmon, Heli Ben-Hamu, Edward J. Smith, Ishan Misra,
Aditya Grover, Yaron Lipman
- Abstract summary: We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
- Score: 50.87023773850824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many machine learning tasks involve learning functions that are known to be
invariant or equivariant to certain symmetries of the input data. However, it
is often challenging to design neural network architectures that respect these
symmetries while being expressive and computationally efficient. A prominent
example is Euclidean motion invariant/equivariant graph or point cloud neural networks. We
introduce Frame Averaging (FA), a general purpose and systematic framework for
adapting known (backbone) architectures to become invariant or equivariant to
new symmetry types. Our framework builds on the well known group averaging
operator that guarantees invariance or equivariance but is intractable. In
contrast, we observe that for many important classes of symmetries, this
operator can be replaced with an averaging operator over a small subset of the
group elements, called a frame. We show that averaging over a frame guarantees
exact invariance or equivariance while often being much simpler to compute than
averaging over the entire group. Furthermore, we prove that FA-based models
have maximal expressive power in a broad setting and in general preserve the
expressive power of their backbone architectures. Using frame averaging, we
propose a new class of universal Graph Neural Networks (GNNs), universal
Euclidean motion invariant point cloud networks, and Euclidean motion invariant
Message Passing (MP) GNNs. We demonstrate the practical effectiveness of FA on
several applications including point cloud normal estimation, beyond $2$-WL
graph separation, and $n$-body dynamics prediction, achieving state-of-the-art
results in all of these benchmarks.
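As a concrete illustration, here is a minimal sketch of the frame-averaging recipe for an E(3)-invariant point-cloud model. The frame is built from PCA of the centered cloud (the construction the paper uses for Euclidean motion), and the backbone's outputs are averaged over the $2^3$ sign flips of the covariance eigenvectors rather than over the whole group. The function names and the toy backbone are illustrative, not the authors' code.

```python
import itertools
import torch

def pca_frame(X: torch.Tensor):
    """Frame F(X) for a point cloud X of shape (n, 3): the centroid t plus
    the 2^3 = 8 column-wise sign flips of the covariance eigenvectors
    (covering rotations and reflections, i.e. O(3)). Assumes distinct
    eigenvalues; degenerate clouds need a larger frame."""
    t = X.mean(dim=0)
    C = (X - t).T @ (X - t)                       # 3x3 covariance
    _, V = torch.linalg.eigh(C)                   # columns: eigenvectors, ascending
    rotations = [V * torch.tensor(s)              # flip eigenvector signs
                 for s in itertools.product((1.0, -1.0), repeat=3)]
    return t, rotations

def frame_average(backbone, X: torch.Tensor):
    """Invariant wrapper <f>_F(X) = (1/|F(X)|) sum_{g in F(X)} f(g^{-1} . X):
    average the backbone over the frame instead of over the whole group."""
    t, rotations = pca_frame(X)
    return torch.stack([backbone((X - t) @ R) for R in rotations]).mean(dim=0)

# Toy usage: a shared MLP with sum pooling as backbone; the wrapped model is
# invariant to rotations, reflections, and translations of the input.
mlp = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 16))
backbone = lambda pts: mlp(pts).sum(dim=0)
X = torch.randn(128, 3)
Q = torch.linalg.qr(torch.randn(3, 3)).Q          # random orthogonal matrix
assert torch.allclose(frame_average(backbone, X),
                      frame_average(backbone, X @ Q + 1.0), atol=1e-3)
```

The wrapper costs |F(X)| = 8 backbone evaluations per input, versus the intractable integral over all of E(3) that exact group averaging would require.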
Related papers
- Relaxing Continuous Constraints of Equivariant Graph Neural Networks for Physical Dynamics Learning [39.25135680793105]
We propose a general Discrete Equivariant Graph Neural Network (DEGNN) that guarantees equivariance to a given discrete point group.
Specifically, we show that such discrete equivariant message passing could be constructed by transforming geometric features into permutation-invariant embeddings.
We show that DEGNN is data efficient, learning with less data, and can generalize across scenarios such as unobserved orientation.
arXiv Detail & Related papers (2024-06-24T03:37:51Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework for learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Similarity Equivariant Graph Neural Networks for Homogenization of Metamaterials [3.6443770850509423]
Soft, porous mechanical metamaterials exhibit pattern transformations that may have important applications in soft robotics, sound reduction and biomedicine.
We develop a machine learning-based approach that scales favorably to serve as a surrogate model.
We show that this network is more accurate and data-efficient than graph neural networks with fewer symmetries.
arXiv Detail & Related papers (2024-04-26T12:30:32Z)
- FAENet: Frame Averaging Equivariant GNN for Materials Modeling [123.19473575281357]
We introduce a flexible framework relying on stochastic frame averaging (SFA) to make any model E(3)-equivariant or invariant through data transformations.
We prove the validity of our method theoretically and empirically demonstrate its superior accuracy and computational scalability in materials modeling.
arXiv Detail & Related papers (2023-04-28T21:48:31Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Equivariance versus Augmentation for Spherical Images [0.7388859384645262]
We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images.
We compare the performance of the group equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation.
arXiv Detail & Related papers (2022-02-08T16:49:30Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- Generalization capabilities of neural networks in lattice applications [0.0]
We investigate the advantages of adopting translationally equivariant neural networks over non-equivariant ones.
We show that our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts.
arXiv Detail & Related papers (2021-12-23T11:48:06Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Training or Architecture? How to Incorporate Invariance in Neural Networks [14.162739081163444]
We propose a method for provably invariant network architectures with respect to group actions.
In a nutshell, we intend to 'undo' any possible transformation before feeding the data into the actual network (see the sketch after this list).
We analyze properties of such approaches, extend them to equivariant networks, and demonstrate their advantages in terms of robustness as well as computational efficiency in several numerical examples.
arXiv Detail & Related papers (2021-06-18T10:31:00Z)
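Two of the entries above ("Equivariance with Learned Canonicalization Functions" and "Training or Architecture?") suggest a natural contrast with frame averaging: instead of averaging over a frame, canonicalize the input and run the backbone once. Below is a hedged sketch of the 'undo the transformation' idea under a fixed PCA convention; the cited papers' exact constructions differ (one fixes the transformation analytically, the other learns it with a small network), so treat this as an illustrative variant, not their method.

```python
import torch

def canonicalize(X: torch.Tensor) -> torch.Tensor:
    """Map X (n, 3) to a canonical pose: centroid at the origin, principal
    axes aligned with the coordinate axes, and the eigenvector sign
    ambiguity resolved by a convention (non-negative coordinate-wise
    third moments). Illustrative rule, not the cited papers' code."""
    Xc = X - X.mean(dim=0)
    _, V = torch.linalg.eigh(Xc.T @ Xc)       # principal axes of the cloud
    Y = Xc @ V
    signs = torch.sign((Y ** 3).sum(dim=0))   # resolve the +/- ambiguity
    signs[signs == 0] = 1.0                   # guard the measure-zero case
    return Y * signs

def invariant_model(backbone, X: torch.Tensor):
    """Invariant by construction: one backbone pass on the canonical input,
    versus one pass per frame element in frame averaging."""
    return backbone(canonicalize(X))
```

Compared to frame averaging, this trades the |F(X)|-fold forward passes for a single pass, at the cost of discontinuities on inputs where the canonical pose is ambiguous (e.g. degenerate principal axes).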
This list is automatically generated from the titles and abstracts of the papers on this site.