Symmetry-driven graph neural networks
- URL: http://arxiv.org/abs/2105.14058v1
- Date: Fri, 28 May 2021 18:54:12 GMT
- Title: Symmetry-driven graph neural networks
- Authors: Francesco Farina, Emma Slade
- Abstract summary: We introduce two graph network architectures that are equivariant to several types of transformations affecting the node coordinates.
We demonstrate these capabilities on a synthetic dataset composed of $n$-dimensional geometric objects.
- Score: 1.713291434132985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Exploiting symmetries and invariance in data is a powerful, yet not fully
exploited, way to achieve better generalisation with more efficiency. In this
paper, we introduce two graph network architectures that are equivariant to
several types of transformations affecting the node coordinates. First, we
build equivariance to any transformation in the coordinate embeddings that
preserves the distance between neighbouring nodes, allowing for equivariance to
the Euclidean group. Then, we introduce angle attributes to build equivariance
to any angle preserving transformation - thus, to the conformal group. Thanks
to their equivariance properties, the proposed models can be vastly more
data-efficient than classical graph architectures, as they are intrinsically
equipped with a better inductive bias and generalise better. We demonstrate these
capabilities on a synthetic dataset composed of $n$-dimensional geometric
objects. Additionally, we provide examples of their limitations when (the
right) symmetries are not present in the data.
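To make the recipe above concrete, here is a minimal sketch of a message-passing layer whose coordinate update is equivariant to the Euclidean group: messages depend only on invariant edge attributes (squared distances; the paper additionally introduces angle attributes to reach the conformal group), and coordinates are updated along relative-position vectors. This is an illustrative reconstruction in the style of E(n)-equivariant graph networks, not the authors' exact parametrisation; all function names and layer sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(d_in, d_hid, d_out):
    """Random two-layer MLP parameters (illustrative sizes)."""
    return (rng.normal(0, 0.1, (d_in, d_hid)), np.zeros(d_hid),
            rng.normal(0, 0.1, (d_hid, d_out)), np.zeros(d_out))

def mlp(params, z):
    W1, b1, W2, b2 = params
    return np.tanh(z @ W1 + b1) @ W2 + b2

def egnn_layer(h, x, edges, phi_e, phi_x, phi_h):
    """One message-passing step with an E(n)-equivariant coordinate update.

    Messages see only node features and squared distances (invariants),
    and coordinates move along relative-position vectors, so the layer
    commutes with any rotation/reflection/translation of x."""
    n, d_h = h.shape
    m_agg = np.zeros((n, d_h))
    dx = np.zeros_like(x)
    for i, j in edges:
        d2 = np.sum((x[i] - x[j]) ** 2)                     # invariant attribute
        m = mlp(phi_e, np.concatenate([h[i], h[j], [d2]]))  # edge message
        m_agg[i] += m
        dx[i] += (x[i] - x[j]) * mlp(phi_x, m)[0]           # equivariant direction
    h_new = h + mlp(phi_h, np.concatenate([h, m_agg], axis=1))
    return h_new, x + dx

# Sanity check: rotating the inputs commutes with the layer.
n, d_h, dim = 5, 4, 3
h = rng.normal(size=(n, d_h))
x = rng.normal(size=(n, dim))
edges = [(i, j) for i in range(n) for j in range(n) if i != j]
phi_e = init_mlp(2 * d_h + 1, 16, d_h)
phi_x = init_mlp(d_h, 16, 1)
phi_h = init_mlp(2 * d_h, 16, d_h)

Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # random orthogonal map
h_rot, x_rot = egnn_layer(h, x @ Q.T, edges, phi_e, phi_x, phi_h)
h_ref, x_ref = egnn_layer(h, x, edges, phi_e, phi_x, phi_h)
print(np.allclose(x_rot, x_ref @ Q.T), np.allclose(h_rot, h_ref))  # True True
```

The final check prints `True True`: applying a random orthogonal map before the layer matches applying it after, which is the defining property of equivariance.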
Related papers
- Symmetry Discovery for Different Data Types [52.2614860099811]
Equivariant neural networks incorporate symmetries into their architecture, achieving higher generalization performance.
We propose LieSD, a method that discovers symmetries from trained neural networks approximating the input-output mapping of a task.
We validate the performance of LieSD on tasks with symmetries such as the two-body problem, the moment of inertia matrix prediction, and top quark tagging.
arXiv Detail & Related papers (2024-10-13T13:39:39Z)
- Learning Layer-wise Equivariances Automatically using Gradients [66.81218780702125]
Convolutions encode equivariance symmetries into neural networks, leading to better generalisation performance.
However, such built-in symmetries impose fixed, hard constraints on the functions a network can represent: they must be specified in advance and cannot be adapted.
Our goal is to allow flexible symmetry constraints that can automatically be learned from data using gradients.
arXiv Detail & Related papers (2023-10-09T20:22:43Z)
- The Lie Derivative for Measuring Learned Equivariance [84.29366874540217]
We study the equivariance properties of hundreds of pretrained models, spanning CNNs, transformers, and Mixer architectures.
We find that many violations of equivariance can be linked to spatial aliasing in ubiquitous network layers, such as pointwise non-linearities.
For example, transformers can be more equivariant than convolutional neural networks after training (a finite-difference version of such an equivariance test is sketched after this list).
arXiv Detail & Related papers (2022-10-06T15:20:55Z)
- Equivariant Mesh Attention Networks [10.517110532297021]
We present an attention-based architecture for mesh data that is provably equivariant to a range of local and global mesh transformations.
Our results confirm that our proposed architecture is equivariant, and therefore robust, to these local/global transformations.
arXiv Detail & Related papers (2022-05-21T19:53:14Z)
- Learning Symmetric Embeddings for Equivariant World Models [9.781637768189158]
We propose learning symmetric embedding networks (SENs) that map an input space (e.g. images), where the effect of a transformation is unknown, to a feature space that transforms in a known, structured way.
This network can be trained end-to-end with an equivariant task network to learn an explicitly symmetric representation.
Our experiments demonstrate that SENs facilitate the application of equivariant networks to data with complex symmetry representations.
arXiv Detail & Related papers (2022-04-24T22:31:52Z)
- Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to require that the mapping into the shape space (encoder) and the mapping out of it (decoder) are equivariant to the relevant symmetries.
We present a frame-averaging framework for building such equivariant encoders and decoders via two contributions (a group-averaging special case is sketched after this list).
arXiv Detail & Related papers (2021-12-03T06:41:19Z)
- Data efficiency in graph networks through equivariance [1.713291434132985]
We introduce a novel architecture for graph networks which is equivariant to any transformation in the coordinate embeddings.
We show that, when trained on a minimal amount of data, the proposed architecture generalises perfectly to unseen data in a synthetic problem.
arXiv Detail & Related papers (2021-06-25T17:42:34Z)
- Beyond permutation equivariance in graph networks [1.713291434132985]
We introduce a novel architecture for graph networks which is equivariant to the Euclidean group in $n$ dimensions.
Our model is designed to work with graph networks in their most general form, thus including particular variants as special cases.
arXiv Detail & Related papers (2021-03-25T18:36:09Z)
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
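To complement the Lie-derivative entry above: the equivariance of a trained model can be probed numerically by transporting the input along a one-parameter subgroup, mapping the output back, and differentiating. The sketch below uses a central finite difference in place of the paper's exact estimator; `f_equi` and `f_free` are toy stand-ins, not models from that work.

```python
import numpy as np

rng = np.random.default_rng(1)

def rot(t):
    """The one-parameter rotation subgroup g_t acting on 2D points."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def lie_derivative(f, x, eps=1e-4):
    """Central finite-difference estimate of d/dt [g_{-t} f(g_t x)] at t=0.

    This vanishes (to first order) exactly when f commutes with the
    rotation action at x, i.e. when f is locally equivariant."""
    fwd = f(x @ rot(eps).T) @ rot(-eps).T
    bwd = f(x @ rot(-eps).T) @ rot(eps).T
    return (fwd - bwd) / (2 * eps)

def f_equi(x):
    """Exactly equivariant: scale each point by its rotation-invariant norm."""
    return x * np.linalg.norm(x, axis=-1, keepdims=True)

W = rng.normal(size=(2, 2))
def f_free(x):
    """A generic linear map with no symmetry built in."""
    return x @ W.T

x = rng.normal(size=(10, 2))
print(np.abs(lie_derivative(f_equi, x)).max())  # ~0: equivariant
print(np.abs(lie_derivative(f_free, x)).max())  # O(1): not equivariant
```

The gap between the two printed values is the measurement: a trained network sits somewhere between these extremes, and averaging the quantity over a dataset gives a scalar equivariance score.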
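And for the frame-averaging entry: when the frame is an entire finite group, frame averaging reduces to plain group averaging, which makes any base function exactly equivariant. A minimal sketch for the cyclic group C4 acting on 2D point clouds follows; the paper's frames for continuous groups (e.g. PCA-based) are more economical than summing over the whole group, and the encoder here is a toy stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

# The cyclic group C4: rotations by multiples of 90 degrees.
C4 = [rot(k * np.pi / 2) for k in range(4)]

W = rng.normal(size=(2, 2))
def f(x):
    """An arbitrary, non-equivariant base encoder."""
    return np.tanh(x @ W.T)

def f_averaged(x):
    """Group averaging: f_GA(x) = (1/|G|) * sum_g  g^{-1} f(g x).

    The conjugated average is exactly equivariant no matter what f is;
    frame averaging replaces the sum over G by a small input-dependent
    frame so that continuous groups become tractable."""
    return sum(f(x @ g.T) @ g for g in C4) / len(C4)

x = rng.normal(size=(6, 2))
g = C4[1]  # rotation by 90 degrees
print(np.allclose(f(x @ g.T), f(x) @ g.T))                    # False
print(np.allclose(f_averaged(x @ g.T), f_averaged(x) @ g.T))  # True
```

The two prints contrast the raw encoder (not equivariant) with its group-averaged version (exactly equivariant by construction).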
This list is automatically generated from the titles and abstracts of the papers on this site.