Equivariant Mesh Attention Networks
- URL: http://arxiv.org/abs/2205.10662v1
- Date: Sat, 21 May 2022 19:53:14 GMT
- Title: Equivariant Mesh Attention Networks
- Authors: Sourya Basu, Jose Gallego-Posada, Francesco Viganò, James Rowbottom and Taco Cohen
- Abstract summary: We present an attention-based architecture for mesh data that is provably equivariant to translations, rotations, scaling, node permutations, and gauge transformations.
Our results confirm that our proposed architecture is equivariant, and therefore robust, to these local/global transformations.
- Score: 10.517110532297021
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Equivariance to symmetries has proven to be a powerful inductive bias in deep
learning research. Recent works on mesh processing have concentrated on various
kinds of natural symmetries, including translations, rotations, scaling, node
permutations, and gauge transformations. To date, no existing architecture is
equivariant to all of these transformations. Moreover, previous implementations
have not always applied these symmetry transformations to the test dataset.
This inhibits the ability to determine whether the model attains the claimed
equivariance properties. In this paper, we present an attention-based
architecture for mesh data that is provably equivariant to all transformations
mentioned above. We carry out experiments on the FAUST and TOSCA datasets, and
apply the mentioned symmetries to the test set only. Our results confirm that
our proposed architecture is equivariant, and therefore robust, to these
local/global transformations.
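The claim can be stated operationally: for every symmetry g and input mesh x, the network f must satisfy f(g · x) = g · f(x). Below is a minimal sketch of such a check for the rotation and node-permutation cases, assuming a hypothetical vertex-wise model `f` that maps an (N, 3) array of vertex coordinates to an (N, 3) array of features; the helper names are mine, not the paper's.

```python
import numpy as np

def random_rotation(dim=3, seed=0):
    """Sample a random rotation matrix via QR decomposition."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)))
    q = q * np.sign(np.diag(r))   # fix column signs so the factorization is unique
    if np.linalg.det(q) < 0:      # force det = +1 (a proper rotation)
        q[:, 0] = -q[:, 0]
    return q

def check_equivariance(f, verts, atol=1e-5):
    """Return (rotation_ok, permutation_ok) for a vertex-wise model `f`."""
    rot = random_rotation()
    perm = np.random.default_rng(1).permutation(len(verts))
    rot_ok = np.allclose(f(verts @ rot.T), f(verts) @ rot.T, atol=atol)
    perm_ok = np.allclose(f(verts[perm]), f(verts)[perm], atol=atol)
    return rot_ok, perm_ok

# Toy run with a trivially equivariant model (the identity map).
verts = np.random.default_rng(2).normal(size=(100, 3))
print(check_equivariance(lambda x: x, verts))   # (True, True)
```

Applying the symmetries to the test set only, as the paper does, amounts to running the trained model through checks of this form rather than augmenting the training data.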
Related papers
- Symmetry Discovery for Different Data Types [52.2614860099811]
Equivariant neural networks incorporate symmetries into their architecture, achieving higher generalization performance.
We propose LieSD, a method for discovering symmetries from trained neural networks that approximate the input-output mappings of the tasks.
We validate the performance of LieSD on tasks with symmetries such as the two-body problem, the moment of inertia matrix prediction, and top quark tagging.
arXiv Detail & Related papers (2024-10-13T13:39:39Z)
- Using and Abusing Equivariance [10.70891251559827]
We show how Group Equivariant Convolutional Neural Networks use subsampling to learn to break equivariance to their symmetries.
We show that a change in the input dimension of a network as small as a single pixel can be enough for commonly used architectures to become only approximately, rather than exactly, equivariant.
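The subsampling effect described here is easy to reproduce in isolation. The toy snippet below (my illustration, not the paper's code) shows that stride-2 downsampling does not commute with a one-pixel cyclic shift, which is the basic mechanism by which subsampling breaks exact shift equivariance:

```python
import numpy as np

x = np.arange(8)                     # a toy 1-D "image"
down = lambda a: a[::2]              # stride-2 subsampling
shift = lambda a: np.roll(a, 1)      # one-pixel cyclic shift

print(down(shift(x)))                # [7 1 3 5]
print(shift(down(x)))                # [6 0 2 4]  -> not equal, so not equivariant
```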
arXiv Detail & Related papers (2023-08-22T09:49:26Z)
- Learning Lie Group Symmetry Transformations with Neural Networks [17.49001206996365]
This work focuses on discovering and characterizing unknown symmetries present in the dataset, namely, Lie group symmetry transformations.
Our goal is to characterize the transformation group and the distribution of the parameter values.
Results showcase the effectiveness of the approach in both these settings.
arXiv Detail & Related papers (2023-07-04T09:23:24Z)
- Optimization Dynamics of Equivariant and Augmented Neural Networks [2.7918308693131135]
We investigate the optimization of neural networks on symmetric data.
We compare the strategy of constraining the architecture to be equivariant to that of using data augmentation.
Our analysis reveals that even in the latter setting, stationary points may be unstable under augmented training although they are stable for the manifestly equivariant models.
arXiv Detail & Related papers (2023-03-23T17:26:12Z)
- Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
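As a rough illustration of the idea, the sketch below learns a linear map W that preserves a known invariant phi(x) = |x|², with a crude non-triviality penalty standing in for the paper's "specially constructed loss function". This is a drastic simplification of my own devising (a linear map instead of the paper's fully connected networks, and a hand-picked oracle), not the authors' method:

```python
import torch

torch.manual_seed(0)
W = torch.nn.Parameter(torch.eye(2) + 0.1 * torch.randn(2, 2))
opt = torch.optim.Adam([W], lr=1e-2)
phi = lambda x: (x ** 2).sum(dim=-1)   # oracle, invariant under O(2)
eye = torch.eye(2)

for step in range(3000):
    x = torch.randn(256, 2)
    # Symmetry condition: phi(W x) should equal phi(x) on all inputs.
    invariance = ((phi(x @ W.T) - phi(x)) ** 2).mean()
    # Penalize the trivial solution W = I so a genuine symmetry is found.
    nontrivial = torch.relu(1.0 - (W - eye).norm())
    loss = invariance + nontrivial
    opt.zero_grad()
    loss.backward()
    opt.step()

# A linear map preserving |x|^2 must be orthogonal, so W^T W should be ~ I.
print(W.detach().T @ W.detach())
```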
arXiv Detail & Related papers (2023-02-02T00:13:32Z)
- Equivariant Disentangled Transformation for Domain Generalization under Combination Shift [91.38796390449504]
Combinations of domains and labels are not observed during training but appear in the test environment.
We provide a unique formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.
arXiv Detail & Related papers (2022-08-03T12:31:31Z)
- Learning Symmetric Embeddings for Equivariant World Models [9.781637768189158]
We propose learning symmetric embedding networks (SENs) that encode an input space (e.g. images) into a feature space on which the symmetry group acts in a known way.
This network can be trained end-to-end with an equivariant task network to learn an explicitly symmetric representation.
Our experiments demonstrate that SENs facilitate the application of equivariant networks to data with complex symmetry representations.
arXiv Detail & Related papers (2022-04-24T22:31:52Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Symmetry-driven graph neural networks [1.713291434132985]
We introduce two graph network architectures that are equivariant to several types of transformations affecting the node coordinates.
We demonstrate these capabilities on a synthetic dataset composed of $n$-dimensional geometric objects.
arXiv Detail & Related papers (2021-05-28T18:54:12Z)
- Meta-Learning Symmetries by Reparameterization [63.85144439337671]
We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data.
Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks.
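To make "parameter sharing patterns" concrete, here is a small illustration of my own (not the paper's code): a circulant sharing pattern turns a dense linear layer into one that commutes with cyclic shifts, i.e. a translation-equivariant (convolutional) layer.

```python
import numpy as np

n = 6
w = np.random.default_rng(0).normal(size=n)        # one shared filter
C = np.stack([np.roll(w, i) for i in range(n)])    # circulant weight matrix
x = np.random.default_rng(1).normal(size=n)
shift = lambda a: np.roll(a, 1)                    # cyclic translation

print(np.allclose(C @ shift(x), shift(C @ x)))     # True: layer is shift-equivariant
```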
arXiv Detail & Related papers (2020-07-06T17:59:54Z)
- Inverse Learning of Symmetries [71.62109774068064]
We learn the symmetry transformation with a model consisting of two latent subspaces.
Our approach is based on the deep information bottleneck in combination with a continuous mutual information regulariser.
Our model outperforms state-of-the-art methods on artificial and molecular datasets.
arXiv Detail & Related papers (2020-02-07T13:48:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.