G-RepsNet: A Fast and General Construction of Equivariant Networks for
Arbitrary Matrix Groups
- URL: http://arxiv.org/abs/2402.15413v1
- Date: Fri, 23 Feb 2024 16:19:49 GMT
- Authors: Sourya Basu, Suhas Lohit, Matthew Brand
- Abstract summary: Group equivariant networks are useful in a wide range of deep learning tasks.
Here, we introduce Group Representation Networks (G-RepsNets), a lightweight equivariant network for arbitrary matrix groups.
We show that G-RepsNet is competitive with G-FNO (Helwig et al., 2023) on solving PDEs and with EGNN (Satorras et al., 2021) on N-body predictions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Group equivariance is a strong inductive bias useful in a wide range of deep
learning tasks. However, constructing efficient equivariant networks for
general groups and domains is difficult. Recent work by Finzi et al. (2021)
directly solves the equivariance constraint for arbitrary matrix groups to
obtain equivariant MLPs (EMLPs). But this method does not scale well and
scaling is crucial in deep learning. Here, we introduce Group Representation
Networks (G-RepsNets), a lightweight equivariant network for arbitrary matrix
groups with features represented using tensor polynomials. The key intuition
for our design is that using tensor representations in the hidden layers of a
neural network along with simple inexpensive tensor operations can lead to
expressive universal equivariant networks. We find G-RepsNet to be competitive
with EMLP on several tasks with group symmetries such as O(5), O(1, 3), and O(3)
with scalars, vectors, and second-order tensors as data types. On image
classification tasks, we find that G-RepsNet using second-order representations
is competitive with, and often even outperforms, sophisticated state-of-the-art
equivariant models such as GCNNs (Cohen & Welling, 2016a) and E(2)-CNNs (Weiler
& Cesa, 2019). To further illustrate the generality of our approach, we show
that G-RepsNet is competitive with G-FNO (Helwig et al., 2023) on solving PDEs
and with EGNN (Satorras et al., 2021) on N-body predictions, while being
efficient.
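The design intuition above admits a compact illustration. Below is a minimal NumPy sketch of one layer in this spirit, not the authors' implementation; all names and shapes are illustrative assumptions. Features are first-order tensors (one vector per channel), channel mixing with a learned matrix commutes with any matrix group acting on the representation axis, and the nonlinearity enters only through channel norms, which are invariant under orthogonal groups such as O(3).

```python
import numpy as np

def equivariant_layer(X, W):
    """One hypothetical first-order-tensor layer.

    X : (C_in, n) array, C_in vector channels in R^n.
    W : (C_out, C_in) learned channel-mixing matrix.

    Mixing channels commutes with any matrix group acting on the last
    axis, since (W @ X) @ R.T == W @ (X @ R.T).  The nonlinearity sees
    the group axis only through per-channel norms, which are invariant
    for orthogonal R, so it rescales each vector without rotating it.
    """
    H = W @ X                                    # equivariant channel mixing
    gate = np.tanh(np.linalg.norm(H, axis=1))    # invariant scalar per channel
    return gate[:, None] * H                     # still equivariant

# Sanity check: the layer commutes with a random rotation in O(3).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
W = rng.standard_normal((8, 4))
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(equivariant_layer(X @ R.T, W),
                   equivariant_layer(X, W) @ R.T)
```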
Related papers
- Learnable Commutative Monoids for Graph Neural Networks
Graph neural networks (GNNs) are highly sensitive to the choice of aggregation function.
We show that GNNs equipped with recurrent aggregators are competitive with state-of-the-art permutation-invariant aggregators.
We propose a framework for constructing learnable, commutative, associative binary operators.
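As a rough illustration of what such an operator can look like, here is a hedged NumPy sketch, not the paper's construction; all weights and sizes are toy assumptions. Commutativity is enforced by feeding the network only symmetric functions of the pair, and aggregation proceeds by a balanced binary-tree reduction.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8                                     # toy feature dimension
W1 = 0.1 * rng.standard_normal((32, 2 * D))
W2 = 0.1 * rng.standard_normal((D, 32))

def binop(x, y):
    """Commutative by construction: (x + y, |x - y|) is symmetric in
    (x, y), so binop(x, y) == binop(y, x) for any weights.  Associativity
    is a soft property that training would have to encourage."""
    z = np.concatenate([x + y, np.abs(x - y)])
    return W2 @ np.maximum(W1 @ z, 0.0)

def aggregate(xs):
    """Balanced binary-tree reduction: O(log n) depth rather than the
    O(n) chain of a recurrent aggregator."""
    xs = list(xs)
    while len(xs) > 1:
        nxt = [binop(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:
            nxt.append(xs[-1])            # odd element passes through
        xs = nxt
    return xs[0]

out = aggregate(rng.standard_normal((5, D)))
```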
arXiv Detail & Related papers (2022-12-16T15:43:41Z) - Equivariant Transduction through Invariant Alignment [71.45263447328374]
We introduce a novel group-equivariant architecture that incorporates a group-invariant hard alignment mechanism.
We find that our network's structure allows it to develop stronger equivariant properties than existing group-equivariant approaches.
We additionally find that it outperforms previous group-equivariant networks empirically on the SCAN task.
arXiv Detail & Related papers (2022-09-22T11:19:45Z) - A General Framework For Proving The Equivariant Strong Lottery Ticket
Hypothesis [15.376680573592997]
Modern neural networks are capable of incorporating more than just translation symmetry.
We generalize the Strong Lottery Ticket Hypothesis (SLTH) to functions that preserve the action of the group $G$.
We prove our theory for overparametrized $\text{E}(2)$-steerable CNNs and message passing GNNs.
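The SLTH argument is existential, but its flavor can be seen in a toy NumPy sketch (my illustration, not the paper's proof): a single target weight is approximated purely by selecting, never training, a subset of random candidate weights, here via a greedy subset-sum heuristic.

```python
import numpy as np

def prune_to_match(target, candidates):
    """Greedily keep candidates whose inclusion moves the running sum
    closer to the target; selection alone does the 'learning'."""
    mask = np.zeros(len(candidates), dtype=bool)
    total = 0.0
    for i in np.argsort(-np.abs(candidates)):   # largest magnitudes first
        if abs(total + candidates[i] - target) < abs(total - target):
            mask[i] = True
            total += candidates[i]
    return mask, total

rng = np.random.default_rng(2)
mask, approx = prune_to_match(0.7312, rng.uniform(-1, 1, 64))
print(approx)   # typically lands close to 0.7312
```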
arXiv Detail & Related papers (2022-06-09T04:40:18Z) - Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
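Self-ensembling of this teacher-student kind is usually realized as an exponential moving average of the student's weights; the sketch below shows that generic update (an assumption about the specifics, not SE-GAN's verbatim code).

```python
def update_teacher(teacher, student, momentum=0.999):
    """Mean-teacher style self-ensembling: the teacher's parameters track
    an exponential moving average of the student's, yielding a smoother,
    more stable model for generating pseudo-targets."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher, student)]
```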
arXiv Detail & Related papers (2021-12-15T09:50:25Z) - Exploiting Redundancy: Separable Group Convolutional Networks on Lie
Groups [14.029933823101084]
Group convolutional neural networks (G-CNNs) have been shown to increase parameter efficiency and model accuracy.
In this work, we investigate the properties of representations learned by regular G-CNNs, and show considerable parameter redundancy in group convolution kernels.
We introduce convolution kernels that are separable over the subgroup and channel dimensions.
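The parameter saving from separability is easy to see in a toy NumPy sketch (illustrative shapes, not the paper's code): a full regular group-conv kernel stores one weight per (subgroup element, channel pair), while the separable variant factorizes it into a profile over the subgroup axis times a channel-mixing kernel.

```python
import numpy as np

G, C_out, C_in = 8, 16, 16       # toy sizes: subgroup elements, channels
rng = np.random.default_rng(3)

# Non-separable regular group convolution kernel.
k_full = rng.standard_normal((G, C_out, C_in))     # G * C_out * C_in weights

# Separable: a subgroup-axis profile times a pointwise channel mixer.
k_group = rng.standard_normal(G)                   # G weights
k_chan = rng.standard_normal((C_out, C_in))        # C_out * C_in weights
k_sep = np.einsum('g,oi->goi', k_group, k_chan)    # same shape as k_full

print(k_full.size, k_group.size + k_chan.size)     # 2048 vs. 264 parameters
```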
arXiv Detail & Related papers (2021-10-25T15:56:53Z) - Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
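The mechanism is simple enough to demonstrate in a few lines. The NumPy sketch below (my illustration; FA generally uses an input-dependent frame that can be far smaller than the group) takes the simplest special case, the sign-flip group Z2, where the frame is the whole group, and makes an arbitrary backbone exactly equivariant.

```python
import numpy as np

def backbone(x, W):
    """Any unconstrained network; here a tiny MLP with no symmetry."""
    return np.tanh(W @ x)

def frame_average(x, W, frame=(1.0, -1.0)):
    """Average g . backbone(g^-1 . x) over the frame.  For Z2 acting by
    x -> -x (where g == g^-1), the result is exactly equivariant no
    matter what the backbone computes."""
    return np.mean([g * backbone(g * x, W) for g in frame], axis=0)

rng = np.random.default_rng(4)
x, W = rng.standard_normal(5), rng.standard_normal((5, 5))
assert np.allclose(frame_average(-x, W), -frame_average(x, W))
```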
arXiv Detail & Related papers (2021-10-07T11:05:23Z) - A Practical Method for Constructing Equivariant Multilayer Perceptrons
for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups.
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems.
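The general recipe can be sketched for a finite generator set (Finzi et al. also handle continuous groups via Lie-algebra generators; the example below is my toy reduction, not their library). An equivariant layer W must satisfy rho_out(g) W = W rho_in(g) for every generator g, which after vectorization becomes a linear system whose null space spans all equivariant layers.

```python
import numpy as np
from scipy.linalg import null_space

# Constraint: rho_out(g) @ W == W @ rho_in(g) for each group generator g.
# With vec(A X B) = (B.T kron A) vec(X), this reads
# (rho_in(g)^{-T} kron rho_out(g) - I) vec(W) = 0.
P = np.roll(np.eye(4), 1, axis=0)       # generator of C4: cyclic shift on R^4
rho_in = rho_out = P                    # same representation on both sides

n = P.shape[0]
C = np.kron(np.linalg.inv(rho_in).T, rho_out) - np.eye(n * n)
basis = null_space(C)                   # columns span all vec(W) solutions
print(basis.shape[1])                   # 4: the circulant matrices, as expected
```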
arXiv Detail & Related papers (2021-04-19T17:21:54Z) - Group Equivariant Neural Architecture Search via Group Decomposition and
Reinforcement Learning [17.291131923335918]
We prove a new group-theoretic result in the context of equivariant neural networks.
We also design an algorithm to construct equivariant networks that significantly improves computational complexity.
We use deep Q-learning to search for group equivariant networks that maximize performance.
arXiv Detail & Related papers (2021-04-10T19:37:25Z) - Permutation-equivariant and Proximity-aware Graph Neural Networks with
Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment the existing GNNs with stochastic node representations.
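One common way to realize this, and a plausible reading of the abstract rather than the paper's exact scheme, is to propagate fixed random node features alongside the learned ones, as in the hedged sketch below.

```python
import numpy as np

def smp_features(A, X, dim_r=8, hops=2, seed=0):
    """Concatenate fixed random node features E to the learned features X
    and run mean-aggregation message passing.  The propagated random part
    distinguishes distant but structurally identical nodes, so its inner
    products carry proximity information the deterministic part cannot."""
    rng = np.random.default_rng(seed)
    E = rng.standard_normal((A.shape[0], dim_r))    # fixed random node IDs
    H = np.concatenate([X, E], axis=1)
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    for _ in range(hops):
        H = (A @ H) / deg                           # one mean-aggregation hop
    return H
```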
arXiv Detail & Related papers (2020-09-05T16:46:56Z) - Towards Deeper Graph Neural Networks with Differentiable Group
Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique: differentiable group normalization (DGN).
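A hedged NumPy sketch of the DGN idea follows (the shapes and the residual weighting are my assumptions): nodes are softly assigned to groups, and each group is normalized with its own statistics, so distinct clusters keep distinct embeddings as depth grows.

```python
import numpy as np

def dgn(H, U, eps=1e-5):
    """H: (N, d) node embeddings; U: (d, G) learned assignment weights.
    Softly cluster nodes with S = softmax(H @ U), normalize each group's
    weighted embeddings separately, then add the result back residually."""
    logits = H @ U
    S = np.exp(logits - logits.max(axis=1, keepdims=True))
    S /= S.sum(axis=1, keepdims=True)          # (N, G) soft memberships
    out = np.zeros_like(H)
    for g in range(S.shape[1]):
        w = S[:, g:g + 1]                      # membership weights, (N, 1)
        mu = (w * H).sum(0) / w.sum()
        var = (w * (H - mu) ** 2).sum(0) / w.sum()
        out += w * (H - mu) / np.sqrt(var + eps)
    return H + out                             # residual connection (assumed)
```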
arXiv Detail & Related papers (2020-06-12T07:18:02Z) - Theoretical Aspects of Group Equivariant Neural Networks [9.449391486456209]
Group equivariant neural networks have been explored in the past few years and are interesting from theoretical and practical standpoints.
They leverage concepts from group representation theory, non-commutative harmonic analysis and differential geometry.
In practice, they have been shown to reduce sample and model complexity, notably in challenging tasks where input transformations such as arbitrary rotations are present.
arXiv Detail & Related papers (2020-04-10T17:57:27Z)