A General Framework for Equivariant Neural Networks on Reductive Lie
Groups
- URL: http://arxiv.org/abs/2306.00091v1
- Date: Wed, 31 May 2023 18:09:37 GMT
- Title: A General Framework for Equivariant Neural Networks on Reductive Lie
Groups
- Authors: Ilyes Batatia, Mario Geiger, Jose Munoz, Tess Smidt, Lior Silberman,
Christoph Ortner
- Abstract summary: Reductive Lie Groups play essential roles across scientific fields as diverse as high energy physics, quantum mechanics, quantum chromodynamics, molecular dynamics, computer vision, and imaging.
We present a general Equivariant Neural Network architecture capable of respecting the symmetries of the finite-dimensional representations of any reductive Lie Group G.
- Score: 2.0769531810371307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reductive Lie Groups, such as the orthogonal groups, the Lorentz group, or
the unitary groups, play essential roles across scientific fields as diverse as
high energy physics, quantum mechanics, quantum chromodynamics, molecular
dynamics, computer vision, and imaging. In this paper, we present a general
Equivariant Neural Network architecture capable of respecting the symmetries of
the finite-dimensional representations of any reductive Lie Group G. Our
approach generalizes the successful ACE and MACE architectures for atomistic
point clouds to any data equivariant to a reductive Lie group action. We also
introduce the lie-nn software library, which provides all the necessary tools
to develop and implement such general G-equivariant neural networks. It
implements routines for the reduction of generic tensor products of
representations into irreducible representations, making it easy to apply our
architecture to a wide range of problems and groups. The generality and
performance of our approach are demonstrated by applying it to the tasks of top
quark decay tagging (Lorentz group) and shape recognition (orthogonal group).
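As a concrete instance of the reduction such a library automates, consider SO(3), where the tensor product of irreps of degrees l1 and l2 decomposes into irreps of degrees |l1 - l2|, ..., l1 + l2. A minimal Python sketch of this selection rule (illustrative only; not the lie-nn API):

```python
# Minimal sketch of tensor-product reduction for SO(3) irreps.
# NOTE: purely illustrative; it shows the selection rule that a library
# such as lie-nn automates for general reductive groups, not its API.

def so3_tensor_product(l1: int, l2: int) -> list[int]:
    """Decompose the SO(3) irrep product l1 (x) l2 into irrep degrees."""
    return list(range(abs(l1 - l2), l1 + l2 + 1))

def so3_dim(l: int) -> int:
    """Dimension of the SO(3) irrep of degree l."""
    return 2 * l + 1

if __name__ == "__main__":
    l1, l2 = 1, 2
    irreps = so3_tensor_product(l1, l2)   # [1, 2, 3]
    # Sanity check: dimensions on both sides of the decomposition agree.
    assert so3_dim(l1) * so3_dim(l2) == sum(so3_dim(l) for l in irreps)
    print(f"{l1} (x) {l2} = {' (+) '.join(map(str, irreps))}")
```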
Related papers
- Learning Invariant Representations of Graph Neural Networks via Cluster Generalization [58.68231635082891]
Graph neural networks (GNNs) have become increasingly popular in modeling graph-structured data.
In this paper, we experimentally find that the performance of GNNs drops significantly under structure shift.
We propose the Cluster Information Transfer (CIT) mechanism, which can learn invariant representations for GNNs.
arXiv Detail & Related papers (2024-03-06T10:36:56Z)
- Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks [14.45619075342763]
We present a systematic method to induce a generalized neural network and its right inverse operator, called the ridgelet transform.
Because the ridgelet transform is a right inverse, it explicitly describes how the network's parameters must be arranged to represent a target function.
We present a new simple proof of the universality by using Schur's lemma in a unified manner covering a wide class of networks.
arXiv Detail & Related papers (2023-10-05T13:30:37Z)
- Bispectral Neural Networks [1.0323063834827415]
We present Bispectral Neural Networks (BNNs), an architecture that simultaneously learns groups, their irreducible representations, and the corresponding equivariant and complete-invariant maps.
arXiv Detail & Related papers (2022-09-07T18:34:48Z)
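The classical instance of such a complete invariant is the bispectrum of a signal under the cyclic group Z/n: B[k1, k2] = F[k1] F[k2] conj(F[k1 + k2]), where F is the discrete Fourier transform. A minimal numpy check of its shift-invariance (illustrative only; not the paper's architecture):

```python
import numpy as np

def bispectrum(x: np.ndarray) -> np.ndarray:
    """Bispectrum of a signal under the cyclic group Z/n:
    B[k1, k2] = F[k1] * F[k2] * conj(F[k1 + k2])."""
    f = np.fft.fft(x)
    n = len(x)
    k = np.arange(n)
    return f[:, None] * f[None, :] * np.conj(f[(k[:, None] + k[None, :]) % n])

rng = np.random.default_rng(0)
x = rng.normal(size=8)
# Shift-invariance: the bispectrum is unchanged under cyclic translation.
assert np.allclose(bispectrum(x), bispectrum(np.roll(x, 3)))
```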
- Generalization capabilities of neural networks in lattice applications [0.0]
We investigate the advantages of adopting translationally equivariant neural networks over non-equivariant ones.
We show that our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts.
arXiv Detail & Related papers (2021-12-23T11:48:06Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
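A minimal sketch of the symmetrization idea behind FA, here using the full group C4 as the frame (FA's contribution is that much smaller, input-dependent frames achieve the same invariance with fewer backbone evaluations):

```python
import numpy as np

def frame_average_invariant(f, x: np.ndarray) -> float:
    """Symmetrize an arbitrary backbone f into a C4-rotation-invariant
    function by averaging over a frame. Here the frame is the whole
    group C4 of 90-degree rotations, the simplest valid choice."""
    return np.mean([f(np.rot90(x, k)) for k in range(4)])

# Any backbone works; a fixed random linear readout stands in for an MLP.
rng = np.random.default_rng(1)
w = rng.normal(size=(5, 5))
backbone = lambda x: float(np.sum(w * x))

x = rng.normal(size=(5, 5))
# The averaged model is exactly invariant under 90-degree rotations.
assert np.isclose(frame_average_invariant(backbone, x),
                  frame_average_invariant(backbone, np.rot90(x)))
```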
- A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups.
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems.
arXiv Detail & Related papers (2021-04-19T17:21:54Z)
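The linear-algebra step behind such an algorithm can be sketched directly: equivariance of a linear layer is a linear constraint on its weight matrix, so a basis of equivariant layers falls out of a null-space computation. A minimal numpy sketch (illustrative; not the paper's EMLP library API):

```python
import numpy as np

def equivariant_basis(rho_in_gens, rho_out_gens):
    """Solve rho_out(g) @ W = W @ rho_in(g) for all group generators g.
    With numpy's row-major flatten, the constraint vectorizes to
    (kron(rho_out, I) - kron(I, rho_in.T)) @ vec(W) = 0, so a basis of
    equivariant maps W spans the null space of the stacked constraints."""
    m, n = rho_out_gens[0].shape[0], rho_in_gens[0].shape[0]
    C = np.vstack([np.kron(ro, np.eye(n)) - np.kron(np.eye(m), ri.T)
                   for ri, ro in zip(rho_in_gens, rho_out_gens)])
    _, s, vt = np.linalg.svd(C)
    return vt[s < 1e-10].reshape(-1, m, n)  # each slice is one basis W

# Example: linear maps R^2 -> R^2 equivariant to 90-degree rotations (C4).
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # generator of C4
basis = equivariant_basis([R], [R])
print(basis.shape)  # (2, 2, 2): spanned by the identity and the rotation
```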
- Generalization capabilities of translationally equivariant neural networks [0.0]
In this work, we focus on complex scalar field theory on a two-dimensional lattice and investigate the benefits of using group equivariant convolutional neural network architectures.
For a meaningful comparison, we conduct a systematic search for equivariant and non-equivariant neural network architectures and apply them to various regression and classification tasks.
We demonstrate that our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts.
arXiv Detail & Related papers (2021-03-26T18:53:36Z)
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend this line of work to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
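The recipe can be illustrated in a toy setting: if attention logits are computed from group invariants and the values transform covariantly, the resulting layer is equivariant. A hedged numpy sketch for rotations and translations of a point cloud (not the paper's LieSelfAttention layer):

```python
import numpy as np

def invariant_attention(points: np.ndarray) -> np.ndarray:
    """Toy self-attention over a point cloud: attention logits are pairwise
    distances (a group invariant) and values are relative positions, so the
    output is rotation-equivariant and translation-invariant."""
    diff = points[:, None, :] - points[None, :, :]      # (n, n, d)
    logits = -np.linalg.norm(diff, axis=-1)             # invariant scores
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)                   # softmax over j
    return (w[..., None] * diff).sum(axis=1)            # equivariant update

rng = np.random.default_rng(2)
x = rng.normal(size=(6, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
# Rotating the input rotates the output in the same way.
assert np.allclose(invariant_attention(x @ R.T), invariant_attention(x) @ R.T)
```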
- Lorentz Group Equivariant Neural Network for Particle Physics [58.56031187968692]
We present a neural network architecture that is fully equivariant with respect to transformations under the Lorentz group.
For classification tasks in particle physics, we demonstrate that such an equivariant architecture leads to drastically simpler models that have relatively few learnable parameters.
arXiv Detail & Related papers (2020-06-08T17:54:43Z)
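The basic Lorentz-invariant features such an architecture builds on are Minkowski inner products of 4-momenta; a minimal numpy check of their invariance under a boost (illustrative of the symmetry, not the paper's network):

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

def minkowski_gram(p: np.ndarray) -> np.ndarray:
    """Pairwise Minkowski inner products <p_i, p_j> of 4-momenta."""
    return p @ ETA @ p.T

# A boost along z with rapidity 0.5 leaves the Gram matrix unchanged.
phi = 0.5
boost = np.eye(4)
boost[0, 0] = boost[3, 3] = np.cosh(phi)
boost[0, 3] = boost[3, 0] = np.sinh(phi)

rng = np.random.default_rng(3)
p = rng.normal(size=(5, 4))
assert np.allclose(minkowski_gram(p), minkowski_gram(p @ boost.T))
```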
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
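The discrete prototype of such a group convolution is circular convolution over Z/n, which is exactly shift-equivariant. A minimal numpy sketch (the paper lifts this construction to continuous Lie groups; this toy is not its implementation):

```python
import numpy as np

def group_conv_z_n(x: np.ndarray, psi: np.ndarray) -> np.ndarray:
    """Convolution over the group Z/n (circular convolution):
    (x * psi)[t] = sum_s x[s] * psi[(t - s) mod n].
    It commutes with cyclic shifts, i.e. it is shift-equivariant."""
    n = len(x)
    t = np.arange(n)
    return np.array([np.sum(x * psi[(ti - t) % n]) for ti in t])

rng = np.random.default_rng(4)
x, psi = rng.normal(size=8), rng.normal(size=8)
# Shift equivariance: conv(shift(x)) == shift(conv(x)).
assert np.allclose(group_conv_z_n(np.roll(x, 2), psi),
                   np.roll(group_conv_z_n(x, psi), 2))
```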