Equivariant neural networks and piecewise linear representation theory
- URL: http://arxiv.org/abs/2408.00949v1
- Date: Thu, 1 Aug 2024 23:08:37 GMT
- Title: Equivariant neural networks and piecewise linear representation theory
- Authors: Joel Gibson, Daniel Tubbenhauer, Geordie Williamson
- Abstract summary: Equivariant neural networks are neural networks with symmetry.
Motivated by the theory of group representations, we decompose the layers of an equivariant neural network into simple representations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Equivariant neural networks are neural networks with symmetry. Motivated by the theory of group representations, we decompose the layers of an equivariant neural network into simple representations. The nonlinear activation functions lead to interesting nonlinear equivariant maps between simple representations. For example, the rectified linear unit (ReLU) gives rise to piecewise linear maps. We show that these considerations lead to a filtration of equivariant neural networks, generalizing Fourier series. This observation might provide a useful tool for interpreting equivariant neural networks.
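As a concrete illustration of the setup in the abstract, here is a minimal NumPy sketch (not code from the paper): the symmetric group S_n acts on R^n by permuting coordinates, pointwise ReLU is equivariant for this action, and R^n splits into its trivial and standard simple summands, which ReLU mixes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.normal(size=n)

# The symmetric group S_n acts on R^n by permuting coordinates.
g = rng.permutation(n)
act = lambda sigma, v: v[sigma]
relu = lambda v: np.maximum(v, 0.0)

# Pointwise ReLU commutes with the action: relu(g.x) == g.relu(x).
assert np.allclose(relu(act(g, x)), act(g, relu(x)))

# The permutation representation R^n decomposes into two simple
# representations: the trivial summand (constant vectors) and the
# standard summand (coordinates summing to zero).
trivial = np.full(n, x.mean())
standard = x - trivial
assert np.allclose(trivial + standard, x)

# ReLU does not preserve this direct sum: it mixes the two summands,
# giving nonlinear (piecewise linear) equivariant maps between simples.
y = relu(x)
print("trivial part of relu(x): ", y.mean())
print("standard part of relu(x):", y - y.mean())
```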
Related papers
- Recurrent Neural Networks Learn to Store and Generate Sequences using Non-Linear Representations [54.17275171325324]
We present a counterexample to the Linear Representation Hypothesis (LRH).
When trained to repeat an input token sequence, neural networks learn to represent the token at each position with a particular order of magnitude, rather than a direction.
These findings strongly indicate that interpretability research should not be confined to the LRH.
arXiv Detail & Related papers (2024-08-20T15:04:37Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
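The following toy sketch (a hypothetical encoding in the spirit of the summary above, not the paper's exact construction) turns a small MLP into a graph whose nodes are neurons and whose edges carry weights; a GNN on such a graph is equivariant to hidden-neuron permutations by construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 2-layer MLP: 3 inputs -> 4 hidden -> 2 outputs.
sizes = [3, 4, 2]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]

# Encode the network as a graph: one node per neuron, one directed edge
# per weight; node features hold biases, edge features hold weights.
offsets = np.cumsum([0] + sizes)        # global neuron ids per layer
nodes, edges = [], []
for layer, size in enumerate(sizes):
    for i in range(size):
        bias = float(biases[layer - 1][i]) if layer > 0 else 0.0
        nodes.append({"id": int(offsets[layer]) + i, "layer": layer, "bias": bias})
for layer, W in enumerate(weights):
    for i in range(W.shape[0]):          # target neuron
        for j in range(W.shape[1]):      # source neuron
            edges.append({"src": int(offsets[layer]) + j,
                          "dst": int(offsets[layer + 1]) + i,
                          "weight": float(W[i, j])})

print(len(nodes), "nodes,", len(edges), "edges")
# A GNN on this graph treats permutations of hidden neurons as graph
# isomorphisms, so its predictions are equivariant to them by design.
```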
- On Representing Electronic Wave Functions with Sign Equivariant Neural Networks [10.80375466357108]
Recent neural networks have demonstrated impressively accurate approximations of electronic ground-state wave functions.
These neural networks typically consist of a permutation-equivariant neural network followed by a permutation-antisymmetric operation.
While accurate, such neural networks are computationally expensive.
arXiv Detail & Related papers (2024-03-08T12:13:11Z)
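A minimal NumPy sketch of the standard construction the summary above describes: permutation-equivariant per-electron features followed by a permutation-antisymmetric determinant. The orbitals function is a stand-in for a learned network, not the paper's sign-equivariant architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4                                    # number of electrons
X = rng.normal(size=(n, 3))              # toy electron positions

def orbitals(X):
    # Stand-in for a permutation-equivariant network: row i holds n
    # "orbital" values for electron i (simple powers of its radius).
    r = np.linalg.norm(X, axis=1)
    return r[:, None] ** np.arange(X.shape[0])[None, :]

def psi(X):
    # Antisymmetric operation: a Slater-style determinant. Swapping two
    # electrons swaps two rows and flips the sign of the determinant.
    return np.linalg.det(orbitals(X))

X_swapped = X.copy()
X_swapped[[0, 1]] = X_swapped[[1, 0]]
assert np.isclose(psi(X_swapped), -psi(X))
print("psi(X) =", psi(X), " psi(swapped X) =", psi(X_swapped))
```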
- A Characterization Theorem for Equivariant Networks with Point-wise Activations [13.00676132572457]
We prove that rotation-equivariant networks can only be invariant, as is the case for any network equivariant with respect to a connected compact group.
We show that feature spaces of disentangled steerable convolutional neural networks are trivial representations.
arXiv Detail & Related papers (2024-01-17T14:30:46Z)
- On the hardness of learning under symmetries [31.961154082757798]
We study the problem of learning equivariant neural networks via gradient descent.
Despite the inductive bias provided by symmetry, learning the complete classes of functions represented by equivariant neural networks via gradient descent remains hard.
arXiv Detail & Related papers (2024-01-03T18:24:18Z)
- Non Commutative Convolutional Signal Models in Neural Networks: Stability to Small Deformations [111.27636893711055]
We study the filtering and stability properties of non-commutative convolutional filters.
Our results have direct implications for group neural networks, multigraph neural networks and quaternion neural networks.
arXiv Detail & Related papers (2023-10-05T20:27:22Z)
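A toy NumPy sketch of the kind of non-commutative signal model the summary above refers to: a filter built as a polynomial in two shift operators that do not commute, so the order of the factors matters. Reading the two operators as edge types of a multigraph is an illustrative assumption, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6

# Two shift operators, e.g. adjacency-like matrices for two edge types
# of a multigraph. Generically they do not commute, so the algebra they
# generate is non-commutative.
S1, S2 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
print("commutator norm:", np.linalg.norm(S1 @ S2 - S2 @ S1))

def apply_filter(x, h):
    # A convolutional filter as a non-commutative polynomial in S1, S2:
    # h0*I + h1*S1 + h2*S2 + h3*S1@S2. The order of the factors matters.
    h0, h1, h2, h3 = h
    return h0 * x + h1 * (S1 @ x) + h2 * (S2 @ x) + h3 * (S1 @ (S2 @ x))

x = rng.normal(size=n)
print("filtered signal:", apply_filter(x, [0.5, 0.3, 0.2, 0.1]))
```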
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
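The weight-space symmetry the summary above exploits can be checked directly. The following sketch assumes a plain 2-layer ReLU MLP (not the paper's architecture) and verifies that permuting hidden neurons leaves the computed function unchanged.

```python
import numpy as np

rng = np.random.default_rng(4)
relu = lambda v: np.maximum(v, 0.0)

# A 2-layer MLP: y = W2 @ relu(W1 @ x + b1) + b2.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
mlp = lambda x, W1, b1, W2, b2: W2 @ relu(W1 @ x + b1) + b2

# Permuting hidden neurons (rows of W1 and b1, columns of W2) leaves
# the function unchanged -- the weight-space symmetry that permutation
# equivariant neural functionals are designed to respect.
perm = rng.permutation(4)
x = rng.normal(size=3)
assert np.allclose(mlp(x, W1, b1, W2, b2),
                   mlp(x, W1[perm], b1[perm], W2[:, perm], b2))
print(mlp(x, W1, b1, W2, b2))
```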
- In Search of Projectively Equivariant Networks [14.275801110186885]
We propose a way to construct a projectively equivariant neural network.
We show that our approach is the most general possible when building a network out of linear layers.
arXiv Detail & Related papers (2022-09-29T12:26:18Z)
- Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of non-commutative convolutional neural networks.
We show that non-commutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z)
- Lattice gauge equivariant convolutional neural networks [0.0]
We propose lattice gauge equivariant convolutional neural networks (L-CNNs) for generic machine learning applications.
We show that L-CNNs can learn and generalize gauge invariant quantities that traditional convolutional neural networks are incapable of finding.
arXiv Detail & Related papers (2020-12-23T19:00:01Z)
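A toy NumPy check, not the paper's L-CNN implementation, of the basic gauge-invariant quantity such networks are built around: the trace of a plaquette of SU(2) link variables is unchanged under a local gauge transformation.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_su2():
    # A random SU(2) matrix from a normalized quaternion.
    q = rng.normal(size=4)
    a, b, c, d = q / np.linalg.norm(q)
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

# Link variables around one plaquette; links 0, 1 are traversed forward
# and links 2, 3 backward (hence the daggers below).
U = [random_su2() for _ in range(4)]

def plaquette_trace(U):
    # Tr(U0 U1 U2^dag U3^dag): the basic gauge-invariant observable.
    return np.trace(U[0] @ U[1] @ U[2].conj().T @ U[3].conj().T)

# A local gauge transformation multiplies each link by group elements
# at its two endpoints; around a closed loop everything cancels inside
# the trace, so the plaquette trace is unchanged.
g = [random_su2() for _ in range(4)]    # one element per lattice corner
V = [g[0] @ U[0] @ g[1].conj().T,       # link corner0 -> corner1
     g[1] @ U[1] @ g[2].conj().T,       # link corner1 -> corner2
     g[3] @ U[2] @ g[2].conj().T,       # link corner3 -> corner2
     g[0] @ U[3] @ g[3].conj().T]       # link corner0 -> corner3
assert np.isclose(plaquette_trace(U), plaquette_trace(V))
print("plaquette trace:", plaquette_trace(U))
```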
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
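A discrete toy version of the construction in the entry above: convolution over the cyclic group Z_n is equivariant to cyclic shifts, and the paper generalizes exactly this identity to arbitrary Lie groups acting on continuous data. This sketch is illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(6)

def cyclic_conv(x, psi):
    # Group convolution over Z_n: (x * psi)[k] = sum_j x[j] * psi[(k - j) mod n].
    n = len(x)
    return np.array([sum(x[j] * psi[(k - j) % n] for j in range(n))
                     for k in range(n)])

n = 8
x, psi = rng.normal(size=n), rng.normal(size=n)
shift = lambda v, s: np.roll(v, s)      # the action of Z_n on signals

# Equivariance: convolving a shifted signal equals shifting the
# convolved signal.
s = 3
assert np.allclose(cyclic_conv(shift(x, s), psi),
                   shift(cyclic_conv(x, psi), s))
print(cyclic_conv(x, psi))
```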
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.