In Search of Projectively Equivariant Networks
- URL: http://arxiv.org/abs/2209.14719v3
- Date: Wed, 20 Dec 2023 16:08:32 GMT
- Title: In Search of Projectively Equivariant Networks
- Authors: Georg Bökman, Axel Flinth, Fredrik Kahl
- Abstract summary: We propose a way to construct a projectively equivariant neural network.
We show that our approach is the most general possible when building a network out of linear layers.
- Score: 14.275801110186885
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Equivariance of linear neural network layers is well studied. In this work,
we relax the equivariance condition to only be true in a projective sense. We
propose a way to construct a projectively equivariant neural network through
building a standard equivariant network where the linear group representations
acting on each intermediate feature space are "multiplicatively modified lifts"
of projective group representations. By theoretically studying the relation of
projectively and linearly equivariant linear layers, we show that our approach
is the most general possible when building a network out of linear layers. The
theory is showcased in two simple experiments.
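Concretely, a map f between representation spaces is projectively equivariant if f(rho_in(g) x) = lambda(g) * rho_out(g) * f(x) for some nonzero scalar lambda(g). The following is a minimal numpy sketch of that relaxed condition for a linear layer; the sign-flip group and helper function are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def projective_equivariance_defect(W, rho_in, rho_out):
    """For a linear layer W, find the scalar lam minimising
    ||W @ rho_in - lam * rho_out @ W||_F and return (lam, residual).
    Residual ~ 0 means W is projectively equivariant for this group element;
    lam == 1 would additionally mean ordinary equivariance."""
    A = W @ rho_in
    B = rho_out @ W
    lam = np.sum(A * B) / np.sum(B * B)      # least-squares optimal scalar
    residual = np.linalg.norm(A - lam * B)
    return lam, residual

rng = np.random.default_rng(0)
n = 4
W = rng.standard_normal((n, n))

# Z_2 acting by a sign flip on the input and trivially on the output:
# W(-x) = -W(x), so W is projectively equivariant (lam = -1) even though
# it is not equivariant in the ordinary sense (lam != 1).
rho_in = -np.eye(n)
rho_out = np.eye(n)

lam, res = projective_equivariance_defect(W, rho_in, rho_out)
print(lam, res)   # approx -1.0, approx 0.0
```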
Related papers
- Equivariant neural networks and piecewise linear representation theory [0.0]
Equivariant neural networks are neural networks with symmetry.
Motivated by the theory of group representations, we decompose the layers of an equivariant neural network into simple representations.
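As a hedged illustration of such a decomposition in the simplest setting (this example is ours, not taken from the paper): the permutation representation of the symmetric group on R^n splits into the trivial subrepresentation of constant vectors and the standard subrepresentation of mean-zero vectors, and an equivariant linear layer acts by a single scalar on each block.

```python
import numpy as np

n = 5
ones = np.ones((n, n)) / n
P_triv = ones                # projector onto constant vectors (trivial irrep)
P_std = np.eye(n) - ones     # projector onto mean-zero vectors (standard irrep)

# A generic S_n-equivariant linear layer on R^n has the form a*I + b*(1 1^T).
a, b = 2.0, 0.5
W = a * np.eye(n) + b * np.ones((n, n))

# On each isotypic block the layer acts by a single scalar (Schur's lemma):
lam_triv = np.trace(P_triv @ W @ P_triv) / np.trace(P_triv)   # = a + b*n
lam_std  = np.trace(P_std  @ W @ P_std)  / np.trace(P_std)    # = a

# W is exactly the sum of these scalar actions on the two blocks.
W_reconstructed = lam_triv * P_triv + lam_std * P_std
print(np.allclose(W, W_reconstructed))   # True
print(lam_triv, lam_std)                 # 4.5, 2.0
```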
arXiv Detail & Related papers (2024-08-01T23:08:37Z)
- Investigating how ReLU-networks encode symmetries [13.935148870831396]
We investigate whether equivariance of a network implies that all layers are equivariant.
We conjecture that CNNs trained to be equivariant will exhibit layerwise equivariance.
We show that it is typically easier to merge a network with a group-transformed version of itself than to merge two different networks.
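A minimal numpy sketch of the distinction at stake (the toy two-layer network below is an illustrative assumption, not the paper's experimental setup): each layer is individually equivariant to permutations, which forces end-to-end equivariance; the paper asks about the converse direction for trained ReLU networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def perm_matrix(perm):
    return np.eye(len(perm))[perm]

def layer(W):
    # ReLU acts pointwise, so it commutes with permutations of the coordinates.
    return lambda x: np.maximum(W @ x, 0.0)

def equivariant_weights(a, b, n):
    # Matrices of the form a*I + b*(1 1^T) commute with every permutation matrix.
    return a * np.eye(n) + b * np.ones((n, n))

W1, W2 = equivariant_weights(1.0, 0.3, n), equivariant_weights(-0.5, 0.2, n)
f1, f2 = layer(W1), layer(W2)
net = lambda x: f2(f1(x))

P = perm_matrix(rng.permutation(n))
x = rng.standard_normal(n)

print(np.allclose(net(P @ x), P @ net(x)))   # end-to-end equivariance
print(np.allclose(f1(P @ x), P @ f1(x)))     # layer 1 is equivariant on its own
print(np.allclose(f2(P @ x), P @ f2(x)))     # layer 2 is equivariant on its own
```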
arXiv Detail & Related papers (2023-05-26T15:23:20Z)
- Self-Supervised Learning for Group Equivariant Neural Networks [75.62232699377877]
Group equivariant neural networks are models whose structure is constrained to commute with transformations of the input.
We propose two concepts for self-supervised tasks: equivariant pretext labels and invariant contrastive loss.
Experiments on standard image recognition benchmarks demonstrate that the equivariant neural networks exploit the proposed self-supervised tasks.
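The exact losses are defined in the paper; below is a rough, hypothetical sketch of an invariance-promoting contrastive loss over group-transformed views (an InfoNCE-style stand-in, not necessarily the authors' formulation).

```python
import numpy as np

def invariant_contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss that pulls together embeddings of a sample and its
    group-transformed view (positives) and pushes apart other samples in the
    batch (negatives), encouraging invariance of the embedding."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature            # (batch, batch) cosine similarities
    # diagonal entries are the positive pairs
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
batch, dim = 8, 16
z = rng.standard_normal((batch, dim))
z_transformed_view = z + 0.01 * rng.standard_normal((batch, dim))  # stand-in for f(g.x)
print(invariant_contrastive_loss(z, z_transformed_view))
```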
arXiv Detail & Related papers (2023-03-08T08:11:26Z)
- Exploring Linear Feature Disentanglement For Neural Networks [63.20827189693117]
Non-linear activation functions, e.g., Sigmoid, ReLU, and Tanh, have achieved great success in neural networks (NNs).
Because samples have complex non-linear characteristics, these activation functions aim to map samples from their original feature space into a linearly separable feature space.
This motivates us to explore whether all features need to be transformed by all non-linear functions in current typical NNs.
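As a hypothetical sketch of the kind of layer this question suggests (the split ratio and layer form are our assumptions, not the paper's architecture), one can pass only part of the features through a non-linearity and keep the rest on a purely linear path.

```python
import numpy as np

def partially_nonlinear_layer(x, W, nonlinear_fraction=0.5):
    """Apply a linear map, then pass only a fraction of the output features
    through ReLU and leave the remaining features on a linear (identity) path."""
    h = W @ x
    k = int(len(h) * nonlinear_fraction)
    h[:k] = np.maximum(h[:k], 0.0)   # non-linear part
    return h                         # remaining features stay linear

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
x = rng.standard_normal(8)
print(partially_nonlinear_layer(x, W))
```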
arXiv Detail & Related papers (2022-03-22T13:09:17Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that the proposed TinvNN can strictly guarantee transformation invariance and is general and flexible enough to be combined with existing neural networks.
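Below is a rough numpy sketch of the classical multi-dimensional scaling step, assuming the invariant initial representation is built from pairwise distances (which are unchanged by rotations, reflections, and translations); the paper's actual modification of MDS may differ.

```python
import numpy as np

def classical_mds(points, k):
    """Classical MDS: recover k-dimensional coordinates from pairwise distances.
    Since distances are invariant to rotations, reflections and translations,
    the output depends on the point cloud only up to such transformations."""
    D2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    n = len(points)
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ D2 @ J                        # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))

# Apply a random rotation + translation to the input point cloud.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
X_moved = X @ Q + rng.standard_normal(3)

Y1, Y2 = classical_mds(X, 3), classical_mds(X_moved, 3)
# The embeddings agree up to an orthogonal transform; pairwise distances match.
d = lambda Y: np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.allclose(d(Y1), d(Y2)))   # True
```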
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
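The LieSelfAttention layer itself is more general; as a minimal stand-in for the underlying idea (our example, not the paper's layer), attention whose logits depend only on relative positions is unaffected by a global translation of the point set, translations being the simplest Lie group.

```python
import numpy as np

def relative_position_attention(positions, features):
    """Self-attention whose logits depend only on pairwise differences of
    positions, so the output is unchanged under a global translation of the
    input points (a minimal stand-in for equivariance to a Lie group)."""
    diffs = positions[:, None, :] - positions[None, :, :]      # (n, n, d)
    logits = -np.sum(diffs ** 2, axis=-1)                      # closer points attend more
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ features

rng = np.random.default_rng(0)
pos = rng.standard_normal((5, 3))
feat = rng.standard_normal((5, 8))
shift = rng.standard_normal(3)

out1 = relative_position_attention(pos, feat)
out2 = relative_position_attention(pos + shift, feat)          # translated input
print(np.allclose(out1, out2))   # True: invariant to translations of the positions
```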
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
- A Unifying View on Implicit Bias in Training Linear Neural Networks [31.65006970108761]
We study the implicit bias of gradient flow (i.e., gradient descent with infinitesimal step size) on linear neural network training.
We propose a tensor formulation of neural networks that includes fully-connected, diagonal, and convolutional networks as special cases.
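As a small illustrative sketch of implicit bias in the simplest, depth-one case (not the paper's tensor formulation): gradient descent started at the origin on an underdetermined least-squares problem converges to the minimum-Euclidean-norm interpolant; the paper studies how deeper, structured parameterizations change this bias.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 10, 20                       # underdetermined: more parameters than data
X = rng.standard_normal((m, n))
y = rng.standard_normal(m)

# Gradient descent on the least-squares loss, started at the origin.
w = np.zeros(n)
lr = 0.05
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y) / m

# The iterates stay in the row space of X, so gradient descent picks out the
# minimum-Euclidean-norm interpolant among all zero-loss solutions.
w_min_norm = np.linalg.pinv(X) @ y
print(np.allclose(w, w_min_norm))   # True (up to numerical tolerance)
```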
arXiv Detail & Related papers (2020-10-06T06:08:35Z)
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
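The paper's construction lifts data to group elements and convolves there; as a minimal discrete stand-in (our example, not the paper's Lie group layer), convolution over the cyclic group of circular shifts commutes with those shifts.

```python
import numpy as np

def cyclic_group_conv(signal, kernel):
    """Convolution over the cyclic group Z_n (circular convolution).
    Shifting the input cyclically shifts the output by the same amount,
    i.e. the layer is equivariant to the group of cyclic shifts."""
    n = len(signal)
    return np.array([sum(signal[(i - j) % n] * kernel[j] for j in range(len(kernel)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
k = rng.standard_normal(3)
shift = 3

out_then_shift = np.roll(cyclic_group_conv(x, k), shift)
shift_then_out = cyclic_group_conv(np.roll(x, shift), k)
print(np.allclose(out_then_shift, shift_then_out))   # True: the layer commutes with shifts
```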