Equivariant and Steerable Neural Networks: A review with special
emphasis on the symmetric group
- URL: http://arxiv.org/abs/2301.03019v1
- Date: Sun, 8 Jan 2023 11:05:31 GMT
- Title: Equivariant and Steerable Neural Networks: A review with special
emphasis on the symmetric group
- Authors: Patrick Kr\"uger, Hanno Gottschalk
- Abstract summary: Convolutional neural networks revolutionized computer vision and natural language processing.
Their efficiency, as compared to fully connected neural networks, has its origin in the architecture.
We review the architecture of such networks including equivariant layers and filter banks, activation with capsules and group pooling.
- Score: 0.76146285961466
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Convolutional neural networks revolutionized computer vision and natural
language processing. Their efficiency, as compared to fully connected neural
networks, has its origin in the architecture, where convolutions reflect the
translation invariance in space and time in pattern or speech recognition
tasks. Recently, Cohen and Welling have put this in the broader perspective of
invariance under symmetry groups, which leads to the concept of group
equivariant neural networks and more generally steerable neural networks. In
this article, we review the architecture of such networks including equivariant
layers and filter banks, activation with capsules and group pooling. We apply
this formalism to the symmetric group, for which we work out a number of
details on representations and capsules that are not found in the literature.
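The abstract's central notion, equivariance of a layer under a symmetry group, can be illustrated for the symmetric group S_n with a minimal numerical sketch. The layer below is a DeepSets-style permutation-equivariant linear map (not taken from the paper; the weights `lam` and `gam` are hypothetical), and the check verifies that permuting the input permutes the output accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
lam, gam = 0.7, 0.3  # hypothetical layer weights

def equivariant_layer(x):
    # S_n-equivariant linear map: scale each entry and add a
    # multiple of the mean; permuting inputs permutes outputs.
    return lam * x + gam * x.mean() * np.ones_like(x)

x = rng.normal(size=n)
P = np.eye(n)[rng.permutation(n)]  # random permutation matrix

# Equivariance: layer(P x) == P layer(x)
assert np.allclose(equivariant_layer(P @ x), P @ equivariant_layer(x))
```

The mean is invariant under any permutation, which is why this particular parameterization commutes with the S_n action.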
Related papers
- Closed-Form Interpretation of Neural Network Latent Spaces with Symbolic Gradients [0.0]
We introduce a framework for finding closed-form interpretations of neurons in latent spaces of artificial neural networks.
The interpretation framework is based on embedding trained neural networks into an equivalence class of functions that encode the same concept.
arXiv Detail & Related papers (2024-09-09T03:26:07Z) - Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework for learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - SO(2) and O(2) Equivariance in Image Recognition with
Bessel-Convolutional Neural Networks [63.24965775030674]
This work presents the development of Bessel-convolutional neural networks (B-CNNs).
B-CNNs exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters.
A study is carried out to assess the performance of B-CNNs compared to other methods.
arXiv Detail & Related papers (2023-04-18T18:06:35Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Bispectral Neural Networks [1.0323063834827415]
We present a neural network architecture, Bispectral Neural Networks (BNNs).
BNNs are able to simultaneously learn groups, their irreducible representations, and corresponding equivariant and complete-invariant maps.
arXiv Detail & Related papers (2022-09-07T18:34:48Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Similarity and Matching of Neural Network Representations [0.0]
We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks.
We aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer.
arXiv Detail & Related papers (2021-10-27T17:59:46Z)
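Several of the papers above (in particular "Permutation Equivariant Neural Functionals" and "Graph Neural Networks for Learning Equivariant Representations of Neural Networks") rest on the fact that hidden-layer neurons of a feedforward network have no inherent order. A minimal check of that symmetry, using a hypothetical one-hidden-layer ReLU network: permuting the hidden neurons (rows of the first weight matrix, columns of the second) leaves the computed function unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid, d_out = 4, 6, 3
W1 = rng.normal(size=(d_hid, d_in))
W2 = rng.normal(size=(d_out, d_hid))

def mlp(x, W1, W2):
    # One ReLU hidden layer, no biases, for simplicity.
    return W2 @ np.maximum(W1 @ x, 0.0)

P = np.eye(d_hid)[rng.permutation(d_hid)]  # permutation of hidden units
x = rng.normal(size=d_in)

# Relabeling hidden neurons does not change the network's output:
assert np.allclose(mlp(x, W1, W2), mlp(x, P @ W1, W2 @ P.T))
```

The elementwise ReLU commutes with permutations, which is exactly why the weight space carries this symmetry and why weight-processing networks are designed to be equivariant to it.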
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.