Bispectral Neural Networks
- URL: http://arxiv.org/abs/2209.03416v5
- Date: Fri, 19 May 2023 19:17:35 GMT
- Title: Bispectral Neural Networks
- Authors: Sophia Sanborn, Christian Shewmake, Bruno Olshausen, Christopher
Hillar
- Abstract summary: We present a neural network architecture, Bispectral Neural Networks (BNNs), for learning representations invariant to the actions of compact commutative groups.
BNNs are able to simultaneously learn groups, their irreducible representations, and corresponding equivariant and complete-invariant maps.
- Score: 1.0323063834827415
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a neural network architecture, Bispectral Neural Networks (BNNs)
for learning representations that are invariant to the actions of compact
commutative groups on the space over which a signal is defined. The model
incorporates the ansatz of the bispectrum, an analytically defined group
invariant that is complete -- that is, it preserves all signal structure while
removing only the variation due to group actions. Here, we demonstrate that
BNNs are able to simultaneously learn groups, their irreducible
representations, and corresponding equivariant and complete-invariant maps
purely from the symmetries implicit in data. Further, we demonstrate that the
completeness property endows these networks with strong invariance-based
adversarial robustness. This work establishes Bispectral Neural Networks as a
powerful computational primitive for robust invariant representation learning.
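
To make the bispectrum ansatz concrete, here is a minimal sketch (illustrative code, not the paper's implementation) for the simplest compact commutative case: signals on the cyclic group Z/n, where the invariant can be computed directly from the DFT.

```python
# Minimal sketch: the third-order bispectrum of a 1-D signal under the
# cyclic translation group Z/n, the analytic invariant that BNNs take as
# their ansatz. Function and variable names are illustrative.
import numpy as np

def bispectrum(x: np.ndarray) -> np.ndarray:
    """B[k1, k2] = F[k1] * F[k2] * conj(F[(k1 + k2) mod n])."""
    F = np.fft.fft(x)
    n = len(F)
    k = np.arange(n)
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % n])

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
shifted = np.roll(x, 5)  # act on x by a cyclic translation

# Translation multiplies F[k] by a phase e^{-2*pi*i*k*t/n}; the three
# phases in B cancel, so the bispectrum is exactly translation-invariant:
assert np.allclose(bispectrum(x), bispectrum(shifted))
```

The three Fourier phases introduced by a translation cancel, so the statistic is exactly invariant; completeness means that, for generic signals (non-vanishing Fourier coefficients), x can be recovered from B up to a translation.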
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
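
A generic way to obtain the kind of lattice-symmetric weight sharing such networks rely on is to average an unconstrained layer over the symmetry group (a hedged sketch of the general mechanism, not the LENN construction itself; the cyclic group below stands in for the actual lattice symmetries):

```python
# Hedged sketch: "Reynolds-symmetrize" an unconstrained weight matrix over
# a finite group to make the linear layer exactly equivariant. Here the
# group is C4 acting on R^4 by cyclic permutation; a real lattice such as
# D2Q9 would use the permutation matrices of its rotation/reflection group.
import numpy as np

def perm_matrix(shift: int, n: int = 4) -> np.ndarray:
    return np.eye(n)[np.roll(np.arange(n), shift)]

group = [perm_matrix(s) for s in range(4)]

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W_eq = sum(g @ W @ g.T for g in group) / len(group)  # g.T = g^{-1} for permutations

x = rng.standard_normal(4)
g = group[1]
# Equivariance: transforming the input then applying the layer equals
# applying the layer and then transforming the output.
assert np.allclose(W_eq @ (g @ x), g @ (W_eq @ x))
```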
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - A Characterization Theorem for Equivariant Networks with Point-wise
Activations [13.00676132572457]
We prove that rotation-equivariant networks can only be invariant, as is the case for any network that is equivariant with respect to a connected compact group.
We show that feature spaces of disentangled steerable convolutional neural networks are trivial representations.
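
The flavor of the result can be checked numerically (an illustration, not the paper's proof): a point-wise nonlinearity commutes with a connected compact group action only on trivial representations, so e.g. ReLU breaks equivariance under the standard 2-D rotation representation:

```python
# Hedged numeric illustration: for the standard 2-D rotation representation
# of SO(2), a point-wise nonlinearity such as ReLU fails to commute with
# the group action. On the trivial representation (rotation acts as the
# identity) it commutes vacuously, matching the theorem's conclusion.
import numpy as np

def rot(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

relu = lambda v: np.maximum(v, 0.0)

v = np.array([1.0, -1.0])
R = rot(np.pi / 3)

print(relu(R @ v))   # activation after rotation
print(R @ relu(v))   # rotation after activation -- differs in general
assert not np.allclose(relu(R @ v), R @ relu(v))
```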
arXiv Detail & Related papers (2024-01-17T14:30:46Z) - Non Commutative Convolutional Signal Models in Neural Networks:
Stability to Small Deformations [111.27636893711055]
We study the filtering and stability properties of non-commutative convolutional filters.
Our results have direct implications for group neural networks, multigraph neural networks, and quaternion neural networks.
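
For concreteness, a hedged sketch of the underlying object (names illustrative): group convolution on the smallest non-commutative group, S3, where f * h and h * f generally differ:

```python
# Hedged sketch: convolution of signals defined on the symmetric group S3.
# For commutative groups convolution is symmetric; for S3 it is not, which
# is the source of the extra care needed in the stability analysis.
import itertools
import numpy as np

elems = list(itertools.permutations(range(3)))           # the 6 elements of S3
idx = {p: i for i, p in enumerate(elems)}
compose = lambda p, q: tuple(p[q[i]] for i in range(3))  # (p o q)
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

def conv(f: np.ndarray, h: np.ndarray) -> np.ndarray:
    """(f * h)(g) = sum_u f(u) h(u^{-1} o g)."""
    out = np.zeros(len(elems))
    for g in elems:
        out[idx[g]] = sum(f[idx[u]] * h[idx[compose(inverse(u), g)]] for u in elems)
    return out

rng = np.random.default_rng(0)
f, h = rng.standard_normal(6), rng.standard_normal(6)
# Non-commutativity of the group shows up as f * h != h * f:
print(np.allclose(conv(f, h), conv(h, f)))  # False (generically)
```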
arXiv Detail & Related papers (2023-10-05T20:27:22Z) - Towards Rigorous Understanding of Neural Networks via
Semantics-preserving Transformations [0.0]
We present an approach to the precise and global verification and explanation of Rectifier Neural Networks.
Key to our approach is the symbolic execution of these networks, which allows the construction of semantically equivalent Typed Affine Decision Structures.
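
A minimal sketch of the underlying idea (a toy network, not the paper's tool): each ReLU activation pattern of a rectifier network fixes an affine map valid on a polyhedral region, so symbolic execution can enumerate the finitely many (region, affine map) cases:

```python
# Hedged sketch: every ReLU activation pattern fixes an affine function,
# so a small rectifier net decomposes into finitely many (region, affine
# map) cases -- the raw material of Typed Affine Decision Structures.
# The toy network below is illustrative, not from the paper.
import itertools
import numpy as np

W1, b1 = np.array([[1.0], [-1.0]]), np.array([0.0, 1.0])   # 1 -> 2
W2, b2 = np.array([[1.0, 2.0]]), np.array([0.5])           # 2 -> 1

for pattern in itertools.product([0, 1], repeat=2):
    D = np.diag(pattern)                  # fix which ReLUs are "on"
    A = W2 @ D @ W1                       # effective affine map on this region
    c = W2 @ D @ b1 + b2
    # The region itself is the polyhedron where sign(W1 x + b1) matches
    # `pattern`; symbolic execution carries that constraint alongside (A, c).
    print(f"pattern {pattern}: f(x) = {A.ravel()[0]:+.1f}*x {c[0]:+.1f}")
```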
arXiv Detail & Related papers (2023-01-19T11:35:07Z) - Equivariant and Steerable Neural Networks: A review with special
emphasis on the symmetric group [0.76146285961466]
Convolutional neural networks revolutionized computer vision and natural language processing.
Their efficiency, as compared to fully connected neural networks, has its origin in their architecture.
We review the architecture of such networks, including equivariant layers and filter banks, activations with capsules, and group pooling.
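
As a toy version of two of those ingredients (illustrative code, assuming the cyclic group as the symmetry), a lifting layer that correlates a filter with every group element is equivariant, and max-pooling its responses over the group yields an invariant:

```python
# Hedged sketch: an equivariant "lifting" layer followed by group pooling.
# Filter and sizes are illustrative.
import numpy as np

def lift(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Equivariant layer: one response per group element (cyclic shift)."""
    return np.array([np.dot(np.roll(x, s), w) for s in range(len(x))])

rng = np.random.default_rng(0)
x, w = rng.standard_normal(8), rng.standard_normal(8)

feat = lift(x, w)
feat_shifted = lift(np.roll(x, 3), w)

# Equivariance: shifting the input permutes the group-indexed responses...
assert np.allclose(feat_shifted, np.roll(feat, -3))
# ...so pooling over the group gives an invariant scalar.
assert np.isclose(feat.max(), feat_shifted.max())
```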
arXiv Detail & Related papers (2023-01-08T11:05:31Z) - Equivariant Transduction through Invariant Alignment [71.45263447328374]
We introduce a novel group-equivariant architecture that incorporates a group-invariant hard alignment mechanism.
We find that our network's structure allows it to develop stronger equivariant properties than existing group-equivariant approaches.
We additionally find that it outperforms previous group-equivariant networks empirically on the SCAN task.
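
A hedged sketch of what a group-invariant hard alignment buys (an illustration of the mechanism, not the paper's architecture): align every input to a canonical representative of its group orbit, so that any downstream network becomes invariant by construction:

```python
# Hedged sketch: pick, for each input, the group element (here: a cyclic
# shift) that sends it to a canonical form, e.g. the lexicographically
# smallest rotation. Inputs in the same orbit align to the same
# representative, so whatever is applied afterwards is invariant.
import numpy as np

def hard_align(x: np.ndarray) -> np.ndarray:
    shifts = [tuple(np.roll(x, s)) for s in range(len(x))]
    return np.array(min(shifts))  # canonical (lexicographically least) rotation

x = np.array([3, 1, 4, 1, 5])
for s in range(5):
    assert np.array_equal(hard_align(np.roll(x, s)), hard_align(x))
print(hard_align(x))  # [1 4 1 5 3] -- the same for every rotation of x
```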
arXiv Detail & Related papers (2022-09-22T11:19:45Z) - Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
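
A simplified sketch of the construction as I read it (details and names here are illustrative, not the paper's code): each topographic latent divides a Gaussian variable by the pooled energy of a neighborhood of auxiliary Gaussians, which ties the scales of nearby units together:

```python
# Hedged sketch of topographically organized latents: pooling u^2 over a
# neighborhood of each unit couples the scales of nearby latents, so their
# energies correlate -- the signature of topographic organization.
import numpy as np

rng = np.random.default_rng(0)
n, width = 64, 5                     # latent size, neighborhood width
z = rng.standard_normal((10_000, n))
u = rng.standard_normal((10_000, n))

# Pool u^2 over a cyclic neighborhood of each unit (illustrative choice).
pooled = sum(np.roll(u**2, k, axis=1) for k in range(-(width // 2), width // 2 + 1))
t = z / np.sqrt(pooled)

# Nearby units share pooled scales, so their energies correlate:
corr = np.corrcoef(t**2, rowvar=False)
print(corr[0, 1], corr[0, n // 2])   # neighbor correlation >> distant one
```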
arXiv Detail & Related papers (2021-09-03T09:25:57Z) - Convolutional Filtering and Neural Networks with Non Commutative
Algebras [153.20329791008095]
We study the generalization of non-commutative convolutional neural networks.
We show that non-commutative convolutional architectures can be stable to deformations on the space of operators.
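
In the algebraic picture these papers build on, a convolutional filter is a polynomial in a shift operator; below is a minimal sketch for the cyclic (commutative) special case, where it reduces to circular convolution (non-commutative models replace the single shift by several non-commuting generators):

```python
# Hedged sketch: a filter as a polynomial in a "shift" operator S.
# With the cyclic shift for S this recovers ordinary circular convolution.
import numpy as np

n = 8
S = np.roll(np.eye(n), 1, axis=0)            # cyclic shift operator
h = np.array([0.5, 0.3, 0.2])                # filter taps

H = sum(h_k * np.linalg.matrix_power(S, k) for k, h_k in enumerate(h))

rng = np.random.default_rng(0)
x = rng.standard_normal(n)

# Matrix-polynomial filtering agrees with circular convolution:
ref = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(np.r_[h, np.zeros(n - len(h))])))
assert np.allclose(H @ x, ref)
```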
arXiv Detail & Related papers (2021-08-23T04:22:58Z) - Stability of Algebraic Neural Networks to Small Perturbations [179.55535781816343]
Algebraic neural networks (AlgNNs) are composed of a cascade of layers, each one associated with an algebraic signal model.
We show how any architecture that uses a formal notion of convolution can be stable beyond particular choices of the shift operator.
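
A numeric illustration of the stability question (a sketch, not the paper's bound): perturb the shift operator slightly and the polynomial filter's output moves by an amount of the same order:

```python
# Hedged sketch: compare a polynomial filter's output under a nominal and
# a slightly perturbed shift operator. The output perturbation scales with
# the operator perturbation -- the kind of bound the paper makes formal.
import numpy as np

def poly_filter(S: np.ndarray, h: np.ndarray) -> np.ndarray:
    return sum(h_k * np.linalg.matrix_power(S, k) for k, h_k in enumerate(h))

n = 8
S = np.roll(np.eye(n), 1, axis=0)         # nominal shift operator
rng = np.random.default_rng(0)
E = rng.standard_normal((n, n))
E *= 1e-3 / np.linalg.norm(E, 2)          # small perturbation, ||E|| = 1e-3

h = np.array([0.5, 0.3, 0.2])
x = rng.standard_normal(n)
x /= np.linalg.norm(x)

diff = np.linalg.norm(poly_filter(S + E, h) @ x - poly_filter(S, h) @ x)
print(diff)   # same order as ||E||, i.e. ~1e-3
```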
arXiv Detail & Related papers (2020-10-22T09:10:16Z)