Compact Matrix Quantum Group Equivariant Neural Networks
- URL: http://arxiv.org/abs/2311.06358v2
- Date: Fri, 23 May 2025 14:30:04 GMT
- Title: Compact Matrix Quantum Group Equivariant Neural Networks
- Authors: Edward Pearce-Crump
- Abstract summary: Group equivariant neural networks have proven effective in modelling a wide range of tasks where the data lives in a classical geometric space. These networks are not suitable for learning from data that lives in a non-commutative geometry. We derive the existence of a new type of equivariant neural network, called compact matrix quantum group equivariant neural networks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Group equivariant neural networks have proven effective in modelling a wide range of tasks where the data lives in a classical geometric space and exhibits well-defined group symmetries. However, these networks are not suitable for learning from data that lives in a non-commutative geometry, described formally by non-commutative $C^{*}$-algebras, since the $C^{*}$-algebra of continuous functions on a compact matrix group is commutative. To address this limitation, we derive the existence of a new type of equivariant neural network, called compact matrix quantum group equivariant neural networks, which encode symmetries that are described by compact matrix quantum groups. We characterise the weight matrices that appear in these neural networks for the easy compact matrix quantum groups, which are defined by set partitions. As a result, we obtain new characterisations of equivariant weight matrices for some compact matrix groups that have not appeared previously in the machine learning literature.
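To make the abstract's key objects concrete, here is a minimal sketch of the classical analogue, not code from the paper: for the symmetric group $S_n$ (whose quantum analogue is one of the "easy" compact matrix quantum groups), the $S_n$-equivariant linear maps $\mathbb{R}^n \to \mathbb{R}^n$ are spanned by weight matrices indexed by the set partitions of a two-element set, namely the identity matrix (partition {{1,2}}) and the all-ones matrix (partition {{1},{2}}).

```python
import itertools
import numpy as np

def is_equivariant(W, n):
    """Check rho(g) W = W rho(g) for every permutation matrix rho(g) of S_n."""
    for perm in itertools.permutations(range(n)):
        P = np.eye(n)[list(perm)]          # permutation matrix for g
        if not np.allclose(P @ W, W @ P):
            return False
    return True

n = 4
I = np.eye(n)            # set partition {{1, 2}}: diagonal weight pattern
J = np.ones((n, n))      # set partition {{1}, {2}}: constant weight pattern
W = 2.0 * I + 0.5 * J    # any linear combination of the two is equivariant
assert is_equivariant(W, n)
```

The paper's contribution is the quantum-group generalisation of exactly this kind of characterisation, where the commutative function algebra on $S_n$ is replaced by a non-commutative $C^{*}$-algebra.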
Related papers
- MatrixNet: Learning over symmetry groups using learned group representations [13.19415425364914]
We propose MatrixNet, a neural network architecture that learns matrix representations of group element inputs instead of using predefined representations.
We show that MatrixNet respects group relations, allowing generalization to group elements of greater word length than those in the training set.
arXiv Detail & Related papers (2025-01-16T14:45:12Z) - A Diagrammatic Approach to Improve Computational Efficiency in Group Equivariant Neural Networks [1.9643748953805935]
Group equivariant neural networks are growing in importance owing to their ability to generalise well in applications where the data has known underlying symmetries. Recent characterisations of a class of these networks that use high-order tensor power spaces as their layers suggest that they have significant potential. We present a fast matrix multiplication algorithm for any equivariant weight matrix that maps between tensor power layer spaces in these networks for four groups.
arXiv Detail & Related papers (2024-12-14T14:08:06Z) - Learning Symmetries via Weight-Sharing with Doubly Stochastic Tensors [46.59269589647962]
Group equivariance has emerged as a valuable inductive bias in deep learning. Group equivariant methods require the groups of interest to be known beforehand. We show that when the dataset exhibits strong symmetries, the permutation matrices will converge to regular group representations.
arXiv Detail & Related papers (2024-12-05T20:15:34Z) - Symmetry-Based Structured Matrices for Efficient Approximately Equivariant Networks [5.187307904567701]
Group Matrices (GMs) are a forgotten precursor to the modern notion of regular representations of finite groups. We show that GMs can generalize classical LDR theory to general discrete groups. Our framework performs competitively with approximately equivariant NNs and other structured matrix-based methods.
arXiv Detail & Related papers (2024-09-18T07:52:33Z) - Monomial Matrix Group Equivariant Neural Functional Networks [1.797555376258229]
We extend the study of the group action on the network weights by incorporating scaling/sign-flipping symmetries.
We name our new family of NFNs the Monomial Matrix Group Equivariant Neural Functional Networks (Monomial-NFN).
arXiv Detail & Related papers (2024-09-18T04:36:05Z) - A quantum neural network framework for scalable quantum circuit approximation of unitary matrices [0.0]
We develop a quantum neural network framework for quantum circuit approximation of multi-qubit unitary gates.
Layers of the neural networks are defined by product of certain elements of the Standard Recursive Block Basis.
arXiv Detail & Related papers (2024-02-07T22:39:39Z) - Order-invariant two-photon quantum correlations in PT-symmetric interferometers [62.997667081978825]
Multiphoton correlations in linear photonic quantum networks are governed by matrix permanents.
We show that predicting the overall multiphoton behavior of a network from its individual building blocks typically defies intuition.
Our results underline new ways in which quantum correlations may be preserved in counterintuitive ways even in small-scale non-Hermitian networks.
arXiv Detail & Related papers (2023-02-23T09:43:49Z) - Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z) - VC dimensions of group convolutional neural networks [0.0]
We study the generalization capacity of group convolutional neural networks.
We identify precise estimates for the VC dimensions of simple sets of group convolutional neural networks.
arXiv Detail & Related papers (2022-12-19T14:43:22Z) - Connecting Permutation Equivariant Neural Networks and Partition Diagrams [0.0]
We show that all of the weight matrices that appear in permutation equivariant neural networks can be obtained from Schur-Weyl duality.
In particular, we adapt Schur-Weyl duality to derive a simple, diagrammatic method for calculating the weight matrices themselves.
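The counting result behind this diagrammatic correspondence can be sketched numerically (an illustration under standard Schur-Weyl theory, not the paper's own code): the dimension of the space of $S_n$-equivariant maps between tensor power spaces $(\mathbb{R}^n)^{\otimes k} \to (\mathbb{R}^n)^{\otimes l}$ equals the number of set partitions of $k + l$ elements, the Bell number $B(k + l)$, whenever $n \geq k + l$. The dimension can be computed as an average of fixed-point counts over the group.

```python
import itertools
from math import factorial

def equivariant_dim(n, k, l):
    """dim Hom_{S_n}((R^n)^(x)k, (R^n)^(x)l) = (1/n!) * sum_g fix(g)^(k+l),
    since the character of the k-th tensor power of the permutation
    representation at g is fix(g)^k."""
    total = 0
    for perm in itertools.permutations(range(n)):
        fixed = sum(1 for i, p in enumerate(perm) if i == p)
        total += fixed ** (k + l)
    return total // factorial(n)

# For n >= k + l the counts match the Bell numbers B(2)=2, B(3)=5, B(4)=15.
assert equivariant_dim(5, 1, 1) == 2
assert equivariant_dim(5, 2, 1) == 5
assert equivariant_dim(5, 2, 2) == 15
```

Each set partition of the $k + l$ indices corresponds to one basis weight matrix, which is what makes the diagrammatic calculation of the weight matrices possible.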
arXiv Detail & Related papers (2022-12-16T18:48:54Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Theory for Equivariant Quantum Neural Networks [0.0]
We present a theoretical framework to design equivariant quantum neural networks (EQNNs) for essentially any relevant symmetry group.
Our framework can be readily applied to virtually all areas of quantum machine learning.
arXiv Detail & Related papers (2022-10-16T15:42:21Z) - Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of non commutative convolutional neural networks.
We show that non commutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z) - A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups.
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems.
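A minimal numerical sketch of this kind of general solve (an illustration of the underlying linear-algebra idea, not the paper's actual algorithm): a weight matrix $W$ is equivariant iff $\rho_{\text{out}}(g)\,W - W\,\rho_{\text{in}}(g) = 0$ for all generators $g$, so $\mathrm{vec}(W)$ lies in the null space of the stacked matrices $I \otimes \rho_{\text{out}}(g) - \rho_{\text{in}}(g)^{T} \otimes I$, which can be found by SVD.

```python
import numpy as np

def equivariant_basis(generators_in, generators_out, tol=1e-10):
    """Return an orthonormal basis (vec'd columns) of equivariant maps W."""
    d_in = generators_in[0].shape[0]
    d_out = generators_out[0].shape[0]
    blocks = []
    for g_in, g_out in zip(generators_in, generators_out):
        # vec(A W B) = kron(B.T, A) vec(W), with column-stacking vec
        blocks.append(np.kron(np.eye(d_in), g_out) - np.kron(g_in.T, np.eye(d_out)))
    C = np.vstack(blocks)
    _, s, Vt = np.linalg.svd(C)
    null_mask = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))]) < tol
    return Vt[null_mask].T  # columns span {vec(W) : W is equivariant}

# Example: the cyclic group C_4 acting on R^4 by cyclic shift; the
# equivariant maps are exactly the circulant matrices, a 4-dimensional space.
shift = np.roll(np.eye(4), 1, axis=0)   # single generator of C_4
basis = equivariant_basis([shift], [shift])
assert basis.shape[1] == 4
```

Solving the constraint once per group, rather than hand-deriving weight-sharing patterns, is what lets such an approach cover arbitrary matrix groups.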
arXiv Detail & Related papers (2021-04-19T17:21:54Z) - The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z) - LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z) - Lorentz Group Equivariant Neural Network for Particle Physics [58.56031187968692]
We present a neural network architecture that is fully equivariant with respect to transformations under the Lorentz group.
For classification tasks in particle physics, we demonstrate that such an equivariant architecture leads to drastically simpler models that have relatively few learnable parameters.
arXiv Detail & Related papers (2020-06-08T17:54:43Z) - Entanglement Classification via Neural Network Quantum States [58.720142291102135]
In this paper we combine machine-learning tools and the theory of quantum entanglement to perform entanglement classification for multipartite qubit systems in pure states.
We use a parameterisation of quantum systems using artificial neural networks in a restricted Boltzmann machine (RBM) architecture, known as Neural Network Quantum States (NNS).
arXiv Detail & Related papers (2019-12-31T07:40:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.