Group-invariant tensor train networks for supervised learning
- URL: http://arxiv.org/abs/2206.15051v2
- Date: Wed, 27 Sep 2023 16:07:18 GMT
- Title: Group-invariant tensor train networks for supervised learning
- Authors: Brent Sprangers and Nick Vannieuwenhoven
- Abstract summary: We introduce a new numerical algorithm to construct a basis of tensors that are invariant under the action of normal matrix representations of an arbitrary discrete group.
The group-invariant tensors are then combined into a group-invariant tensor train network, which can be used as a supervised machine learning model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Invariance has recently proven to be a powerful inductive bias in machine
learning models. One such class of predictive or generative models is tensor
networks. We introduce a new numerical algorithm to construct a basis of
tensors that are invariant under the action of normal matrix representations of
an arbitrary discrete group. This method can be up to several orders of
magnitude faster than previous approaches. The group-invariant tensors are then
combined into a group-invariant tensor train network, which can be used as a
supervised machine learning model. We applied this model to a protein binding
classification problem, taking into account problem-specific invariances, and
obtained prediction accuracy in line with state-of-the-art deep learning
approaches.
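The abstract's central construction, a basis of tensors invariant under a group representation, can be illustrated by the classical group-averaging (Reynolds) projector. The sketch below is a baseline illustration under stated assumptions, not the paper's faster algorithm: it assumes the group is given as a finite list of unitary representation matrices, and all names are invented for this example.

```python
# Baseline sketch (NOT the paper's algorithm): build the group-averaging
# (Reynolds) operator and read off a basis of invariant tensors.
# Assumes a finite group given as unitary d x d representation matrices.
import numpy as np

def invariant_tensor_basis(rep, order):
    """Orthonormal basis of order-`order` tensors T satisfying
    rep(g)^{(x)order} vec(T) = vec(T) for every group element g."""
    d = rep[0].shape[0]
    # P = (1/|G|) * sum_g rep(g)^{(x)order} is idempotent and projects
    # onto the invariant subspace.
    P = np.zeros((d ** order, d ** order))
    for g in rep:
        K = g
        for _ in range(order - 1):
            K = np.kron(K, g)
        P += K
    P /= len(rep)
    # For unitary representations P is an orthogonal projector, so its
    # singular values are numerically 0 or 1; the left singular vectors
    # with value 1 span the invariant subspace.
    U, s, _ = np.linalg.svd(P)
    return U[:, s > 0.5].reshape(*([d] * order), -1)

# Toy usage: order-3 tensors on R^3 invariant under cyclic index shifts
# (Z/3Z acting by the same permutation matrix on every mode).
shift = np.roll(np.eye(3), 1, axis=0)
group = [np.linalg.matrix_power(shift, k) for k in range(3)]
B = invariant_tensor_basis(group, order=3)
print(B.shape)  # (3, 3, 3, 9): nine invariant basis tensors
```

The d^order-by-d^order projector makes this baseline exponentially expensive in the tensor order, which is presumably the cost that the paper's algorithm, reported to be up to several orders of magnitude faster, avoids.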
Related papers
- Learning equivariant tensor functions with applications to sparse vector recovery [5.557442038265024]
We focus on equivariant functions with respect to the diagonal action of the Lorentz and symplectic groups.
Our goal behind these characterizations is to define equivariant machine learning models.
arXiv Detail & Related papers (2024-06-03T17:32:43Z)
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
This basis also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- A Nested Matrix-Tensor Model for Noisy Multi-view Clustering [5.132856740094742]
We propose a nested matrix-tensor model which extends the spiked rank-one tensor model of order three.
We show that our theoretical results allow us to anticipate the exact accuracy of the proposed clustering approach.
Our analysis unveils unexpected and non-trivial phase transition phenomena depending on the model parameters.
arXiv Detail & Related papers (2023-05-31T16:13:46Z)
- Self-Supervised Learning for Group Equivariant Neural Networks [75.62232699377877]
Group-equivariant neural networks are models whose structure is restricted to commute with transformations of the input.
We propose two concepts for self-supervised tasks: equivariant pretext labels and an invariant contrastive loss (sketched generically after this entry).
Experiments on standard image recognition benchmarks demonstrate that the equivariant neural networks exploit the proposed self-supervised tasks.
arXiv Detail & Related papers (2023-03-08T08:11:26Z)
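The invariant contrastive loss named in the entry above can be sketched generically; the InfoNCE-style formulation below is an assumption for illustration, not the paper's exact loss.

```python
# Generic invariant contrastive loss sketch (an InfoNCE-style stand-in,
# not this paper's exact formulation): embeddings of two group-transformed
# views of the same input are pulled together, mismatched pairs apart.
import numpy as np

def invariant_contrastive_loss(z1, z2, tau=0.1):
    """z1, z2: (n, d) embeddings of two transformed views of the same
    n inputs; row i of z1 matches row i of z2."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # (n, n) cosine-similarity logits
    # cross-entropy with the matching view as the positive class
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_denom - np.diag(sim)))

# Toy usage: near-identical views yield a loss close to zero.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(invariant_contrastive_loss(z, z + 0.05 * rng.normal(size=(8, 16))))
```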
- Learning Invariant Weights in Neural Networks [16.127299898156203]
Many commonly used models in machine learning are constrained to respect certain symmetries in the data.
We propose a weight-space equivalent to this approach, by minimizing a lower bound on the marginal likelihood to learn invariances in neural networks.
arXiv Detail & Related papers (2022-02-25T00:17:09Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newtonian mechanics systems with both fully and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Tensor-Train Networks for Learning Predictive Modeling of Multidimensional Data [0.0]
A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications.
We show that the weights of a multidimensional regression model can be learned by means of tensor networks with the aim of obtaining a powerful, compact representation.
An algorithm based on alternating least squares (ALS) has been proposed for approximating the weights in TT format at reduced computational cost; a minimal sketch of the TT format follows this entry.
arXiv Detail & Related papers (2021-01-22T16:14:38Z)
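Both the entry above and the headline paper parameterize the model's weight tensor in the tensor-train (TT) format. A minimal evaluation sketch, with illustrative core shapes and feature vectors rather than either paper's implementation:

```python
# TT-format regression sketch: the weight tensor W is never formed; it is
# stored as cores G_k of shape (r_{k-1}, d_k, r_k) with r_0 = r_N = 1, and
# the prediction <W, x_1 (x) ... (x) x_N> is one sweep over the cores.
import numpy as np

def tt_predict(cores, features):
    v = np.ones(1)  # boundary vector, rank r_0 = 1
    for G, x in zip(cores, features):
        # contract the feature vector into the core's physical index and
        # absorb the result into the running boundary vector
        v = np.einsum('i,ijk,j->k', v, G, x)
    return v.item()  # r_N = 1, so the sweep ends in a scalar

# Toy usage: 4 sites, physical dimension d = 2, TT rank 3.
rng = np.random.default_rng(0)
cores = ([rng.normal(size=(1, 2, 3))]
         + [rng.normal(size=(3, 2, 3)) for _ in range(2)]
         + [rng.normal(size=(3, 2, 1))])
x = [rng.normal(size=2) for _ in range(4)]
print(tt_predict(cores, x))
```

In the headline paper, the invariant basis tensors are combined into the TT cores so that the resulting network is itself group-invariant.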
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.