Learning Identity-Preserving Transformations on Data Manifolds
- URL: http://arxiv.org/abs/2106.12096v2
- Date: Wed, 29 Mar 2023 03:12:54 GMT
- Title: Learning Identity-Preserving Transformations on Data Manifolds
- Authors: Marissa Connor, Kion Fallah, Christopher Rozell
- Abstract summary: Many machine learning techniques incorporate identity-preserving transformations into their models to generalize their performance to previously unseen data.
We develop a learning strategy that does not require transformation labels, along with a method that learns the local regions where each operator is likely to be used.
Experiments on MNIST and Fashion MNIST highlight our model's ability to learn identity-preserving transformations on multi-class datasets.
- Score: 14.31845138586011
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many machine learning techniques incorporate identity-preserving
transformations into their models to generalize their performance to previously
unseen data. These transformations are typically selected from a set of
functions that are known to maintain the identity of an input when applied
(e.g., rotation, translation, flipping, and scaling). However, there are many
natural variations that cannot be labeled for supervision or defined through
examination of the data. As suggested by the manifold hypothesis, many of these
natural variations live on or near a low-dimensional, nonlinear manifold.
Several techniques represent manifold variations through a set of learned Lie
group operators that define directions of motion on the manifold. However,
these approaches are limited because they require transformation labels when
training their models and they lack a method for determining which regions of
the manifold are appropriate for applying each specific operator. We address
these limitations by introducing a learning strategy that does not require
transformation labels and developing a method that learns the local regions
where each operator is likely to be used while preserving the identity of
inputs. Experiments on MNIST and Fashion MNIST highlight our model's ability to
learn identity-preserving transformations on multi-class datasets.
Additionally, we train on CelebA to showcase our model's ability to learn
semantically meaningful transformations on complex datasets in an unsupervised
manner.
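
The Lie group operator approach referenced in the abstract can be sketched numerically. The snippet below is a minimal illustration, assuming the commonly used transport-operator form x1 = expm(sum_m c_m * Psi_m) @ x0, where the operator dictionary {Psi_m} is learned from data and the coefficients c select and weight operators; the names, sizes, and values here are illustrative stand-ins, not the authors' implementation.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim, n_ops = 8, 3    # (latent) data dimension, number of dictionary operators

# Stand-in for a learned operator dictionary; in practice these matrices are
# fit so that expm(c * Psi_m) transports points along the manifold while
# preserving identity (e.g., rotating a digit without changing its class).
Psi = rng.normal(scale=0.1, size=(n_ops, dim, dim))

def transport(x, c):
    # A = sum_m c_m * Psi_m generates the group action expm(A)
    A = np.tensordot(c, Psi, axes=1)
    return expm(A) @ x

x0 = rng.normal(size=dim)
c = np.array([0.5, 0.0, -0.2])   # sparse coefficients: only a few operators active
x1 = transport(x0, c)

# Scaling the coefficients traces a continuous path on the manifold: t -> expm(t*A) x0
path = [transport(x0, t * c) for t in np.linspace(0.0, 1.0, 5)]

Because scaling the coefficients traces a continuous path along a learned manifold direction, such operators can generate identity-preserving variations of an input without any transformation labels, which is the setting the paper targets.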
Related papers
- Unsupervised Representation Learning from Sparse Transformation Analysis [79.94858534887801]
We propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components.
Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model.
arXiv Detail & Related papers (2024-10-07T23:53:25Z)
- Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z)
- A Generic Self-Supervised Framework of Learning Invariant Discriminative Features [9.614694312155798]
This paper proposes a generic SSL framework based on a constrained self-labelling assignment process.
The proposed training strategy outperforms most state-of-the-art representation learning methods built on autoencoder (AE) structures.
arXiv Detail & Related papers (2022-02-14T18:09:43Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Existing methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize both seen and unseen samples, where unseen classes are not observed during training.
We propose a novel Generation Shifts Mitigating Flow framework for learning unseen data synthesis efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z)
- Rotating spiders and reflecting dogs: a class conditional approach to learning data augmentation distributions [0.0]
We introduce a method by which we can learn class conditional distributions on augmentation transformations.
We give a number of examples where our methods learn different non-meaningful transformations depending on class.
Our method can be used as a tool to probe the symmetries intrinsic to a potentially complex dataset.
arXiv Detail & Related papers (2021-06-07T23:36:24Z)
- Commutative Lie Group VAE for Disentanglement Learning [96.32813624341833]
We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data.
A simple model named Commutative Lie Group VAE is introduced to realize the group-based disentanglement learning.
Experiments show that our model can effectively learn disentangled representations without supervision, and can achieve state-of-the-art performance without extra constraints.
arXiv Detail & Related papers (2021-06-07T07:03:14Z)
- Learning disentangled representations via product manifold projection [10.677966716893762]
We propose a novel approach to disentangle the generative factors of variation underlying a given set of observations.
Our method builds upon the idea that the (unknown) low-dimensional manifold underlying the data space can be explicitly modeled as a product of submanifolds.
arXiv Detail & Related papers (2021-03-02T10:59:59Z)
- Disentangling images with Lie group transformations and sparse coding [3.3454373538792552]
We train a model that learns to disentangle spatial patterns and their continuous transformations in a completely unsupervised manner.
Training the model on a dataset consisting of controlled geometric transformations of specific MNIST digits shows that it can recover these transformations along with the digits.
arXiv Detail & Related papers (2020-12-11T19:11:32Z)