Commutative Lie Group VAE for Disentanglement Learning
- URL: http://arxiv.org/abs/2106.03375v1
- Date: Mon, 7 Jun 2021 07:03:14 GMT
- Title: Commutative Lie Group VAE for Disentanglement Learning
- Authors: Xinqi Zhu, Chang Xu, Dacheng Tao
- Abstract summary: We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data.
A simple model named Commutative Lie Group VAE is introduced to realize group-based disentanglement learning.
Experiments show that our model can effectively learn disentangled representations without supervision, and can achieve state-of-the-art performance without extra constraints.
- Score: 96.32813624341833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We view disentanglement learning as discovering an underlying structure that
equivariantly reflects the factorized variations shown in data. Traditionally,
such a structure is fixed to be a vector space, with data variations represented
by translations along individual latent dimensions. We argue this simple
structure is suboptimal, since it requires the model to learn to discard the
properties of data variations (e.g. different scales of change, different levels
of abstractness), which is extra work beyond equivariance learning. Instead,
we propose to encode the data variations with groups, a structure that not only
represents variations equivariantly but can also be adaptively optimized to
preserve the properties of data variations. Since it is hard to train directly
on group structures, we focus on Lie groups and adopt a parameterization via the
Lie algebra. From this parameterization, several disentanglement learning
constraints are naturally derived. A simple model named Commutative Lie Group
VAE is introduced to realize group-based disentanglement learning. Experiments
show that our model can effectively learn disentangled representations without
supervision and can achieve state-of-the-art performance without extra
constraints.
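The abstract describes the Lie-algebra parameterization only in prose. The sketch below is a minimal illustration of how such a layer could look in PyTorch; the names (CommutativeLieGroupLayer, commutativity_loss) are hypothetical, this is not the authors' released code, and the commutator penalty is just one example of a constraint that falls out of the parameterization, not necessarily the paper's exact loss.

```python
# Minimal sketch of a commutative Lie group latent layer (PyTorch).
# Hypothetical illustration of the abstract's Lie-algebra parameterization,
# not the authors' implementation.
import torch
import torch.nn as nn

class CommutativeLieGroupLayer(nn.Module):
    def __init__(self, n_factors: int, mat_dim: int):
        super().__init__()
        # One learnable Lie-algebra basis matrix A_i per factor of variation.
        self.basis = nn.Parameter(0.01 * torch.randn(n_factors, mat_dim, mat_dim))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch, n_factors) latent coordinates.
        # Group element g(t) = exp(sum_i t_i A_i). If the A_i commute, this
        # factorizes as prod_i exp(t_i A_i): one independent one-parameter
        # subgroup per factor of variation.
        algebra = torch.einsum('bk,kij->bij', t, self.basis)
        return torch.matrix_exp(algebra)  # batched matrix exponential

    def commutativity_loss(self) -> torch.Tensor:
        # Penalize commutators [A_i, A_j] = A_i A_j - A_j A_i; an example of
        # a constraint derived from the parameterization (the paper's exact
        # losses may differ).
        A = self.basis
        AiAj = torch.einsum('iab,jbc->ijac', A, A)
        return ((AiAj - AiAj.transpose(0, 1)) ** 2).sum()
```

In a VAE, the latent coordinates t would come from the approximate posterior, the resulting group representation would be fed to the decoder, and the commutativity penalty would be added to the usual ELBO objective.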
Related papers
- Lie Group Decompositions for Equivariant Neural Networks [12.139222986297261]
We show how convolution kernels can be parametrized to build models equivariant with respect to affine transformations.
We evaluate the robustness and out-of-distribution generalisation capability of our model on the benchmark affine-invariant classification task.
arXiv Detail & Related papers (2023-10-17T16:04:33Z)
- Flow Factorized Representation Learning [109.51947536586677]
We introduce a generative model which specifies a distinct set of latent probability paths that define different input transformations.
We show that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously being closer to approximately equivariant models.
arXiv Detail & Related papers (2023-09-22T20:15:37Z)
- Equivariant Disentangled Transformation for Domain Generalization under Combination Shift [91.38796390449504]
Combinations of domains and labels are not observed during training but appear in the test environment.
We provide a unique formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.
arXiv Detail & Related papers (2022-08-03T12:31:31Z)
- Unsupervised Learning of Group Invariant and Equivariant Representations [10.252723257176566]
We extend group invariant and equivariant representation learning to the field of unsupervised deep learning.
We propose a general learning strategy based on an encoder-decoder framework in which the latent representation is separated in an invariant term and an equivariant group action component.
The key idea is that the network learns to encode and decode data to and from a group-invariant representation, additionally predicting the group action needed to align the input and output poses so the reconstruction task can be solved (a minimal sketch of this strategy follows the list below).
arXiv Detail & Related papers (2022-02-15T16:44:21Z)
- GroupifyVAE: from Group-based Definition to VAE-based Unsupervised Representation Disentanglement [91.9003001845855]
VAE-based unsupervised disentanglement cannot be achieved without introducing additional inductive biases.
We address VAE-based unsupervised disentanglement by leveraging the constraints derived from the group-theory-based definition as a non-probabilistic inductive bias.
We train 1800 models covering the most prominent VAE-based models on five datasets to verify the effectiveness of our method.
arXiv Detail & Related papers (2021-02-20T09:49:51Z)
- Group Equivariant Conditional Neural Processes [30.134634059773703]
We present the group equivariant conditional neural process (EquivCNP).
We show that EquivCNP achieves comparable performance to conventional conditional neural processes in a 1D regression task.
arXiv Detail & Related papers (2021-02-17T13:50:07Z)
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
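The entry "Unsupervised Learning of Group Invariant and Equivariant Representations" above describes its mechanism concretely enough to sketch. Below is a minimal, hypothetical PyTorch instance using planar rotations (SO(2)) of point clouds as the group: the encoder splits the latent into an invariant code and a predicted pose angle, and the decoder reconstructs a canonical pose that the predicted group action re-poses to match the input. All names and the choice of group are illustrative assumptions, not that paper's actual architecture.

```python
# Minimal sketch of the invariant/equivariant latent split (PyTorch),
# using SO(2) rotations of planar point clouds as the group.
# Hypothetical illustration, not the paper's architecture.
import torch
import torch.nn as nn

def rot2d(theta: torch.Tensor) -> torch.Tensor:
    # Angles (batch,) -> rotation matrices (batch, 2, 2).
    c, s = torch.cos(theta), torch.sin(theta)
    return torch.stack([torch.stack([c, -s], -1),
                        torch.stack([s, c], -1)], -2)

class InvariantEquivariantAE(nn.Module):
    def __init__(self, n_points: int, code_dim: int = 16):
        super().__init__()
        self.n_points = n_points
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(2 * n_points, 128), nn.ReLU(),
            nn.Linear(128, code_dim + 1))  # invariant code + pose angle
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * n_points))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_points, 2) planar point clouds.
        h = self.encoder(x)
        z_inv, theta = h[:, :-1], h[:, -1]  # invariant content, predicted pose
        canon = self.decoder(z_inv).view(-1, self.n_points, 2)
        # Re-pose the canonical reconstruction with the predicted group
        # action so a plain MSE loss against x can be minimized.
        return torch.einsum('bij,bnj->bni', rot2d(theta), canon)
```

Training with a plain reconstruction loss then pushes pose information into the predicted action and pose-free content into the invariant code.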
This list is automatically generated from the titles and abstracts of the papers in this site.