There and back again: Cycle consistency across sets for isolating
factors of variation
- URL: http://arxiv.org/abs/2103.03240v1
- Date: Thu, 4 Mar 2021 18:58:45 GMT
- Title: There and back again: Cycle consistency across sets for isolating
factors of variation
- Authors: Kieran A. Murphy, Varun Jampani, Srikumar Ramalingam, Ameesh Makadia
- Abstract summary: We operate in the setting where limited information is known about the data in the form of groupings.
Our goal is to learn representations which isolate the factors of variation that are common across the groupings.
- Score: 43.59036597872957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Representational learning hinges on the task of unraveling the set of
underlying explanatory factors of variation in data. In this work, we operate
in the setting where limited information is known about the data in the form of
groupings, or set membership, where the underlying factors of variation are
restricted to a subset. Our goal is to learn representations which isolate the
factors of variation that are common across the groupings. Our key insight is
the use of cycle consistency across sets (CCS) between the learned embeddings of
images belonging to different sets. In contrast to other methods utilizing set
supervision, CCS can be applied with significantly fewer constraints on the
factors of variation, across a remarkably broad range of settings, and only
utilizing set membership for some fraction of the training data. By curating
datasets from Shapes3D, we quantify the effectiveness of CCS through mutual
information between the learned representations and the known generative
factors. In addition, we demonstrate the applicability of CCS to the tasks of
digit style isolation and synthetic-to-real object pose transfer and compare to
generative approaches utilizing the same supervision.
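The paper defines CCS precisely; purely as a hypothetical sketch of the cycle-consistency idea it builds on (a soft nearest-neighbor round trip between the embeddings of two sets, in the style of temporal cycle consistency), a loss could look like the following. The function name, squared-distance metric, and temperature are illustrative assumptions, not the authors' exact objective.

```python
import torch
import torch.nn.functional as F

def ccs_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Soft nearest-neighbor cycle A -> B -> A over set embeddings.

    z_a: (Na, d) embeddings of images from set A.
    z_b: (Nb, d) embeddings of images from set B.
    If the embeddings isolate only the factors shared across sets, each
    point should return to itself after the round trip.
    """
    # Soft nearest neighbor of each A-embedding among the B-embeddings.
    d_ab = torch.cdist(z_a, z_b) ** 2                       # (Na, Nb) squared distances
    nn_in_b = F.softmax(-d_ab / temperature, dim=1) @ z_b   # (Na, d) soft neighbors

    # Map the soft neighbors back to set A; penalize cycles that do not
    # land on their starting point (cross-entropy against the identity).
    d_ba = torch.cdist(nn_in_b, z_a) ** 2                   # (Na, Na)
    log_p = F.log_softmax(-d_ba / temperature, dim=1)
    return F.nll_loss(log_p, torch.arange(z_a.size(0), device=z_a.device))
```

In training, z_a and z_b would come from the same encoder applied to two groupings that share the factors of interest.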
Related papers
- Information-Theoretic State Variable Selection for Reinforcement Learning [4.2050490361120465]
We introduce the Transfer Entropy Redundancy Criterion (TERC), an information-theoretic criterion.
TERC determines if there is entropy transferred from state variables to actions during training.
We define an algorithm based on TERC that provably excludes variables from the state that have no effect on the final performance of the agent.
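TERC's exact definition is in the paper; purely as a loose, hypothetical illustration of the underlying question (does a state variable carry information about the actions?), one can compare empirical conditional entropies of the actions with and without that variable. The function names and threshold below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from collections import Counter

def cond_entropy(actions, states):
    """Empirical H(A | S) in bits for discrete actions and state tuples."""
    n = len(actions)
    joint = Counter(zip(states, actions))
    marginal = Counter(states)
    # H(A|S) = -sum over (s, a) of p(s, a) * log2 p(a | s)
    return -sum((c / n) * np.log2(c / marginal[s]) for (s, a), c in joint.items())

def variable_is_relevant(actions, state_matrix, i, eps=1e-3):
    """Heuristic check: does dropping state variable i raise H(A | S)?

    state_matrix: (T, k) array of discrete state variables over T timesteps.
    """
    full = [tuple(row) for row in state_matrix]
    reduced = [tuple(np.delete(row, i)) for row in state_matrix]
    return cond_entropy(actions, reduced) - cond_entropy(actions, full) > eps
```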
arXiv Detail & Related papers (2024-01-21T14:51:09Z)
- Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
- DOT-VAE: Disentangling One Factor at a Time [1.6114012813668934]
We propose a novel framework which augments the latent space of a Variational Autoencoder with a disentangled space and is trained using a Wake-Sleep-inspired two-step algorithm for unsupervised disentanglement.
Our network learns to disentangle interpretable, independent factors from the data "one at a time", and encodes them in different dimensions of the disentangled latent space, while making no prior assumptions about the number of factors or their joint distribution.
arXiv Detail & Related papers (2022-10-19T22:53:02Z)
- Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion Segmentation (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is employed to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z)
- Learning Conditional Invariance through Cycle Consistency [60.85059977904014]
We propose a novel approach to identify meaningful and independent factors of variation in a dataset.
Our method involves two separate latent subspaces for the target property and the remaining input information.
We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models.
arXiv Detail & Related papers (2021-11-25T17:33:12Z)
- Group-disentangled Representation Learning with Weakly-Supervised Regularization [13.311886256230814]
GroupVAE is a simple yet effective Kullback-Leibler divergence-based regularization to enforce consistent and disentangled representations.
We demonstrate that learning group-disentangled representations improves performance on downstream tasks, including fair classification and 3D shape-related tasks such as reconstruction, classification, and transfer learning.
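As a minimal sketch of what a KL-divergence-based group regularizer can look like (assuming diagonal-Gaussian posteriors; the function and the group-average reference distribution are illustrative assumptions, not necessarily GroupVAE's exact formulation):

```python
import torch

def group_kl_regularizer(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL from each member's posterior to the group-average Gaussian.

    mu, logvar: (G, d) posterior parameters for G images sharing group-level
    factors; pulling the posteriors together encourages these dimensions to
    encode only what the group has in common.
    """
    var = logvar.exp()
    mu_bar = mu.mean(dim=0, keepdim=True)
    var_bar = var.mean(dim=0, keepdim=True)
    # KL( N(mu_i, var_i) || N(mu_bar, var_bar) ), per latent dimension.
    kl = 0.5 * (var / var_bar + (mu - mu_bar) ** 2 / var_bar - 1.0
                + var_bar.log() - logvar)
    return kl.sum(dim=1).mean()
```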
arXiv Detail & Related papers (2021-10-23T10:01:05Z)
- Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement [59.033559925639075]
We propose a formulation for unsupervised C-S disentanglement based on our assumption that different factors are of different importance and popularity for image reconstruction.
The corresponding model inductive bias is introduced by our proposed C-S disentanglement Module (C-S DisMo).
Experiments on several popular datasets demonstrate that our method achieves the state-of-the-art unsupervised C-S disentanglement.
arXiv Detail & Related papers (2021-02-21T08:04:33Z)
- Improving filling level classification with adversarial training [90.01594595780928]
We investigate the problem of classifying - from a single image - the level of content in a cup or a drinking glass.
We use adversarial training in a generic source dataset and then refine the training with a task-specific dataset.
We show that transfer learning with adversarial training in the source domain consistently improves the classification accuracy on the test set.
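The abstract does not specify the attack; as a hypothetical sketch, a single adversarial training step with an FGSM-style perturbation (a common stand-in, assumed here for illustration) might look like this:

```python
import torch
import torch.nn.functional as F

def fgsm_training_step(model, x, y, optimizer, eps=4 / 255):
    """One adversarial training step using an FGSM perturbation.

    x: batch of images in [0, 1]; y: integer class labels (filling levels).
    """
    # Build the adversarial example from the gradient of the loss w.r.t. x.
    x = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    # Train on the perturbed batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```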
arXiv Detail & Related papers (2021-02-08T08:32:56Z)
- Semi-Supervised Disentanglement of Class-Related and Class-Independent Factors in VAE [4.533408938245526]
We propose a framework capable of disentangling class-related and class-independent factors of variation in data.
Our framework employs an attention mechanism in its latent space in order to improve the process of extracting class-related factors from data.
Experiments show that our framework disentangles class-related and class-independent factors of variation and learns interpretable features.
arXiv Detail & Related papers (2021-02-01T15:05:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.