Disentangling Action Sequences: Discovering Correlated Samples
- URL: http://arxiv.org/abs/2010.11684v1
- Date: Sat, 17 Oct 2020 07:37:50 GMT
- Title: Disentangling Action Sequences: Discovering Correlated Samples
- Authors: Jiantao Wu and Lin Wang
- Abstract summary: We demonstrate that the data itself, rather than the factors, plays a crucial role in disentanglement, and that the disentangled representations align the latent variables with the action sequences.
We propose a novel framework, the fractional variational autoencoder (FVAE), to disentangle the action sequences with different significance step-by-step.
Experimental results on dSprites and 3D Chairs show that FVAE improves the stability of disentanglement.
- Score: 6.179793031975444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disentanglement is a highly desirable property of representations due to its
similarity with human understanding and reasoning. It improves
interpretability, benefits the performance of downstream tasks, and enables
controllable generative models. However, this domain is challenged by the
abstract notion of disentanglement and by the incomplete theories supporting
unsupervised disentanglement learning. We demonstrate that the data itself,
such as the orientation of images, rather than the factors, plays a crucial
role in disentanglement, and that the disentangled representations align the
latent variables with the action sequences. We further introduce the concept
of disentangling action sequences, which facilitates the description of the
behaviours of existing disentangling approaches. An analogy for this process
is discovering the commonality between things and categorizing them. Furthermore, we analyze
the inductive biases on the data and find that the latent information
thresholds are correlated with the significance of the actions. For the
supervised and unsupervised settings, we respectively introduce two methods to
measure the thresholds. We further propose a novel framework, fractional
variational autoencoder (FVAE), to disentangle the action sequences with
different significance step-by-step. Experimental results on dSprites and 3D
Chairs show that FVAE improves the stability of disentanglement.
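The abstract names the mechanism (disentangling action sequences of different significance step by step) but not its implementation, so the following is only a minimal, hypothetical sketch of that idea: a beta-VAE-style model in which groups of latent dimensions are activated stage by stage. The stage schedule, beta value, network sizes, and the random stand-in data are illustrative assumptions, not FVAE's actual design.

```python
# Hypothetical sketch (not the paper's FVAE): a beta-VAE trained in stages,
# activating additional groups of latent dimensions one stage at a time so that
# more significant action sequences are captured before less significant ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StagedVAE(nn.Module):
    def __init__(self, x_dim=64 * 64, z_dim=6, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))  # mean and log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        self.z_dim = z_dim

    def forward(self, x, active_dims):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        mask = torch.zeros(self.z_dim, device=x.device)
        mask[:active_dims] = 1.0                   # only the active latent group is used
        x_logits = self.dec(z * mask)
        recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
        kl = -0.5 * torch.sum((1 + logvar - mu.pow(2) - logvar.exp()) * mask)
        return recon, kl

model = StagedVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
beta = 4.0                                         # illustrative KL weight
for active_dims in [2, 4, 6]:                      # release latent groups stage by stage
    for _ in range(100):                           # a few illustrative steps per stage
        x = torch.rand(32, 64 * 64)                # stand-in for dSprites-like image batches
        recon, kl = model(x, active_dims)
        loss = recon + beta * kl
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Under this reading, the latent information thresholds discussed in the abstract would govern how many dimensions each stage activates; the schedule [2, 4, 6] above is purely illustrative.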
Related papers
- Learning Action-based Representations Using Invariance [18.1941237781348]
We introduce action-bisimulation encoding, which learns a multi-step controllability metric that discounts distant state features that are relevant for control.
We demonstrate that action-bisimulation pretraining on reward-free, uniformly random data improves sample efficiency in several environments.
arXiv Detail & Related papers (2024-03-25T02:17:54Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors and the sparse causal graph that relates them.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z)
- Contrastively Disentangled Sequential Variational Autoencoder [20.75922928324671]
We propose a novel sequence representation learning method, named Contrastively Disentangled Sequential Variational Autoencoder (C-DSVAE).
We use a novel evidence lower bound which maximizes the mutual information between the input and the latent factors, while penalizing the mutual information between the static and dynamic factors.
Our experiments show that C-DSVAE significantly outperforms the previous state-of-the-art methods on multiple metrics.
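Schematically, an objective of the kind this summary describes combines the usual ELBO with mutual-information terms; the symbols z_s, z_d (static and dynamic factors) and the weights alpha, beta below are notational assumptions based on the summary, not the paper's exact formulation.

```latex
% Schematic objective; \alpha, \beta and the static/dynamic split are assumptions.
\mathcal{L}(x) \;=\; \mathrm{ELBO}(x)
  \;+\; \alpha \, I(x; z_s) \;+\; \alpha \, I(x; z_d)
  \;-\; \beta \, I(z_s; z_d)
```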
arXiv Detail & Related papers (2021-10-22T23:00:32Z)
- Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
- Estimating the Effects of Continuous-valued Interventions using Generative Adversarial Networks [103.14809802212535]
We build on the generative adversarial networks (GANs) framework to address the problem of estimating the effect of continuous-valued interventions.
Our model, SCIGAN, is flexible and capable of simultaneously estimating counterfactual outcomes for several different continuous interventions.
To address the challenges presented by shifting to continuous interventions, we propose a novel architecture for our discriminator.
arXiv Detail & Related papers (2020-02-27T18:46:21Z)
- NestedVAE: Isolating Common Factors via Weak Supervision [45.366986365879505]
We identify the connection between the task of bias reduction and that of isolating factors common between domains.
To isolate the common factors we combine the theory of deep latent variable models with information bottleneck theory.
Two outer VAEs with shared weights attempt to reconstruct the input and infer a latent space, whilst a nested VAE attempts to reconstruct the latent representation of one image from the latent representation of its paired image, as sketched below.
arXiv Detail & Related papers (2020-02-26T15:49:57Z)
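The NestedVAE layout summarised above lends itself to a short sketch: one weight-shared outer VAE encodes both images of a pair, and a nested VAE maps the latent code of one image to that of its partner. Layer sizes, loss choices, and the random paired batch are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch of the NestedVAE layout summarised above; sizes, losses and
# the random paired batch are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim, z_dim, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, in_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

def kl(mu, logvar):
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

x_dim, z_dim = 64 * 64, 16
outer = VAE(x_dim, z_dim)        # one module, so both images share the outer weights
nested = VAE(z_dim, z_dim // 2)  # the nested VAE operates on latent codes, not pixels
opt = torch.optim.Adam(list(outer.parameters()) + list(nested.parameters()), lr=1e-3)

x1, x2 = torch.rand(32, x_dim), torch.rand(32, x_dim)  # stand-in for a paired batch
r1, mu1, lv1 = outer(x1)
r2, mu2, lv2 = outer(x2)
outer_loss = (F.mse_loss(r1, x1, reduction="sum") + kl(mu1, lv1) +
              F.mse_loss(r2, x2, reduction="sum") + kl(mu2, lv2))

# Nested VAE: reconstruct image 2's latent code from image 1's latent code, which
# pushes the nested latent space towards factors common to both images of the pair.
z2_hat, mu_n, lv_n = nested(mu1.detach())
nested_loss = F.mse_loss(z2_hat, mu2.detach(), reduction="sum") + kl(mu_n, lv_n)

loss = outer_loss + nested_loss
opt.zero_grad()
loss.backward()
opt.step()
```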