Disentanglement of Latent Representations via Causal Interventions
- URL: http://arxiv.org/abs/2302.00869v3
- Date: Sat, 23 Sep 2023 03:35:26 GMT
- Title: Disentanglement of Latent Representations via Causal Interventions
- Authors: Gaël Gendron, Michael Witbrock and Gillian Dobbie
- Abstract summary: We introduce a new method for disentanglement inspired by causal dynamics.
Our model considers the quantized vectors as causal variables and links them in a causal graph.
It performs causal interventions on the graph and generates atomic transitions affecting a unique factor of variation in the image.
- Score: 11.238098505498165
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The process of generating data such as images is controlled by independent
and unknown factors of variation. The retrieval of these variables has been
studied extensively in the disentanglement, causal representation learning, and
independent component analysis fields. Recently, approaches merging these
domains together have shown great success. Instead of directly representing the
factors of variation, the problem of disentanglement can be seen as finding the
interventions on one image that yield a change to a single factor. Following
this assumption, we introduce a new method for disentanglement inspired by
causal dynamics that combines causality theory with vector-quantized
variational autoencoders. Our model considers the quantized vectors as causal
variables and links them in a causal graph. It performs causal interventions on
the graph and generates atomic transitions affecting a unique factor of
variation in the image. We also introduce a new task of action retrieval that
consists of finding the action responsible for the transition between two
images. We test our method on standard synthetic and real-world disentanglement
datasets. We show that it can effectively disentangle the factors of variation
and perform precise interventions on high-level semantic attributes of an image
without affecting its quality, even with imbalanced data distributions.
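As a rough illustration of the idea described in the abstract (not the authors' implementation), the quantized codes can be viewed as nodes of a causal DAG; a do-intervention then fixes one code and recomputes only its descendants via the learned mechanisms, leaving non-descendants untouched. All names and the toy mechanism below are hypothetical:

```python
# Minimal sketch, assuming latent codes are stored per causal variable and
# the causal graph is a DAG given as an adjacency dict {node: [children]}.

def descendants(graph, node):
    """All nodes reachable from `node` in the DAG `graph`."""
    seen, stack = set(), [node]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def intervene(codes, graph, node, new_code, mechanism):
    """do(node := new_code): fix the node's code, then recompute each
    descendant from its parents' codes via the learned `mechanism`."""
    codes = dict(codes)
    codes[node] = new_code
    # sorted() happens to respect topological order in this toy graph;
    # a real implementation would use a proper topological sort.
    for d in sorted(descendants(graph, node)):
        parents = [p for p, children in graph.items() if d in children]
        codes[d] = mechanism(d, [codes[p] for p in parents])
    return codes

# Toy chain 0 -> 1 -> 2 with a trivial stand-in mechanism.
graph = {0: [1], 1: [2], 2: []}
codes = {0: 5, 1: 6, 2: 7}
mech = lambda node, parent_codes: sum(parent_codes) + 1
out = intervene(codes, graph, 1, 10, mech)
# node 1 is pinned to 10, node 2 is recomputed from it, node 0 is untouched
```

The key property this sketches is atomicity: because only descendants of the intervened node are recomputed, a single intervention changes a single factor of variation while the rest of the representation stays fixed.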
Related papers
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies
This work introduces a novel principle for disentanglement called mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors and the causal graph relating them.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground-truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Identifying Weight-Variant Latent Causal Models
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and the causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Counterfactual Fairness with Disentangled Causal Effect Variational Autoencoder
This paper proposes the Disentangled Causal Effect Variational Autoencoder (DCEVAE) to solve the problem of fair classification.
We show that our method estimates the total effect and the counterfactual effect without a complete causal graph.
arXiv Detail & Related papers (2020-11-24T03:43:59Z)
- Learning Disentangled Representations with Latent Variation Predictability
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder
The framework of the variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationships as a Directed Acyclic Graph (DAG) are identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
- Learning to Manipulate Individual Objects in an Image
We describe a method to train a generative model with latent factors that are independent and localized.
This means that perturbing the latent variables affects only local regions of the synthesized image, corresponding to objects.
Unlike other unsupervised generative models, ours enables object-centric manipulation without requiring object-level annotations.
arXiv Detail & Related papers (2020-04-11T21:50:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.