On the Identifiability of Quantized Factors
- URL: http://arxiv.org/abs/2306.16334v3
- Date: Tue, 12 Mar 2024 20:04:04 GMT
- Title: On the Identifiability of Quantized Factors
- Authors: Vitória Barin-Pacela, Kartik Ahuja, Simon Lacoste-Julien, Pascal Vincent
- Abstract summary: We show that it is possible to recover quantized latent factors under a generic nonlinear diffeomorphism.
We introduce this novel form of identifiability, termed quantized factor identifiability, and provide a comprehensive proof of the recovery of the quantized factors.
- Score: 33.12356885773274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disentanglement aims to recover meaningful latent ground-truth factors
solely from the observed distribution, and is formalized through the theory of
identifiability. The identifiability of independent latent factors has been proven
impossible in the unsupervised i.i.d. setting under a general nonlinear map
from factors to observations. In this work, however, we demonstrate that it is
possible to recover quantized latent factors under a generic nonlinear
diffeomorphism. We only assume that the latent factors have independent
discontinuities in their density, without requiring the factors to be
statistically independent. We introduce this novel form of identifiability,
termed quantized factor identifiability, and provide a comprehensive proof of
the recovery of the quantized factors.
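The core idea can be illustrated with a toy sketch: a strictly monotonic diffeomorphism maps a gap in the latent density to a gap in the observed density, so the quantization induced by the discontinuity can be read off from observations alone. The specific latent distribution, the map `f(z) = z**3 + z`, and the gap-detection heuristic below are illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent factor with a density discontinuity: uniform on two disjoint intervals.
z = np.concatenate([rng.uniform(0, 1, 500), rng.uniform(2, 3, 500)])
true_bins = (z > 1.5).astype(int)  # ground-truth quantization of the factor

# A generic smooth, strictly increasing diffeomorphism (hypothetical choice):
# its derivative 3*z**2 + 1 is always positive, so order is preserved.
x = z**3 + z

# Recover the quantization from observations alone: the image of the density
# gap is itself a gap, located here as the largest spacing between sorted samples.
xs = np.sort(x)
gap_idx = np.argmax(np.diff(xs))
threshold = (xs[gap_idx] + xs[gap_idx + 1]) / 2
recovered_bins = (x > threshold).astype(int)

# The recovered bins match the ground-truth quantization (up to bin relabeling).
print((recovered_bins == true_bins).all())
```

Because the map is a diffeomorphism, no statistical independence between factors is needed for this step; only the discontinuity in the density is exploited.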
Related papers
- Identifiability Guarantees for Causal Disentanglement from Purely Observational Data [10.482728002416348]
Causal disentanglement aims to learn about latent causal factors behind data.
Recent advances establish identifiability results assuming that interventions on (single) latent factors are available.
We provide a precise characterization of latent factors that can be identified in nonlinear causal models.
arXiv Detail & Related papers (2024-10-31T04:18:29Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Generalizing Nonlinear ICA Beyond Structural Sparsity [15.450470872782082]
The identifiability of nonlinear ICA is known to be impossible without additional assumptions.
Recent advances have proposed conditions on the connective structure from sources to observed variables, known as Structural Sparsity.
We show that even in cases with flexible grouping structures, appropriate identifiability results can be established.
arXiv Detail & Related papers (2023-11-01T21:36:15Z)
- C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder [35.09708249850816]
We introduce a framework entitled Confounded-Disentanglement (C-Disentanglement), the first framework that explicitly introduces the inductive bias of confounder.
We conduct extensive experiments on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-10-26T11:44:42Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Interventional Causal Representation Learning [75.18055152115586]
Causal representation learning seeks to extract high-level latent factors from low-level sensory data.
Can interventional data facilitate causal representation learning?
We show that interventional data often carries geometric signatures of the latent factors' support.
arXiv Detail & Related papers (2022-09-24T04:59:03Z)
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize this goal and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
- Independent mechanism analysis, a new concept? [3.2548794659022393]
Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process.
We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation.
arXiv Detail & Related papers (2021-06-09T16:45:00Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.