Discovering Latent Causal Variables via Mechanism Sparsity: A New
Principle for Nonlinear ICA
- URL: http://arxiv.org/abs/2107.10098v1
- Date: Wed, 21 Jul 2021 14:22:14 GMT
- Title: Discovering Latent Causal Variables via Mechanism Sparsity: A New
Principle for Nonlinear ICA
- Authors: Sébastien Lachapelle, Pau Rodríguez López, Rémi Le Priol,
Alexandre Lacoste, Simon Lacoste-Julien
- Abstract summary: Independent component analysis (ICA) refers to an ensemble of methods that formalize this goal and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
- Score: 81.4991350761909
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It can be argued that finding an interpretable low-dimensional representation
of a potentially high-dimensional phenomenon is central to the scientific
enterprise. Independent component analysis (ICA) refers to an ensemble of
methods that formalize this goal and provide estimation procedures for
practical application. This work proposes mechanism sparsity regularization as
a new principle to achieve nonlinear ICA when latent factors depend sparsely on
observed auxiliary variables and/or past latent factors. We show that the
latent variables can be recovered up to a permutation if one regularizes the
latent mechanisms to be sparse and if some graphical criterion is satisfied by
the data generating process. As a special case, our framework shows how one can
leverage unknown-target interventions on the latent factors to disentangle
them, thus drawing further connections between ICA and causality. We validate
our theoretical results with toy experiments.
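The core idea above, regularizing the mechanisms linking past latents and auxiliary variables to the current latents so that the learned dependency graph is sparse, can be illustrated with a minimal toy sketch. This is a hypothetical linear illustration, not the paper's estimator: it uses proximal gradient descent (ISTA) with an L1 penalty on the mechanism weights, and all variable names, dimensions, and the ground-truth graph are made up for demonstration.

```python
import numpy as np

# Hypothetical toy sketch (NOT the paper's method): mechanism sparsity
# cast as an L1 penalty on the weights through which past latents
# z_{t-1} and an auxiliary variable a_t influence the current latent z_t.
rng = np.random.default_rng(0)
d_z, d_a, n = 3, 2, 1000

# Assumed ground truth with a sparse mechanism graph: each latent
# depends on few parents (chosen arbitrarily for illustration).
W_true = np.array([[0.8, 0.0, 0.0],
                   [0.0, 0.9, 0.0],
                   [0.0, 0.0, 0.7]])
V_true = np.array([[1.0, 0.0],
                   [0.0, 0.0],
                   [0.0, 1.0]])

z_prev = rng.normal(size=(n, d_z))
a = rng.normal(size=(n, d_a))
z_next = z_prev @ W_true.T + a @ V_true.T + 0.01 * rng.normal(size=(n, d_z))

def soft_threshold(M, t):
    """Proximal operator of the L1 norm: zeroes out small entries,
    which is what makes the learned mechanism graph sparse."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

# Proximal gradient descent on the squared prediction error with an
# L1 penalty (lam) on the mechanism weights W and V.
W = np.zeros((d_z, d_z))
V = np.zeros((d_z, d_a))
lr, lam = 0.05, 0.1
for _ in range(500):
    resid = z_prev @ W.T + a @ V.T - z_next
    W = soft_threshold(W - lr * (resid.T @ z_prev) / n, lr * lam)
    V = soft_threshold(V - lr * (resid.T @ a) / n, lr * lam)

# The L1 penalty drives irrelevant edges toward exactly zero, so the
# support of W and V typically recovers the sparse dependency pattern.
print((np.abs(W) > 1e-3).astype(int))
print((np.abs(V) > 1e-3).astype(int))
```

In this sketch the penalty biases the surviving weights downward (a standard lasso effect), but the support of the learned mechanism matrices matches the sparse ground-truth graph, which is the disentanglement-relevant quantity in the abstract's argument.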
Related papers
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse
Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z) - Nonlinearity, Feedback and Uniform Consistency in Causal Structural
Learning [0.8158530638728501]
Causal Discovery aims to find automated search methods for learning causal structures from observational data.
This thesis focuses on two questions in causal discovery: (i) providing an alternative definition of k-Triangle Faithfulness that is weaker than strong faithfulness when applied to the Gaussian family of distributions, and (ii) establishing results under the assumption that the modified version of Strong Faithfulness holds.
arXiv Detail & Related papers (2023-08-15T01:23:42Z) - Nonparametric Identifiability of Causal Representations from Unknown
Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Score-based Causal Representation Learning with Interventions [54.735484409244386]
This paper studies the causal representation learning problem when latent causal variables are observed indirectly.
The objectives are: (i) recovering the unknown linear transformation (up to scaling) and (ii) determining the directed acyclic graph (DAG) underlying the latent variables.
arXiv Detail & Related papers (2023-01-19T18:39:48Z) - Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z) - Causal Discovery in Linear Structural Causal Models with Deterministic
Relations [27.06618125828978]
We focus on the task of causal discovery from observational data.
We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure.
arXiv Detail & Related papers (2021-10-30T21:32:42Z) - Independent mechanism analysis, a new concept? [3.2548794659022393]
Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process.
We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation.
arXiv Detail & Related papers (2021-06-09T16:45:00Z) - Disentangling Observed Causal Effects from Latent Confounders using
Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.