Learning latent causal graphs via mixture oracles
- URL: http://arxiv.org/abs/2106.15563v1
- Date: Tue, 29 Jun 2021 16:53:34 GMT
- Title: Learning latent causal graphs via mixture oracles
- Authors: Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar and Bryon Aragam
- Abstract summary: We study the problem of reconstructing a causal graphical model from data in the presence of latent variables.
The main problem of interest is recovering the causal structure over the latent variables while allowing for general, potentially nonlinear dependence between the variables.
- Score: 40.71943453524747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of reconstructing a causal graphical model from data in
the presence of latent variables. The main problem of interest is recovering
the causal structure over the latent variables while allowing for general,
potentially nonlinear dependence between the variables. In many practical
problems, the dependence between raw observations (e.g. pixels in an image) is
much less relevant than the dependence between certain high-level, latent
features (e.g. concepts or objects), and this is the setting of interest. We
provide conditions under which both the latent representations and the
underlying latent causal model are identifiable by a reduction to a mixture
oracle. The proof is constructive, and leads to several algorithms for
explicitly reconstructing the full graphical model. We discuss efficient
algorithms and provide experiments illustrating the algorithms in practice.
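The abstract's reduction treats the observed distribution as a black-box "mixture oracle." As a rough toy illustration of why mixture components carry information about discrete latents (a hypothetical sketch, not the paper's algorithm): with discrete latent variables, the observed distribution is a finite mixture with one component per latent configuration, so an idealized oracle's mixture weights are exactly the joint probabilities of the latents, from which dependencies can be tested.

```python
# Toy sketch (not the paper's algorithm): an idealized "mixture oracle" hands
# back one mixture weight per latent configuration; those weights are the joint
# probabilities of the latents, so dependence between latents can be tested
# directly from them.
import itertools
import math

# Ground-truth latent causal model Z1 -> Z2 (binary variables), chosen arbitrarily.
p_z1 = {0: 0.4, 1: 0.6}
p_z2_given_z1 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

# The idealized "mixture oracle": weight of the component indexed by (z1, z2).
weights = {(z1, z2): p_z1[z1] * p_z2_given_z1[z1][z2]
           for z1, z2 in itertools.product((0, 1), repeat=2)}

# Mutual information between Z1 and Z2 computed from the oracle's weights;
# MI > 0 indicates dependence, i.e. an edge in the latent graph.
pz1 = {a: sum(w for (z1, _), w in weights.items() if z1 == a) for a in (0, 1)}
pz2 = {b: sum(w for (_, z2), w in weights.items() if z2 == b) for b in (0, 1)}
mi = sum(w * math.log(w / (pz1[a] * pz2[b]))
         for (a, b), w in weights.items() if w > 0)
print(f"mutual information = {mi:.3f} nats")  # positive, so Z1 and Z2 are dependent
```

In practice the oracle would be approximated by fitting a finite mixture model to the observations; the sketch skips that step and queries the true weights directly.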
Related papers
- Linear causal disentanglement via higher-order cumulants [0.0]
We study the identifiability of linear causal disentanglement, assuming access to data under multiple contexts.
We show that one perfect intervention on each latent variable is sufficient and, in the worst case, necessary to recover the parameters.
arXiv Detail & Related papers (2024-07-05T15:53:16Z)
- Causal Representation Learning from Multiple Distributions: A General Setting [21.73088044465267]
This paper is concerned with a general, completely nonparametric setting of causal representation learning from multiple distributions.
We show that under the sparsity constraint on the recovered graph over the latent variables and suitable sufficient change conditions on the causal influences, one can recover the moralized graph of the underlying directed acyclic graph.
In some cases, most latent variables can even be recovered up to component-wise transformations.
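The moralized graph mentioned above is a standard construction: connect ("marry") every pair of nodes that share a child, then drop edge directions. A minimal sketch of this construction (toy helper, not from the paper):

```python
# Moralization of a DAG: marry co-parents, then drop edge directions.
def moralize(dag):
    """dag: dict mapping each node to a list of its parents.
    Returns the moralized graph as a set of frozenset edges."""
    edges = set()
    for child, parents in dag.items():
        # Drop directions: each parent-child edge becomes undirected.
        for p in parents:
            edges.add(frozenset((p, child)))
        # Marry parents: connect every pair of co-parents of this child.
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                edges.add(frozenset((p, q)))
    return edges

# Classic v-structure A -> C <- B: moralization adds the edge A - B.
dag = {"A": [], "B": [], "C": ["A", "B"]}
print(sorted(tuple(sorted(e)) for e in moralize(dag)))
# → [('A', 'B'), ('A', 'C'), ('B', 'C')]
```

Recovering the moralized graph thus pins down the undirected skeleton plus co-parent links, which is weaker than full DAG identification but often the best achievable without interventions.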
arXiv Detail & Related papers (2024-02-07T17:51:38Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Identifiability Guarantees for Causal Disentanglement from Soft Interventions [26.435199501882806]
Causal disentanglement aims to uncover a representation of data using latent variables that are interrelated through a causal model.
In this paper, we focus on the scenario where unpaired observational and interventional data are available, with each intervention changing the mechanism of a latent variable.
When the causal variables are fully observed, statistically consistent algorithms have been developed to identify the causal model under faithfulness assumptions.
arXiv Detail & Related papers (2023-07-12T15:39:39Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Learning Latent Structural Causal Models [31.686049664958457]
In machine learning tasks, one often operates on low-level data like image pixels or high-dimensional vectors.
We present a tractable approximate inference method which performs joint inference over the causal variables, structure and parameters of the latent Structural Causal Model.
arXiv Detail & Related papers (2022-10-24T20:09:44Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.