Discovery of Causal Additive Models in the Presence of Unobserved
Variables
- URL: http://arxiv.org/abs/2106.02234v1
- Date: Fri, 4 Jun 2021 03:28:27 GMT
- Authors: Takashi Nicholas Maeda, Shohei Shimizu
- Abstract summary: Causal discovery from data affected by unobserved variables is an important but difficult problem to solve.
We propose a method to identify all the causal relationships that are theoretically possible to identify without being biased by unobserved variables.
- Score: 6.670414650224422
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal discovery from data affected by unobserved variables is an important
but difficult problem to solve. The effects that unobserved variables have on
the relationships between observed variables are more complex in nonlinear
cases than in linear cases. In this study, we focus on causal additive models
in the presence of unobserved variables. Causal additive models exhibit
structural equations that are additive in the variables and error terms. We
take into account the presence of not only unobserved common causes but also
unobserved intermediate variables. Our theoretical results show that, when the
causal relationships are nonlinear and there are unobserved variables, it is
not possible to identify all the causal relationships between observed
variables through regression and independence tests. However, our theoretical
results also show that it is possible to avoid incorrect inferences. We propose
a method to identify all the causal relationships that are theoretically
possible to identify without being biased by unobserved variables. The
empirical results using artificial data and simulated functional magnetic
resonance imaging (fMRI) data show that our method effectively infers causal
structures in the presence of unobserved variables.
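The abstract's core idea, inferring direction via regression and residual-independence tests in additive models, can be illustrated with a minimal sketch. This is not the paper's CAM-UV algorithm: the polynomial regression and the squared-correlation proxy below are hypothetical stand-ins for the nonparametric regressors and kernel independence tests (e.g. HSIC) used in practice.

```python
import numpy as np

# In an additive model y = f(x) + e with e independent of x, regressing y on x
# leaves residuals independent of x, while regressing in the wrong direction
# (x on y) leaves residuals that still depend on the regressor.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 2000)
y = x**3 + rng.normal(0, 1, 2000)  # true causal direction: x -> y

def residuals(a, b, degree=5):
    """Residuals of regressing b on a with a polynomial fit
    (a stand-in for the nonparametric regression used in practice)."""
    coeffs = np.polyfit(a, b, degree)
    return b - np.polyval(coeffs, a)

def dependence(a, r):
    """Crude dependence proxy: correlation between squared regressor and
    squared residual. Real implementations use kernel tests such as HSIC."""
    return abs(np.corrcoef(a**2, r**2)[0, 1])

# The correct direction should show weaker residual dependence.
forward = dependence(x, residuals(x, y))   # x -> y
backward = dependence(y, residuals(y, x))  # y -> x (wrong direction)
print(forward < backward)
```

Under this data-generating process the forward residuals are just the noise term, while the backward residuals are heteroscedastic in y, so the forward dependence score comes out smaller. The paper's contribution is showing which such orientations remain identifiable, and which must be left undetermined, when unobserved common causes or intermediate variables can confound these tests.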
Related papers
- Causal Discovery in Linear Models with Unobserved Variables and Measurement Error [26.72594853233639]
The presence of unobserved common causes and the presence of measurement error are two of the most limiting challenges in the task of causal structure learning.
We study the problem of causal discovery in systems where these two challenges can be present simultaneously.
arXiv Detail & Related papers (2024-07-28T08:26:56Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonlinearity, Feedback and Uniform Consistency in Causal Structural Learning [0.8158530638728501]
Causal Discovery aims to find automated search methods for learning causal structures from observational data.
This thesis focuses on two questions in causal discovery: (i) providing an alternative definition of k-Triangle Faithfulness that is weaker than strong faithfulness when applied to the Gaussian family of distributions, and (ii) establishing uniform consistency under the assumption that this modified version of Strong Faithfulness holds.
arXiv Detail & Related papers (2023-08-15T01:23:42Z)
- A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Independence Testing-Based Approach to Causal Discovery under Measurement Error and Linear Non-Gaussian Models [9.016435298155827]
Causal discovery aims to recover causal structures generating observational data.
We consider a specific formulation of the problem, where the unobserved target variables follow a linear non-Gaussian acyclic model.
We propose the Transformed Independent Noise condition, which checks for independence between a specific linear transformation of some measured variables and certain other measured variables.
arXiv Detail & Related papers (2022-10-20T05:10:37Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity acts as a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders with a single latent confounder.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Entropic Causal Inference: Identifiability and Finite Sample Results [14.495984877053948]
Entropic causal inference is a framework for inferring the causal direction between two categorical variables from observational data.
We consider the minimum entropy coupling-based algorithmic approach presented by Kocaoglu et al.
arXiv Detail & Related papers (2021-01-10T08:37:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.