Causal Discovery in Linear Structural Causal Models with Deterministic
Relations
- URL: http://arxiv.org/abs/2111.00341v1
- Date: Sat, 30 Oct 2021 21:32:42 GMT
- Title: Causal Discovery in Linear Structural Causal Models with Deterministic
Relations
- Authors: Yuqin Yang, Mohamed Nafea, AmirEmad Ghassami, Negar Kiyavash
- Abstract summary: We focus on the task of causal discovery from observational data.
We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure.
- Score: 27.06618125828978
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Linear structural causal models (SCMs) -- in which each observed variable is
generated by a subset of the other observed variables as well as a subset of
the exogenous sources -- are pervasive in causal inference and causal
discovery. However, for the task of causal discovery, existing work almost
exclusively focuses on the submodel where each observed variable is associated
with a distinct source with non-zero variance. This results in the restriction
that no observed variable can deterministically depend on other observed
variables or latent confounders. In this paper, we extend the results on
structure learning by focusing on a subclass of linear SCMs which do not have
this property, i.e., models in which observed variables can be causally
affected by any subset of the sources, and are allowed to be a deterministic
function of other observed variables or latent confounders. This allows for a
more realistic modeling of influence or information propagation in systems. We
focus on the task of causal discovery from observational data generated from a
member of this subclass. We derive a set of necessary and sufficient conditions
for unique identifiability of the causal structure. To the best of our
knowledge, this is the first work that gives identifiability results for causal
discovery under both latent confounding and deterministic relationships.
Further, we propose an algorithm for recovering the underlying causal structure
when the aforementioned conditions are satisfied. We validate our theoretical
results both on synthetic and real datasets.
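To make the model class concrete, the following is a minimal, hypothetical sketch (not the authors' code) of simulating observational data from a linear SCM in which one observed variable is a deterministic function of its observed parents, i.e., it has no distinct exogenous source of its own. The variable names, graph, and coefficients are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: a 3-variable linear SCM where x3 has no private noise source.
rng = np.random.default_rng(0)
n = 10_000

# Exogenous sources (x3 deliberately receives none of its own).
e1 = rng.normal(size=n)
e2 = rng.normal(size=n)

x1 = e1                      # root cause driven by its own source
x2 = 0.8 * x1 + e2           # standard noisy linear mechanism
x3 = 1.5 * x1 - 0.7 * x2     # deterministic relation: exact linear function of x1, x2

data = np.column_stack([x1, x2, x3])

# Methods that assume every observed variable has a non-degenerate noise term
# break down on x3; the paper asks when structures like this remain identifiable.
print(np.cov(data, rowvar=False))
```

In this example the covariance matrix of the observed variables is singular, which is exactly the kind of degeneracy that standard causal discovery methods rule out by assumption.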
Related papers
- A Versatile Causal Discovery Framework to Allow Causally-Related Hidden
Variables [28.51579090194802]
We introduce a novel framework for causal discovery that accommodates the presence of causally-related hidden variables almost everywhere in the causal network.
We develop a Rank-based Latent Causal Discovery algorithm, RLCD, that can efficiently locate hidden variables, determine their cardinalities, and discover the entire causal structure over both measured and hidden ones.
Experimental results on both synthetic and real-world personality data sets demonstrate the efficacy of the proposed approach in finite-sample cases.
arXiv Detail & Related papers (2023-12-18T07:57:39Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonlinearity, Feedback and Uniform Consistency in Causal Structural Learning [0.8158530638728501]
Causal Discovery aims to find automated search methods for learning causal structures from observational data.
This thesis focuses on two questions in causal discovery: (i) providing an alternative definition of k-Triangle Faithfulness that is weaker than strong faithfulness when applied to the Gaussian family of distributions, and (ii) establishing uniform consistency results under the assumption that this modified version of Strong Faithfulness holds.
arXiv Detail & Related papers (2023-08-15T01:23:42Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Causal Discovery in Linear Latent Variable Models Subject to Measurement Error [29.78435955758185]
We focus on causal discovery in the presence of measurement error in linear systems.
We demonstrate a surprising connection between this problem and causal discovery in the presence of unobserved parentless causes.
arXiv Detail & Related papers (2022-11-08T03:43:14Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.