Local Constraint-Based Causal Discovery under Selection Bias
- URL: http://arxiv.org/abs/2203.01848v1
- Date: Thu, 3 Mar 2022 16:52:22 GMT
- Title: Local Constraint-Based Causal Discovery under Selection Bias
- Authors: Philip Versteeg, Cheng Zhang and Joris M. Mooij
- Abstract summary: We consider the problem of discovering causal relations from independence constraints when selection bias, in addition to confounding, is present.
We focus instead on local patterns of independence relations, where we find no sound method that uses only three variables and can incorporate background knowledge.
Y-Structure patterns are shown to be sound in predicting causal relations from data under selection bias, where cycles may be present.
- Score: 8.465604845422238
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of discovering causal relations from independence
constraints when selection bias, in addition to confounding, is present. While the
seminal FCI algorithm is sound and complete in this setup, no criterion for the
causal interpretation of its output under selection bias is presently known. We
focus instead on local patterns of independence relations, where we find no
sound method that uses only three variables and can incorporate background knowledge.
Y-Structure patterns are shown to be sound in predicting causal relations from
data under selection bias, where cycles may be present. We introduce a
finite-sample scoring rule for Y-Structures that is shown to successfully
predict causal relations in simulation experiments that include selection
mechanisms. On real-world microarray data, we show that a Y-Structure variant
performs well across different datasets, potentially circumventing spurious
correlations due to selection bias.
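
As a concrete illustration of the kind of local pattern the abstract refers to, the sketch below checks the classical Y-Structure independence pattern over four observed variables (W1 -> X <- W2, X -> Y) with a Fisher-z partial-correlation test and, if the pattern holds, predicts that X is a cause of Y. The function names, the choice of test, the significance level, and the toy data are assumptions made for this example; the paper's finite-sample scoring rule and its treatment of selection bias are not reproduced here.

```python
# Illustrative sketch only: checks the textbook Y-Structure independence pattern
# (W1 -> X <- W2, X -> Y) with a Fisher-z partial-correlation test.
# Names, the alpha threshold, and the toy data are assumptions for this example;
# this is NOT the paper's finite-sample scoring rule.
import numpy as np
from scipy.stats import norm


def fisher_z_independent(data, i, j, cond=(), alpha=0.05):
    """Return True if column i appears independent of column j given columns cond."""
    n = data.shape[0]
    idx = [i, j, *cond]
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.pinv(corr)                          # precision matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    r = float(np.clip(r, -0.999999, 0.999999))
    z = 0.5 * np.log((1 + r) / (1 - r))                  # Fisher z-transform
    stat = np.sqrt(n - len(cond) - 3) * abs(z)           # ~ N(0,1) under independence
    p_value = 2 * (1 - norm.cdf(stat))
    return p_value > alpha


def is_y_structure(data, w1, w2, x, y, alpha=0.05):
    """Check the Y-Structure pattern over (w1, w2, x, y); if it holds, x -> y is predicted."""
    indep = lambda i, j, cond=(): fisher_z_independent(data, i, j, cond, alpha)
    return (
        indep(w1, w2)                 # W1 and W2 marginally independent
        and not indep(w1, x)          # both W's depend on X
        and not indep(w2, x)
        and not indep(w1, w2, (x,))   # conditioning on the collider X couples W1 and W2
        and not indep(x, y)           # X and Y dependent
        and indep(w1, y, (x,))        # X screens Y off from W1 and W2
        and indep(w2, y, (x,))
    )


# Toy usage on data generated from W1 -> X <- W2, X -> Y:
rng = np.random.default_rng(0)
n = 5000
w1, w2 = rng.normal(size=n), rng.normal(size=n)
x = w1 + w2 + 0.5 * rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
data = np.column_stack([w1, w2, x, y])
print(is_y_structure(data, 0, 1, 2, 3))  # expected: True (up to test noise)
```

With finite samples the individual tests can misfire, which is one motivation for replacing a hard pattern check with a score such as the finite-sample scoring rule the abstract mentions.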
Related papers
- Detecting and Identifying Selection Structure in Sequential Data [53.24493902162797]
We argue that the selective inclusion of data points based on latent objectives is common in practical situations, such as music sequences.
We show that selection structure is identifiable without any parametric assumptions or interventional experiments.
We also propose a provably correct algorithm to detect and identify selection structures as well as other types of dependencies.
arXiv Detail & Related papers (2024-06-29T20:56:34Z)
- Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that relies on forward and backward feature selection procedures (a generic sketch of the forward pass appears after this list).
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases.
arXiv Detail & Related papers (2023-10-17T08:04:45Z)
- Approximating Counterfactual Bounds while Fusing Observational, Biased and Randomised Data Sources [64.96984404868411]
We address the problem of integrating data from multiple, possibly biased, observational and interventional studies.
We show that the likelihood of the available data has no local maxima.
We then show how the same approach can address the general case of multiple datasets.
arXiv Detail & Related papers (2023-07-31T11:28:24Z)
- Self-Compatibility: Evaluating Causal Discovery without Ground Truth [28.72650348646176]
We propose a novel method for falsifying the output of a causal discovery algorithm in the absence of ground truth.
Our key insight is that while statistical learning seeks stability across subsets of data points, causal learning should seek stability across subsets of variables.
We prove that detecting incompatibilities can falsify wrongly inferred causal relations due to violation of assumptions or errors from finite sample effects.
arXiv Detail & Related papers (2023-07-18T18:59:42Z)
- Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Decorrelated Clustering with Data Selection Bias [55.91842043124102]
We propose a novel Decorrelation regularized K-Means algorithm (DCKM) for clustering with data selection bias.
Our DCKM algorithm achieves significant performance gains, indicating the necessity of removing unexpected feature correlations induced by selection bias.
arXiv Detail & Related papers (2020-06-29T08:55:50Z)
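
The "Causal Feature Selection via Transfer Entropy" entry above mentions forward and backward feature selection procedures; the sketch below illustrates only a generic forward pass scored by a simple histogram-based (plug-in) transfer-entropy estimate. The binning scheme, the stopping threshold, and all function names are assumptions made for illustration, and the code does not reproduce that paper's estimator, its backward pass, or its theoretical guarantees.

```python
# Generic sketch: greedy forward feature selection scored by a plug-in estimate of
# transfer entropy TE(X -> Y) = I(Y_t ; X_{t-1} | Y_{t-1}) on binned series.
# Bin count and threshold are illustrative assumptions, not values from the cited paper.
import numpy as np


def discretize(x, bins=8):
    """Map a real-valued series to integer bin labels using empirical quantiles."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(x, edges)


def entropy(labels):
    """Empirical Shannon entropy (in nats) of rows of discrete labels."""
    _, counts = np.unique(labels, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))


def transfer_entropy(x, y, bins=8):
    """Plug-in estimate of TE(X -> Y) with one-step lags."""
    xd, yd = discretize(x, bins), discretize(y, bins)
    y_t, y_prev, x_prev = yd[1:], yd[:-1], xd[:-1]
    # I(Y_t ; X_prev | Y_prev) = H(Y_t,Y_prev) + H(X_prev,Y_prev) - H(Y_prev) - H(Y_t,Y_prev,X_prev)
    return (entropy(np.column_stack([y_t, y_prev]))
            + entropy(np.column_stack([x_prev, y_prev]))
            - entropy(y_prev[:, None])
            - entropy(np.column_stack([y_t, y_prev, x_prev])))


def forward_select(features, target, threshold=0.02, bins=8):
    """Greedily add the feature with the largest transfer entropy towards the target
    until no remaining feature scores above the threshold."""
    selected, remaining = [], list(range(features.shape[1]))
    while remaining:
        scores = {j: transfer_entropy(features[:, j], target, bins) for j in remaining}
        best = max(scores, key=scores.get)
        if scores[best] < threshold:
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```

A more faithful variant would condition each candidate's score on the features already selected (and would add the backward elimination pass); both refinements are omitted here to keep the sketch short.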
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.