A Weaker Faithfulness Assumption based on Triple Interactions
- URL: http://arxiv.org/abs/2010.14265v2
- Date: Wed, 4 Aug 2021 08:34:28 GMT
- Title: A Weaker Faithfulness Assumption based on Triple Interactions
- Authors: Alexander Marx, Arthur Gretton, Joris M. Mooij
- Abstract summary: We propose a weaker assumption that we call $2$-adjacency faithfulness.
We propose a sound orientation rule for causal discovery that applies under weaker assumptions.
- Score: 89.59955143854556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the core assumptions in causal discovery is the faithfulness
assumption, i.e., assuming that independencies found in the data are due to
separations in the true causal graph. This assumption can, however, be violated
in many ways, including XOR connections, deterministic functions, or cancelling
paths. In this work, we propose a weaker assumption that we call $2$-adjacency
faithfulness. In contrast to adjacency faithfulness, which assumes that no pair
of variables connected in the causal graph is conditionally independent, we
only require that there is no conditional independence between a node and a
subset of its Markov blanket containing up to two
nodes. Equivalently, we adapt orientation faithfulness to this setting. We
further propose a sound orientation rule for causal discovery that applies
under weaker assumptions. As a proof of concept, we derive a modified Grow and
Shrink algorithm that recovers the Markov blanket of a target node and prove
its correctness under strictly weaker assumptions than the standard
faithfulness assumption.
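The modified Grow and Shrink procedure mentioned in the abstract lends itself to a compact sketch. The following Python snippet is a minimal illustration only, assuming a user-supplied conditional-independence oracle `ci_test`; the pair step in the grow phase merely mimics the paper's idea of testing interactions with up to two Markov-blanket nodes and is not the authors' exact modified algorithm or its correctness conditions.
```python
from itertools import combinations

def grow_shrink_mb(target, variables, ci_test):
    """Grow-Shrink-style search for a candidate Markov blanket of `target`.

    `ci_test(x, ys, cond)` is a user-supplied oracle (hypothetical interface)
    returning True when variable `x` is judged conditionally independent of
    the set of variables `ys` given the conditioning set `cond`.
    Illustrative sketch only, not the paper's algorithm.
    """
    mb = set()
    changed = True
    while changed:
        changed = False
        # Grow phase, single candidates: add any node that is still
        # dependent on the target given the current blanket.
        for v in [v for v in variables if v not in mb]:
            if not ci_test(target, {v}, frozenset(mb)):
                mb.add(v)
                changed = True
        # Grow phase, pairs: catch XOR-like connections where each node
        # alone looks independent of the target but the pair does not.
        remaining = [v for v in variables if v not in mb]
        for u, v in combinations(remaining, 2):
            if u in mb and v in mb:
                continue
            if not ci_test(target, {u, v}, frozenset(mb)):
                mb.update((u, v))
                changed = True
    # Shrink phase: drop nodes that are independent of the target
    # given the rest of the current blanket.
    for v in list(mb):
        if ci_test(target, {v}, frozenset(mb - {v})):
            mb.discard(v)
    return mb
```
In practice, `ci_test` would wrap a statistical test (e.g., a partial-correlation or kernel-based test); the pair step is what allows XOR-like dependencies, invisible to single-node tests, to enter the blanket during the grow phase.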
Related papers
- Identifying General Mechanism Shifts in Linear Causal Representations [58.6238439611389]
We consider the linear causal representation learning setting where we observe a linear mixing of $d$ unknown latent factors.
Recent work has shown that it is possible to recover the latent factors as well as the underlying structural causal model over them.
We provide a surprising identifiability result that it is indeed possible, under some very mild standard assumptions, to identify the set of shifted nodes.
arXiv Detail & Related papers (2024-10-31T15:56:50Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Causal Discovery via Conditional Independence Testing with Proxy Variables [35.3493980628004]
The presence of unobserved variables, such as the latent confounder, can introduce bias in conditional independence testing.
We propose a novel hypothesis-testing procedure that can effectively examine the existence of the causal relationship over continuous variables.
arXiv Detail & Related papers (2023-05-09T09:08:39Z)
- Exploiting Independent Instruments: Identification and Distribution Generalization [3.701112941066256]
We exploit the independence for distribution generalization by taking into account higher moments.
We prove that the proposed estimator is invariant to distributional shifts on the instruments.
These results hold even in the under-identified case where the instruments are not sufficiently rich to identify the causal function.
arXiv Detail & Related papers (2022-02-03T21:49:04Z)
- Nested Counterfactual Identification from Arbitrary Surrogate Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)