Finding Valid Adjustments under Non-ignorability with Minimal DAG
Knowledge
- URL: http://arxiv.org/abs/2106.11560v1
- Date: Tue, 22 Jun 2021 06:32:06 GMT
- Title: Finding Valid Adjustments under Non-ignorability with Minimal DAG
Knowledge
- Authors: Abhin Shah, Karthikeyan Shanmugam, Kartik Ahuja
- Abstract summary: Treatment effect estimation from observational data is a fundamental problem in causal inference.
We show that knowing even one parent of the treatment variable (provided by an expert) remarkably suffices to test a broad class of back-door criteria.
- Score: 24.602623636437315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Treatment effect estimation from observational data is a fundamental problem
in causal inference. There are two very different schools of thought that have
tackled this problem. On the one hand, the Pearlian framework commonly assumes
structural knowledge (provided by an expert) in the form of Directed Acyclic
Graphs (DAGs) and provides graphical criteria such as the back-door criterion
to identify the valid adjustment sets. On the other hand, the potential
outcomes (PO) framework commonly assumes that all the observed features satisfy
ignorability (i.e., no hidden confounding), which in general is untestable. In
this work, we take steps to bridge these two frameworks. We show that knowing
even one parent of the treatment variable (provided by an expert) remarkably
suffices to test a broad class of (but not all) back-door criteria.
Importantly, we also cover the non-trivial case where the entire set
of observed features is not ignorable (generalizing the PO framework) without
requiring all the parents of the treatment variable to be observed. Our key
technical idea involves a more general result: given a synthetic sub-sampling
(or environment) variable that is a function of the parent variable, we show
that an invariance test involving this sub-sampling variable is equivalent to
testing a broad class of back-door criteria. We demonstrate our approach on
synthetic data as well as real causal effect estimation benchmarks.
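To make the key idea concrete, here is a minimal sketch of the sub-sampling/invariance idea, not the paper's implementation: the linear synthetic data-generating process, the thresholding of the known parent, and the regression-coefficient comparison used as the invariance check are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's actual test).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

W = rng.normal(size=n)                # known parent of the treatment (expert knowledge)
Z = rng.normal(size=n)                # candidate adjustment variable
T = W + Z + rng.normal(size=n)        # treatment
Y = 2.0 * T + Z + rng.normal(size=n)  # outcome; true effect of T is 2.0

# Synthetic sub-sampling (environment) variable: any function of the parent W.
E = (W > 0).astype(int)

def fit_conditional(mask):
    """Regress Y on (T, Z) within one environment and return the coefficients."""
    X = np.column_stack([T[mask], Z[mask]])
    return LinearRegression().fit(X, Y[mask]).coef_

# If {Z} is a valid back-door adjustment set, the conditional model of Y
# given (T, Z) should be invariant across the environments E = 0 and E = 1.
print("E=0:", fit_conditional(E == 0))
print("E=1:", fit_conditional(E == 1))
# Approximately equal coefficient vectors are consistent with invariance;
# a large discrepancy would flag {Z} as failing the tested back-door class.
```

A formal version would replace the coefficient comparison with a proper statistical invariance test, but the structure is the same: build an environment variable from the known parent, then check whether the conditional of the outcome given the treatment and the candidate adjustment set changes across environments.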
Related papers
- Detecting and Identifying Selection Structure in Sequential Data [53.24493902162797]
We argue that the selective inclusion of data points based on latent objectives is common in practical situations, such as music sequences.
We show that selection structure is identifiable without any parametric assumptions or interventional experiments.
We also propose a provably correct algorithm to detect and identify selection structures as well as other types of dependencies.
arXiv Detail & Related papers (2024-06-29T20:56:34Z) - A Versatile Causal Discovery Framework to Allow Causally-Related Hidden
Variables [28.51579090194802]
We introduce a novel framework for causal discovery that accommodates the presence of causally-related hidden variables almost everywhere in the causal network.
We develop a Rank-based Latent Causal Discovery algorithm, RLCD, that can efficiently locate hidden variables, determine their cardinalities, and discover the entire causal structure over both measured and hidden ones.
Experimental results on both synthetic and real-world personality data sets demonstrate the efficacy of the proposed approach in finite-sample cases.
arXiv Detail & Related papers (2023-12-18T07:57:39Z) - Nonparametric Identifiability of Causal Representations from Unknown
Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Causal Effect Estimation with Variational AutoEncoder and the Front Door
Criterion [23.20371860838245]
The front-door criterion is often difficult to identify the set of variables used for front-door adjustment from data.
By leveraging the ability of deep generative models in representation learning, we propose FDVAE to learn the representation of a Front-Door adjustment set with a Variational AutoEncoder.
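For context, the standard front-door adjustment formula (textbook background, not a detail taken from this paper) for a treatment T, a full mediator M, and an outcome Y is:

$$P(y \mid \mathrm{do}(t)) = \sum_{m} P(m \mid t) \sum_{t'} P(y \mid m, t')\, P(t')$$

Identifying which observed variables can play the role of M is exactly the adjustment-set discovery problem FDVAE targets.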
arXiv Detail & Related papers (2023-04-24T10:04:28Z) - BaCaDI: Bayesian Causal Discovery with Unknown Interventions [118.93754590721173]
BaCaDI operates in the continuous space of latent probabilistic representations of both causal structures and interventions.
In experiments on synthetic causal discovery tasks and simulated gene-expression data, BaCaDI outperforms related methods in identifying causal structures and intervention targets.
arXiv Detail & Related papers (2022-06-03T16:25:48Z) - Causal Transportability for Visual Recognition [70.13627281087325]
We show that standard classifiers fail because the association between images and labels is not transportable across settings.
We then show that the causal effect, which severs all sources of confounding, remains invariant across domains.
This motivates us to develop an algorithm to estimate the causal effect for image classification.
arXiv Detail & Related papers (2022-04-26T15:02:11Z) - On Testability of the Front-Door Model via Verma Constraints [7.52579126252489]
Front-door criterion can be used to identify and compute causal effects despite unmeasured confounders.
Key assumptions -- the existence of a variable that fully mediates the effect of the treatment on the outcome, and which simultaneously does not suffer from similar issues of confounding -- are often deemed implausible.
We show that under mild conditions involving an auxiliary variable, the assumptions encoded in the front-door model may be tested via generalized equality constraints.
arXiv Detail & Related papers (2022-03-01T00:38:29Z) - Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (for short, C-DAGs)
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
arXiv Detail & Related papers (2022-02-22T21:27:31Z) - Deconfounded Score Method: Scoring DAGs with Dense Unobserved
Confounding [101.35070661471124]
We show that unobserved confounding leaves a characteristic footprint in the observed data distribution that allows for disentangling spurious and causal effects.
We propose an adjusted score-based causal discovery algorithm that may be implemented with general-purpose solvers and scales to high-dimensional problems.
arXiv Detail & Related papers (2021-03-28T11:07:59Z)