Proximal Causal Inference with Hidden Mediators: Front-Door and Related
Mediation Problems
- URL: http://arxiv.org/abs/2111.02927v1
- Date: Thu, 4 Nov 2021 15:04:26 GMT
- Title: Proximal Causal Inference with Hidden Mediators: Front-Door and Related
Mediation Problems
- Authors: AmirEmad Ghassami, Ilya Shpitser, Eric Tchetgen Tchetgen
- Abstract summary: Proximal causal inference was recently proposed as a framework to identify causal effects from observational data in the presence of hidden confounders.
We establish a new hidden front-door criterion which extends the classical front-door result to allow for hidden mediators for which proxies are available.
We show that identification of certain causal effects remains possible even in settings where challenges in (i) and (ii) might co-exist.
- Score: 18.84623320851991
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Proximal causal inference was recently proposed as a framework to
identify causal effects from observational data in the presence of hidden
confounders for which proxies are available. In this paper, we extend the
proximal causal approach to settings where identification of causal effects
hinges upon a set of mediators which, unfortunately, are not directly
observed; however, proxies of the hidden mediators are measured. Specifically,
we establish (i) a new hidden front-door criterion which extends the classical
front-door result to allow for hidden mediators for which proxies are
available; and (ii) an extension of causal mediation analysis that identifies
direct and indirect causal effects under unconfoundedness conditions in a
setting where the mediator in view is hidden, but error-prone proxies of the
latter are available. We view (i) and (ii) as important steps towards the
practical application of front-door criteria and mediation analysis, as
mediators are almost always measured with error and thus the most one can hope
for in practice is that our measurements are proxies of the underlying
mediating mechanisms. Finally, we show that identification of certain causal
effects remains possible even in settings where the challenges in (i) and (ii)
might co-exist.
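For context, the two classical results that (i) and (ii) build on can be stated as follows; these displays summarize standard background, not the paper's new identification results. The classical front-door criterion identifies the effect of a treatment A on an outcome Y through a fully observed mediator M via
$$
P(Y \mid do(A=a)) \;=\; \sum_{m} P(M=m \mid A=a) \sum_{a'} P(Y \mid M=m, A=a')\, P(A=a'),
$$
and standard causal mediation analysis decomposes the total effect into natural direct and indirect effects,
$$
\mathrm{NDE} = E[Y(a, M(a^*))] - E[Y(a^*, M(a^*))], \qquad
\mathrm{NIE} = E[Y(a, M(a))] - E[Y(a, M(a^*))].
$$
Both displays require the mediator M itself to be measured; the paper's contribution is to recover analogous identification when only error-prone proxies of M are available.

As a further illustration only, a minimal plug-in estimator of the classical front-door functional for a discrete treatment and mediator might look like the sketch below. The column names A, M, Y, the synthetic data, and the pandas-based implementation are assumptions made here for illustration; this is not the paper's proximal estimator for the hidden-mediator case.

```python
import numpy as np
import pandas as pd

def frontdoor_plugin(df: pd.DataFrame, a) -> float:
    """Plug-in estimate of E[Y | do(A=a)] via the classical front-door
    adjustment, assuming a discrete treatment A and a fully observed,
    discrete mediator M (the hidden-mediator setting of the paper would
    instead work with error-prone proxies of M)."""
    p_a = df["A"].value_counts(normalize=True)                            # P(A = a')
    p_m_given_a = df.loc[df["A"] == a, "M"].value_counts(normalize=True)  # P(M = m | A = a)
    effect = 0.0
    for m, pm in p_m_given_a.items():
        inner = 0.0
        for a_prime, pa in p_a.items():
            cell = df[(df["M"] == m) & (df["A"] == a_prime)]
            if len(cell) == 0:
                continue  # empty cell: crude handling, acceptable for a sketch
            inner += cell["Y"].mean() * pa                                # E[Y | M=m, A=a'] P(A=a')
        effect += pm * inner                                              # weight by P(M = m | A = a)
    return effect

# Example usage on synthetic data with a hidden confounder U of A and Y:
rng = np.random.default_rng(0)
n = 20_000
u = rng.binomial(1, 0.5, n)                    # hidden confounder
a_obs = rng.binomial(1, 0.3 + 0.4 * u)         # treatment, confounded by U
m_obs = rng.binomial(1, 0.2 + 0.6 * a_obs)     # mediator, affected only by A
y_obs = 1.0 * m_obs + 1.5 * u + rng.normal(0, 1, n)
df = pd.DataFrame({"A": a_obs, "M": m_obs, "Y": y_obs})
ate = frontdoor_plugin(df, 1) - frontdoor_plugin(df, 0)  # should be close to 0.6
```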
Related papers
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- Automating the Selection of Proxy Variables of Unmeasured Confounders [16.773841751009748]
We extend the existing proxy variable estimator to accommodate scenarios where multiple unmeasured confounders exist between the treatments and the outcome.
We propose two data-driven methods for the selection of proxy variables and for the unbiased estimation of causal effects.
arXiv Detail & Related papers (2024-05-25T08:53:49Z)
- Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations [62.48505112245388]
We take an in-depth look at the causal awareness of modern representations of agent interactions.
We show that recent representations are already partially resilient to perturbations of non-causal agents.
We propose a metric learning approach that regularizes latent representations with causal annotations.
arXiv Detail & Related papers (2023-12-07T18:57:03Z)
- Partial Identification of Causal Effects Using Proxy Variables [19.23377338970307]
Proximal causal inference is a recently proposed framework for evaluating causal effects in the presence of unmeasured confounding.
In this paper, we propose partial identification methods that do not require completeness and obviate the need for identification of a bridge function.
arXiv Detail & Related papers (2023-04-10T04:18:27Z)
- Disentangled Representation for Causal Mediation Analysis [25.114619307838602]
Causal mediation analysis is a method that is often used to reveal direct and indirect effects.
Deep learning shows promise in mediation analysis, but current methods assume only latent confounders that affect treatment, mediator, and outcome simultaneously.
We propose the Disentangled Mediation Analysis Variational AutoEncoder (DMAVAE), which disentangles the representations of latent confounders into three types to accurately estimate the natural direct effect, natural indirect effect and total effect.
arXiv Detail & Related papers (2023-02-19T23:37:17Z)
- Neighborhood Adaptive Estimators for Causal Inference under Network Interference [152.4519491244279]
We consider the violation of the classical no-interference assumption, meaning that the treatment of one individual might affect the outcomes of another.
To make interference tractable, we consider a known network that describes how interference may travel.
We study estimators for the average direct treatment effect on the treated in such a setting.
arXiv Detail & Related papers (2022-12-07T14:53:47Z)
- Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking this anti-causal perspective, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness.
arXiv Detail & Related papers (2022-09-20T02:41:17Z)
- Inferring Effect Ordering Without Causal Effect Estimation [1.6114012813668932]
Predictive models are often employed to guide interventions across various domains, such as advertising, customer retention, and personalized medicine.
Our paper addresses the question of when and how these predictive models can be interpreted causally.
We formalize two assumptions, full latent mediation and latent monotonicity, that are jointly sufficient for inferring effect ordering without direct causal effect estimation.
arXiv Detail & Related papers (2022-06-25T02:15:22Z)
- Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z)
- Nested Counterfactual Identification from Arbitrary Surrogate Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z)