Testability of Instrumental Variables in Additive Nonlinear, Non-Constant Effects Models
- URL: http://arxiv.org/abs/2411.12184v1
- Date: Tue, 19 Nov 2024 02:56:45 GMT
- Title: Testability of Instrumental Variables in Additive Nonlinear, Non-Constant Effects Models
- Authors: Xichen Guo, Zheng Li, Biwei Huang, Yan Zeng, Zhi Geng, Feng Xie
- Abstract summary: We propose an Auxiliary-based Independence Test (AIT) condition to test whether a variable is a valid instrument.
Experimental results on synthetic data and three real-world datasets show the effectiveness of our proposed condition.
- Score: 18.29392314939638
- Abstract: We address the issue of the testability of instrumental variables derived from observational data. Most existing testable implications center on scenarios where the treatment is discrete, e.g., the instrumental inequality (Pearl, 1995), or where the effect is assumed to be constant, e.g., the instrumental variables condition based on the principle of independent mechanisms (Burauel, 2023). However, treatments are often continuous, such as drug dosages or nutritional content levels, and non-constant effects arise in many real-world scenarios. In this paper, we consider an additive nonlinear, non-constant effects model with unmeasured confounders, in which treatments can be either discrete or continuous, and propose an Auxiliary-based Independence Test (AIT) condition to test whether a variable is a valid instrument. We first show that if the candidate instrument is valid, then the AIT condition holds. Moreover, we illustrate the implications of the AIT condition and demonstrate that, under certain conditions, it is necessary and sufficient for detecting all invalid IVs. We also extend the AIT condition to settings with covariates and introduce a practical testing algorithm. Experimental results on synthetic data and three real-world datasets show the effectiveness of our proposed condition.
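The abstract does not spell out how the AIT condition is constructed, but tests of this kind typically follow a common template: under an additive model of the (assumed) form Y = f(X) + g(U) + eps with candidate instrument Z, estimate f, form the auxiliary residual R = Y - f_hat(X), and test whether R is independent of Z. The sketch below illustrates only that generic template, not the authors' AIT; the control-function estimator, the permutation-based distance-correlation test, and all helper names (`auxiliary_independence_test`, `_dist_corr`) are our own choices.

```python
# Minimal sketch of an auxiliary-residual independence test for IV validity.
# NOT the paper's AIT implementation: f is estimated with a simple
# control-function step, and independence is checked with a permutation-based
# distance-correlation test. All helper names and estimator choices are ours.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def _dist_corr(a, b):
    """Sample distance correlation between two 1-d arrays."""
    a = a.reshape(-1, 1); b = b.reshape(-1, 1)
    A = np.abs(a - a.T); B = np.abs(b - b.T)          # pairwise distances
    A = A - A.mean(0) - A.mean(1)[:, None] + A.mean() # double centering
    B = B - B.mean(0) - B.mean(1)[:, None] + B.mean()
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0)) / np.sqrt(denom) if denom > 0 else 0.0

def auxiliary_independence_test(y, x, z, n_perm=500, seed=0):
    """Test a candidate instrument z: estimate f(x) via a control-function
    step, then test the auxiliary residual for independence from z."""
    rng = np.random.default_rng(seed)
    # Stage 1: regress x on z; the residual proxies the unmeasured confounder.
    stage1 = GradientBoostingRegressor().fit(z.reshape(-1, 1), x)
    v = x - stage1.predict(z.reshape(-1, 1))
    # Stage 2: regress y on (x, v); evaluating at v = 0 heuristically
    # isolates the x-component under the additivity assumption.
    stage2 = GradientBoostingRegressor().fit(np.column_stack([x, v]), y)
    f_hat = stage2.predict(np.column_stack([x, np.zeros_like(v)]))
    r = y - f_hat                      # auxiliary residual
    stat = _dist_corr(r, z)
    # Permutation p-value: under H0 (valid instrument), r and z are independent.
    perm = [_dist_corr(r, rng.permutation(z)) for _ in range(n_perm)]
    p_value = (1 + sum(s >= stat for s in perm)) / (1 + n_perm)
    return stat, p_value
```

Under this template, a small p-value counts as evidence against the validity of the candidate instrument z; the paper's actual auxiliary-variable construction and test statistic may differ.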
Related papers
- A Bayesian Classification Trees Approach to Treatment Effect Variation with Noncompliance [0.5356944479760104]
Estimating varying treatment effects in randomized trials with noncompliance is inherently challenging.
Existing flexible machine learning methods are highly sensitive to the weak instruments problem.
We present a Bayesian Causal Forest model for binary response variables in scenarios with noncompliance.
arXiv Detail & Related papers (2024-08-14T18:33:55Z)
- Meta-Learners for Partially-Identified Treatment Effects Across Multiple Environments [67.80453452949303]
Estimating the conditional average treatment effect (CATE) from observational data is relevant for many applications such as personalized medicine.
Here, we focus on the widespread setting where the observational data come from multiple environments.
We propose model-agnostic learners (so-called meta-learners) to estimate bounds on the partially identified CATE; these can be used in combination with arbitrary machine learning models.
arXiv Detail & Related papers (2024-06-04T16:31:43Z)
- Identification of Single-Treatment Effects in Factorial Experiments [0.0]
I show that when multiple interventions are randomized in experiments, the effect any single intervention would have outside the experimental setting is not identified absent heroic assumptions.
Observational studies and factorial experiments provide information about potential-outcome distributions under zero and multiple interventions, respectively.
I show that researchers who rely on this type of design must either justify linearity of functional forms or specify, with Directed Acyclic Graphs, how variables are related in the real world.
arXiv Detail & Related papers (2024-05-16T04:01:53Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure based on testing a hypothesis about the value of the conditional variance at a given point.
Unlike existing methods, the proposed procedure accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Conditional Independence Testing via Latent Representation Learning [2.566492438263125]
LCIT (Latent representation based Conditional Independence Test) is a novel non-parametric method for conditional independence testing based on representation learning.
Our main contribution is a generative framework for testing the independence between X and Y given Z.
arXiv Detail & Related papers (2022-09-04T07:16:03Z)
- Robust and Agnostic Learning of Conditional Distributional Treatment Effects [62.44901952244514]
The conditional average treatment effect (CATE) is the best point prediction of individual causal effects, but it captures only the conditional mean.
In aggregate analyses, this is usually addressed by measuring the distributional treatment effect (DTE).
We provide a new robust and model-agnostic methodology for learning the conditional DTE (CDTE) for a wide class of problems.
arXiv Detail & Related papers (2022-05-23T17:40:31Z)
- On Testability of the Front-Door Model via Verma Constraints [7.52579126252489]
The front-door criterion can be used to identify and compute causal effects despite unmeasured confounders.
Its key assumptions -- the existence of a variable that fully mediates the effect of the treatment on the outcome, and which simultaneously does not suffer from similar issues of confounding -- are often deemed implausible.
We show that under mild conditions involving an auxiliary variable, the assumptions encoded in the front-door model may be tested via generalized equality constraints.
arXiv Detail & Related papers (2022-03-01T00:38:29Z)
- Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning [107.70165026669308]
In offline reinforcement learning (RL), an optimal policy is learned solely from a priori collected observational data.
We study a confounded Markov decision process where the transition dynamics admit an additive nonlinear functional form.
We propose a provably efficient IV-aided Value Iteration (IVVI) algorithm based on a primal-dual reformulation of the conditional moment restriction.
arXiv Detail & Related papers (2021-02-19T13:01:40Z)
- Valid Causal Inference with (Some) Invalid Instruments [24.794879633855373]
We show how to perform consistent IV estimation despite violations of the exclusion assumption.
We achieve accurate estimates of conditional average treatment effects using an ensemble of deep network-based estimators.
arXiv Detail & Related papers (2020-06-19T21:09:26Z)
- Stable Prediction via Leveraging Seed Variable [73.9770220107874]
Previous machine learning methods may exploit subtle spurious correlations in training data, induced by non-causal variables, for prediction.
We propose a conditional-independence-test-based algorithm that separates causal variables given a seed variable as prior knowledge and adopts them for stable prediction (a minimal sketch of this idea appears after the list).
Our algorithm outperforms state-of-the-art methods for stable prediction.
arXiv Detail & Related papers (2020-06-09T06:56:31Z)
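As a loose illustration of the seed-variable idea in the last entry above (not the authors' algorithm), the sketch below greedily grows a set of causal variables from a known-causal seed feature, admitting a candidate only if it remains dependent on the outcome given the current set under a Gaussian partial-correlation test. The helper names (`partial_corr_pvalue`, `select_causal`) and the forward-selection scheme are our own simplifications.

```python
# Minimal sketch of CI-test-based variable selection seeded with a
# known-causal feature; a generic simplification, not the paper's algorithm.

import numpy as np
from scipy import stats

def partial_corr_pvalue(x, y, Z):
    """p-value for the partial correlation of x and y given columns of Z."""
    def residual(t):
        # Residualize t on Z (with intercept) via least squares.
        D = np.column_stack([Z, np.ones(len(t))])
        coef, *_ = np.linalg.lstsq(D, t, rcond=None)
        return t - D @ coef
    rx, ry = residual(x), residual(y)
    r = np.corrcoef(rx, ry)[0, 1]
    n, k = len(x), Z.shape[1]
    t_stat = r * np.sqrt((n - k - 2) / max(1 - r**2, 1e-12))
    return 2 * stats.t.sf(abs(t_stat), df=n - k - 2)

def select_causal(X, y, seed_idx, alpha=0.05):
    """Greedy forward selection: keep features that are still dependent
    on y after conditioning on the currently selected causal set."""
    selected = [seed_idx]
    for j in range(X.shape[1]):
        if j == seed_idx:
            continue
        p = partial_corr_pvalue(X[:, j], y, X[:, selected])
        if p < alpha:   # dependent on y given the current causal set
            selected.append(j)
    return selected
```

The selected features can then be fed to any downstream predictor; the actual paper's separation criterion and test may differ from this simplification.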
This list is automatically generated from the titles and abstracts of the papers on this site.