Efficient adjustment for complex covariates: Gaining efficiency with DOPE
- URL: http://arxiv.org/abs/2402.12980v1
- Date: Tue, 20 Feb 2024 13:02:51 GMT
- Title: Efficient adjustment for complex covariates: Gaining efficiency with DOPE
- Authors: Alexander Mangulad Christgau and Niels Richard Hansen
- Abstract summary: We propose a framework that accommodates adjustment for any subset of information expressed by the covariates.
Based on our theoretical results, we propose the Debiased Outcome-adapted Propensity Estimator (DOPE) for efficient estimation of the average treatment effect (ATE).
Our results show that the DOPE provides an efficient and robust methodology for ATE estimation in various observational settings.
- Score: 56.537164957672715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Covariate adjustment is a ubiquitous method used to estimate the average
treatment effect (ATE) from observational data. Assuming a known graphical
structure of the data generating model, recent results give graphical criteria
for optimal adjustment, which enables efficient estimation of the ATE. However,
graphical approaches are challenging for high-dimensional and complex data, and
it is not straightforward to specify a meaningful graphical model of
non-Euclidean data such as texts. We propose a general framework that
accommodates adjustment for any subset of information expressed by the
covariates. We generalize prior works and leverage these results to identify
the optimal covariate information for efficient adjustment. This information is
minimally sufficient for prediction of the outcome conditionally on treatment.
Based on our theoretical results, we propose the Debiased Outcome-adapted
Propensity Estimator (DOPE) for efficient estimation of the ATE, and we provide
asymptotic results for the DOPE under general conditions. Compared to the
augmented inverse propensity weighted (AIPW) estimator, the DOPE can retain its
efficiency even when the covariates are highly predictive of treatment. We
illustrate this with a single-index model, and with an implementation of the
DOPE based on neural networks, we demonstrate its performance on simulated and
real data. Our results show that the DOPE provides an efficient and robust
methodology for ATE estimation in various observational settings.
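As a rough illustration of the estimator family the abstract describes, the following Python sketch computes an AIPW-style ATE estimate in which the propensity model is fit on an outcome-adapted representation (the predicted potential outcomes) rather than on the raw covariates. It is only a sketch of the idea, not the authors' DOPE implementation: the variable names, the simulated data, and the choice of scikit-learn learners are illustrative assumptions, and a faithful implementation would additionally use cross-fitting and the neural-network parameterization described in the paper.

```python
# Illustrative AIPW-style ATE estimate with an "outcome-adapted" propensity.
# All names are hypothetical; this is not the authors' DOPE implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))                       # covariates
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment
Y = X[:, 0] + 2 * T + rng.normal(size=n)          # outcome; true ATE = 2

# 1) Outcome regressions, fit separately on treated and control units.
mu1 = GradientBoostingRegressor().fit(X[T == 1], Y[T == 1])
mu0 = GradientBoostingRegressor().fit(X[T == 0], Y[T == 0])
m1, m0 = mu1.predict(X), mu0.predict(X)

# 2) Outcome-adapted propensity: regress T on the predicted outcomes only.
Z = np.column_stack([m0, m1])
e_hat = LogisticRegression().fit(Z, T).predict_proba(Z)[:, 1]
e_hat = np.clip(e_hat, 0.01, 0.99)                # trim extreme propensities

# 3) Debiased (AIPW-form) estimate of the ATE.
ate = np.mean(m1 - m0
              + T * (Y - m1) / e_hat
              - (1 - T) * (Y - m0) / (1 - e_hat))
print(f"Estimated ATE: {ate:.3f}")
```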
Related papers
- Deep Learning Methods for the Noniterative Conditional Expectation G-Formula for Causal Inference from Complex Observational Data [3.0958655016140892]
The g-formula can be used to estimate causal effects of sustained treatment strategies using observational data.
Parametric models are subject to model misspecification, which may result in biased causal estimates.
We propose a unified deep learning framework for the NICE g-formula estimator.
arXiv Detail & Related papers (2024-10-28T21:00:46Z)
- Adaptive-TMLE for the Average Treatment Effect based on Randomized Controlled Trial Augmented with Real-World Data [0.0]
We consider the problem of estimating the average treatment effect (ATE) when both randomized control trial (RCT) data and real-world data (RWD) are available.
We introduce an adaptive targeted minimum loss-based estimation framework for this problem.
arXiv Detail & Related papers (2024-05-12T07:10:26Z)
- C-XGBoost: A tree boosting model for causal effect estimation [8.246161706153805]
Causal effect estimation aims at estimating the Average Treatment Effect as well as the Conditional Average Treatment Effect of a treatment on an outcome from the available data.
We propose a new causal inference model, named C-XGBoost, for the prediction of potential outcomes.
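For orientation only, a minimal two-model (T-learner) baseline using the xgboost library is sketched below: one regressor per treatment arm predicts the potential outcomes, and their difference gives a CATE/ATE estimate. This is an assumed stand-in, not the C-XGBoost architecture proposed in the paper; the simulated data and hyperparameters are made up.

```python
# Hypothetical two-model (T-learner) baseline with XGBoost; not the paper's
# actual C-XGBoost architecture, just a simple point of reference.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))
T = rng.binomial(1, 0.5, size=n)
Y = X[:, 0] + T * (1 + X[:, 1]) + rng.normal(size=n)    # heterogeneous effect

# One regressor per treatment arm.
model_1 = XGBRegressor(n_estimators=200, max_depth=3).fit(X[T == 1], Y[T == 1])
model_0 = XGBRegressor(n_estimators=200, max_depth=3).fit(X[T == 0], Y[T == 0])

cate = model_1.predict(X) - model_0.predict(X)          # per-unit effect estimate
print("CATE mean (ATE estimate):", cate.mean())         # true ATE is 1
```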
arXiv Detail & Related papers (2024-03-31T17:43:37Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed, formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
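A minimal sketch of that logistic-loss formulation is given below, assuming a 1-D unnormalized Gaussian model with a learned normalizing constant and a fixed Gaussian noise distribution; the parameter names and optimization settings are illustrative and are not taken from the paper.

```python
# Minimal NCE sketch: fit a 1-D unnormalized Gaussian by classifying data
# against noise with the logistic loss. Names and settings are illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x_data = torch.randn(5000) * 2.0 + 1.0        # "real" data ~ N(1, 2^2)
nu = 1.0                                      # noise-to-data sample ratio
noise = torch.distributions.Normal(0.0, 3.0)  # known noise distribution
x_noise = noise.sample((int(nu * len(x_data)),))

mu = torch.zeros((), requires_grad=True)          # model mean
log_sigma = torch.zeros((), requires_grad=True)   # model log-std
c = torch.zeros((), requires_grad=True)           # learned log-normalizer

def log_model(x):
    # Unnormalized log-density plus the learned constant c.
    return -0.5 * ((x - mu) / log_sigma.exp()) ** 2 + c

opt = torch.optim.Adam([mu, log_sigma, c], lr=0.05)
log_nu = torch.log(torch.tensor(nu))
for _ in range(500):
    # Logit of "this point is data rather than noise".
    g_data = log_model(x_data) - noise.log_prob(x_data) - log_nu
    g_noise = log_model(x_noise) - noise.log_prob(x_noise) - log_nu
    loss = F.softplus(-g_data).mean() + nu * F.softplus(g_noise).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(mu), float(log_sigma.exp()))      # should approach roughly 1 and 2
```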
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- Estimate-Then-Optimize versus Integrated-Estimation-Optimization versus Sample Average Approximation: A Stochastic Dominance Perspective [15.832111591654293]
We show that a reverse behavior appears when the model class is well-specified and there is sufficient data.
We also demonstrate how standard sample average approximation (SAA) performs the worst when the model class is well-specified in terms of regret.
arXiv Detail & Related papers (2023-04-13T21:54:53Z)
- Treatment-RSPN: Recurrent Sum-Product Networks for Sequential Treatment Regimes [3.7004311481324677]
Sum-product networks (SPNs) have emerged as a novel deep learning architecture enabling highly efficient probabilistic inference.
We propose a general framework for modelling sequential treatment decision-making behaviour and treatment response using RSPNs.
We evaluate our approach on a synthetic dataset as well as real-world data from the MIMIC-IV intensive care unit medical database.
arXiv Detail & Related papers (2022-11-14T00:18:44Z)
- Extension of Dynamic Mode Decomposition for dynamic systems with incomplete information based on t-model of optimal prediction [69.81996031777717]
The Dynamic Mode Decomposition has proved to be a very efficient technique to study dynamic data.
The application of this approach becomes problematic if the available data is incomplete because some smaller-scale dimensions are either missing or unmeasured.
We consider a first-order approximation of the Mori-Zwanzig decomposition, state the corresponding optimization problem and solve it with the gradient-based optimization method.
arXiv Detail & Related papers (2022-02-23T11:23:59Z)
- Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable semi-implicit variational inference (SIVI) method.
Our method maps SIVI's evidence lower bound to a tractable objective with lower-variance gradients.
arXiv Detail & Related papers (2021-01-15T11:39:09Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We study what we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
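A hedged PyTorch sketch of the general idea follows: at prediction time the BatchNorm layers are switched back to training mode so they normalize with the statistics of the incoming (possibly shifted) batch rather than the running statistics stored during training. The model architecture and helper name below are made up for illustration and are not the authors' code.

```python
# Sketch of prediction-time batch normalization in PyTorch: BatchNorm layers
# are put back in training mode at test time so they normalize with the
# current batch's statistics (note this also updates the running statistics).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.BatchNorm1d(32),
                      nn.ReLU(), nn.Linear(32, 2))

def predict_with_batch_stats(model, x):
    model.eval()                                  # standard inference mode
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.train()                             # use current batch statistics
    with torch.no_grad():
        return model(x)

shifted_batch = torch.randn(64, 16) * 3.0 + 1.0   # covariate-shifted inputs
logits = predict_with_batch_stats(model, shifted_batch)
print(logits.shape)                               # torch.Size([64, 2])
```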
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.