Nonparametric inverse probability weighted estimators based on the
highly adaptive lasso
- URL: http://arxiv.org/abs/2005.11303v2
- Date: Sat, 3 Jul 2021 07:35:38 GMT
- Title: Nonparametric inverse probability weighted estimators based on the
highly adaptive lasso
- Authors: Ashkan Ertefaie, Nima S. Hejazi, Mark J. van der Laan
- Abstract summary: Inverse probability weighted estimators are known to be inefficient and suffer from the curse of dimensionality.
We propose a class of nonparametric inverse probability weighted estimators in which the weighting mechanism is estimated via undersmoothing of the highly adaptive lasso.
Our developments have broad implications for the construction of efficient inverse probability weighted estimators in large statistical models and a variety of problem settings.
- Score: 0.966840768820136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse probability weighted estimators are the oldest and potentially most
commonly used class of procedures for the estimation of causal effects. By
adjusting for selection biases via a weighting mechanism, these procedures
estimate an effect of interest by constructing a pseudo-population in which
selection biases are eliminated. Despite their ease of use, these estimators
require the correct specification of a model for the weighting mechanism, are
known to be inefficient, and suffer from the curse of dimensionality. We
propose a class of nonparametric inverse probability weighted estimators in
which the weighting mechanism is estimated via undersmoothing of the highly
adaptive lasso, a nonparametric regression function proven to converge at
$n^{-1/3}$-rate to the true weighting mechanism. We demonstrate that our
estimators are asymptotically linear with variance converging to the
nonparametric efficiency bound. Unlike doubly robust estimators, our procedures
require neither derivation of the efficient influence function nor
specification of the conditional outcome model. Our theoretical developments
have broad implications for the construction of efficient inverse probability
weighted estimators in large statistical models and a variety of problem
settings. We assess the practical performance of our estimators in simulation
studies and demonstrate use of our proposed methodology with data from a
large-scale epidemiologic study.
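To make the plug-in structure concrete, here is a minimal sketch of an inverse probability weighted estimate of an average treatment effect. It is illustrative only: a cross-validated logistic regression stands in for the paper's undersmoothed highly adaptive lasso, and the function name and clipping level are choices of this sketch, not part of the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

def ipw_ate(X, A, Y, propensity_model=None):
    """Plug-in inverse probability weighted estimate of the average
    treatment effect E[Y(1)] - E[Y(0)] from covariates X, binary
    treatment A, and outcome Y."""
    if propensity_model is None:
        # Placeholder learner: the paper instead estimates the weights with
        # the highly adaptive lasso at an undersmoothed regularization level.
        propensity_model = LogisticRegressionCV(max_iter=1000)
    propensity_model.fit(X, A)
    g = np.clip(propensity_model.predict_proba(X)[:, 1], 1e-3, 1 - 1e-3)

    # Weighting by 1/g(X) for treated units and 1/(1 - g(X)) for controls
    # builds the pseudo-population in which treatment is independent of X.
    return np.mean(A * Y / g - (1 - A) * Y / (1 - g))
```

The paper's key ingredient is the choice of regularization: rather than the cross-validation-optimal penalty, a smaller (undersmoothed) penalty is used so that the fitted weights solve the score equations required for asymptotic linearity; the cross-validated learner above is only a convenient stand-in.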
Related papers
- Non-parametric efficient estimation of marginal structural models with multi-valued time-varying treatments [0.9558392439655012]
We use machine learning together with recent developments in semi-parametric efficiency theory for longitudinal studies to propose such an estimator.
We show conditions under which the proposed estimator is efficient and asymptotically normal, and sequentially doubly robust in the sense that it is consistent if, for each time point, either the outcome or the treatment mechanism is consistently estimated.
arXiv Detail & Related papers (2024-09-27T14:29:12Z)
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [0.9558392439655012]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size-dependent smoothing parameters.
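For context, here is a minimal sketch of the classical nearest-neighbor matching estimator of an average treatment effect that this line of work refines; the paper's root-n-consistent modifications are not reproduced, and the helper name and use of a k-d tree are choices of this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def matching_ate(X, A, Y, k=1):
    """Classical k-nearest-neighbor matching estimate of the average
    treatment effect: each unit's unobserved potential outcome is imputed
    by the mean outcome of its k nearest covariate neighbors in the
    opposite treatment arm. X is an (n, d) covariate array."""
    X, A, Y = np.asarray(X, float), np.asarray(A, int), np.asarray(Y, float)
    y_treated, y_control = Y[A == 1], Y[A == 0]
    nn_treated = cKDTree(X[A == 1]).query(X, k=k)[1].reshape(len(X), -1)
    nn_control = cKDTree(X[A == 0]).query(X, k=k)[1].reshape(len(X), -1)

    # Use the observed outcome where available, the matched mean otherwise.
    y1 = np.where(A == 1, Y, y_treated[nn_treated].mean(axis=1))
    y0 = np.where(A == 0, Y, y_control[nn_control].mean(axis=1))
    return np.mean(y1 - y0)
```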
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
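The abstention rule can be pictured as thresholding an upper confidence bound on the conditional variance rather than its point estimate alone. The toy sketch below conveys that idea under simplifying assumptions; the threshold form and critical value are not the paper's actual test.

```python
import numpy as np

def selective_predict(y_hat, var_hat, var_se, sigma2_max, z=1.645):
    """Return predictions only where the conditional variance is credibly
    below a tolerance sigma2_max, abstaining (np.nan) elsewhere.

    Thresholding the upper confidence bound var_hat + z * var_se, rather
    than var_hat itself, accounts for the uncertainty of the variance
    predictor as well as its value."""
    accept = (np.asarray(var_hat) + z * np.asarray(var_se)) <= sigma2_max
    return np.where(accept, y_hat, np.nan), accept
```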
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Hyperparameter Tuning and Model Evaluation in Causal Effect Estimation [2.7823528791601686]
This paper investigates the interplay between the four different aspects of model evaluation for causal effect estimation.
We find that most causal estimators are roughly equivalent in performance if tuned thoroughly enough.
We call for more research into causal model evaluation to unlock the optimum performance not currently being delivered even by state-of-the-art procedures.
arXiv Detail & Related papers (2023-03-02T17:03:02Z)
- A Bayesian Semiparametric Method For Estimating Causal Quantile Effects [1.1118668841431563]
We propose a semiparametric conditional distribution regression model that allows inference on any functionals of counterfactual distributions.
We show via simulations that the use of double balancing score for confounding adjustment improves performance over adjusting for any single score alone.
We apply the proposed method to the North Carolina birth weight dataset to analyze the effect of maternal smoking on infant's birth weight.
arXiv Detail & Related papers (2022-11-03T05:15:18Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
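One way to read the bias constraint is as a penalty on the squared bias of the fitted estimator, computed by grouping training samples that share the same underlying parameter value. The loss below is an illustrative rendering of that idea under assumptions of this sketch, not the paper's exact objective; the grouping scheme and the weight lam are hypothetical.

```python
import numpy as np

def bias_constrained_loss(theta_hat, theta_true, groups, lam=1.0):
    """Mean squared error plus a squared-bias penalty.

    Samples sharing a `groups` label are assumed to share one true
    parameter value; the penalty pushes each group's mean estimate toward
    that value so the learned estimator is approximately unbiased."""
    theta_hat = np.asarray(theta_hat, float)
    theta_true = np.asarray(theta_true, float)
    groups = np.asarray(groups)
    mse = np.mean((theta_hat - theta_true) ** 2)
    bias_sq = np.mean([
        (theta_hat[groups == g].mean() - theta_true[groups == g].mean()) ** 2
        for g in np.unique(groups)
    ])
    return mse + lam * bias_sq
```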
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Weight-of-evidence 2.0 with shrinkage and spline-binning [3.925373521409752]
We propose a formalized, data-driven and powerful method to transform categorical predictors.
We build upon the weight-of-evidence approach and propose to estimate the proportions using shrinkage estimators.
We present the results of a series of experiments in a fraud detection setting, which illustrate the effectiveness of the presented approach.
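As a concrete reference point, the sketch below computes a basic weight-of-evidence encoding with the per-category event rates shrunk toward the overall event rate; the particular shrinkage rule and its strength alpha are illustrative assumptions, not the estimators proposed in the paper.

```python
import numpy as np
import pandas as pd

def woe_encode(x, y, alpha=10.0):
    """Weight-of-evidence encoding of a categorical predictor x against a
    binary target y, with shrinkage of the per-category event rates.

    WoE(c) = log(p_c / (1 - p_c)) - log(p / (1 - p)), where p is the global
    event rate and p_c is the category rate shrunk toward p with pseudo-count
    alpha (an empirical-Bayes style rule assumed for this sketch)."""
    df = pd.DataFrame({"x": x, "y": y})
    p = df["y"].mean()
    grp = df.groupby("x")["y"].agg(["mean", "size"])
    p_c = (grp["size"] * grp["mean"] + alpha * p) / (grp["size"] + alpha)
    woe = np.log(p_c / (1 - p_c)) - np.log(p / (1 - p))
    return df["x"].map(woe).to_numpy(), woe
```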
arXiv Detail & Related papers (2021-01-05T13:13:16Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
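A minimal sketch of a doubly-robust, cross-fit estimator of the average causal effect of the kind compared in this study is shown below; the gradient-boosting learners, function name, and clipping level are illustrative stand-ins, not the specific machine learning methods evaluated in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

def crossfit_aipw_ace(X, A, Y, n_splits=2, seed=0):
    """Cross-fit augmented IPW (doubly-robust) estimate of the average
    causal effect: nuisance models are trained on one fold and evaluated
    on the held-out fold, so each unit's contribution uses out-of-fold
    nuisance estimates."""
    X, A, Y = np.asarray(X), np.asarray(A), np.asarray(Y)
    psi = np.empty(len(Y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Propensity score and outcome regressions fit on the training fold.
        g = GradientBoostingClassifier().fit(X[train], A[train])
        m1 = GradientBoostingRegressor().fit(X[train][A[train] == 1], Y[train][A[train] == 1])
        m0 = GradientBoostingRegressor().fit(X[train][A[train] == 0], Y[train][A[train] == 0])
        # Evaluate the efficient-influence-function contributions on the test fold.
        ps = np.clip(g.predict_proba(X[test])[:, 1], 1e-3, 1 - 1e-3)
        mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
        psi[test] = (mu1 - mu0
                     + A[test] * (Y[test] - mu1) / ps
                     - (1 - A[test]) * (Y[test] - mu0) / (1 - ps))
    return psi.mean()
```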
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
- Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome step.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.