The Bias-Variance Tradeoff of Doubly Robust Estimator with Targeted
$L_1$ regularized Neural Networks Predictions
- URL: http://arxiv.org/abs/2108.00990v1
- Date: Mon, 2 Aug 2021 15:41:27 GMT
- Title: The Bias-Variance Tradeoff of Doubly Robust Estimator with Targeted
$L_1$ regularized Neural Networks Predictions
- Authors: Mehdi Rostami, Olli Saarela, Michael Escobar
- Abstract summary: The Doubly Robust (DR) estimation of the ATE can be carried out in two steps: in the first step, the treatment and outcome are modeled, and in the second step the predictions are plugged into the DR estimator.
Model misspecification in the first step has led researchers to use Machine Learning algorithms instead of parametric models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Doubly Robust (DR) estimation of the ATE can be carried out in
two steps: in the first step, the treatment and outcome are modeled, and in
the second step the predictions are plugged into the DR estimator. Model
misspecification in the first step has led researchers to use Machine
Learning algorithms instead of parametric models. However, the existence of
strong confounders and/or Instrumental Variables (IVs) can lead complex ML
algorithms to produce near-perfect predictions for the treatment model, which
can violate the positivity assumption and inflate the variance of DR
estimators. Thus the ML algorithms must be controlled to avoid perfect
predictions for the treatment model while still learning the relationships
between the confounders and the treatment and outcome.
We use two neural network architectures and investigate how their
hyperparameters should be tuned in the presence of confounders and IVs to
achieve a favorable bias-variance tradeoff for ATE estimators such as the DR
estimator. Through simulation results, we provide recommendations on how NNs
can be employed for ATE estimation.
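The two-step DR procedure described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the first-step nuisance predictions (propensity scores and outcome regressions) have already been obtained from some learner, and it truncates propensity scores to guard against the positivity violations the abstract warns about:

```python
def aipw_ate(y, a, ps, mu1, mu0, eps=0.01):
    """Doubly robust (AIPW) estimate of the ATE.

    y:   observed outcomes
    a:   binary treatment indicators
    ps:  first-step propensity-score predictions P(A=1|X)
    mu1: first-step outcome predictions under treatment
    mu0: first-step outcome predictions under control

    Propensity scores are truncated to [eps, 1 - eps] so that
    near-perfect treatment predictions (positivity violations)
    cannot blow up the inverse-probability weights.
    """
    total = 0.0
    for yi, ai, pi, m1, m0 in zip(y, a, ps, mu1, mu0):
        p = min(max(pi, eps), 1.0 - eps)  # guard against extreme scores
        total += (m1 - m0
                  + ai * (yi - m1) / p
                  - (1 - ai) * (yi - m0) / (1.0 - p))
    return total / len(y)
```

When either the outcome model or the propensity model is correct, the augmentation terms keep the estimator consistent; when the propensity predictions approach 0 or 1, the truncation bounds the weights, trading a little bias for variance.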
Related papers
- Multiply Robust Estimator Circumvents Hyperparameter Tuning of Neural Network Models in Causal Inference [0.0]
The Multiply Robust (MR) estimator allows us to leverage all the first-step models in a single estimator.
We show that MR is the solution to a broad class of estimating equations, and is consistent if any one of the treatment models is $\sqrt{n}$-consistent.
arXiv Detail & Related papers (2023-07-20T02:31:12Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Doubly Robust Collaborative Targeted Learning for Recommendation on Data Missing Not at Random [6.563595953273317]
In recommender systems, the feedback data received is always missing not at random (MNAR).
We propose DR-TMLE, which effectively captures the merits of both error-imputation-based (EIB) and doubly robust (DR) methods.
We also propose a novel RCT-free collaborative targeted learning algorithm for DR-TMLE, called DR-TMLE-TL.
arXiv Detail & Related papers (2022-03-19T06:48:50Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Doubly Robust Estimation with Machine Learning Predictions [0.0]
We propose a normalization of AIPW (referred to as nAIPW), which can be helpful in some scenarios.
Our simulations indicate that AIPW suffers extensively if no regularization is utilized.
arXiv Detail & Related papers (2021-08-03T22:01:55Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the Importance-Guided Stochastic Gradient Descent (IGSGD) method to train models to infer from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting [70.54920804222031]
Most existing forecasting models disregard uncertainty quantification, resulting in mis-calibrated predictions.
Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations.
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP.
arXiv Detail & Related papers (2021-06-07T18:31:47Z)
- Enhanced Doubly Robust Learning for Debiasing Post-click Conversion Rate Estimation [29.27760413892272]
Post-click conversion, as a strong signal indicating user preference, is beneficial for building recommender systems.
Currently, most existing methods utilize counterfactual learning to debias recommender systems.
We propose a novel double learning approach for the MRDR estimator, which can convert the error imputation into the general CVR estimation.
arXiv Detail & Related papers (2021-05-28T06:59:49Z)
- A comparison of Monte Carlo dropout and bootstrap aggregation on the performance and uncertainty estimation in radiation therapy dose prediction with deep learning neural networks [0.46180371154032895]
We propose to use Monte Carlo dropout (MCDO) and the bootstrap aggregation (bagging) technique on deep learning models to produce uncertainty estimations for radiation therapy dose prediction.
Performance-wise, bagging provides statistically significant reduced loss value and errors in most of the metrics investigated.
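The MCDO idea summarized above can be illustrated in a few lines. This is a rough sketch, not the paper's implementation: dropout is kept active at prediction time, so repeated stochastic forward passes yield a distribution of predictions whose spread serves as an uncertainty estimate; a single linear unit in plain Python stands in for a deep model:

```python
import random
import statistics

def mc_dropout_predict(weights, x, p_drop=0.5, n_samples=100, seed=0):
    """Monte Carlo dropout for a single linear unit (illustrative).

    Each forward pass randomly zeroes weights with probability
    p_drop and rescales the survivors (inverted dropout), so the
    mean of the passes approximates the prediction and the standard
    deviation serves as an uncertainty estimate.
    """
    rng = random.Random(seed)
    preds = []
    for _ in range(n_samples):
        total = 0.0
        for w, xi in zip(weights, x):
            if rng.random() >= p_drop:              # weight survives this pass
                total += (w / (1.0 - p_drop)) * xi  # inverted-dropout scaling
        preds.append(total)
    return statistics.mean(preds), statistics.stdev(preds)
```

With `p_drop=0` every pass is identical and the spread collapses to zero; increasing the dropout rate widens the predictive distribution, which is what makes the spread usable as an uncertainty signal.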
arXiv Detail & Related papers (2020-11-01T00:24:43Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators of the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
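The cross-fitting scheme behind these estimators is simple sample splitting: each observation's nuisance predictions come from a model trained on the other folds. A minimal sketch of the index bookkeeping (a hypothetical helper, not the paper's code) might look like:

```python
def crossfit_folds(n, k=2):
    """Index splits for K-fold cross-fitting.

    Returns (train_idx, est_idx) pairs: nuisance models are fit on
    train_idx and their predictions are used only on the held-out
    est_idx, so no observation is scored by a model that saw it.
    """
    folds = [list(range(i, n, k)) for i in range(k)]  # simple strided folds
    splits = []
    for j, est_idx in enumerate(folds):
        train_idx = [i for m, f in enumerate(folds) if m != j for i in f]
        splits.append((train_idx, est_idx))
    return splits
```

The estimator is then computed by averaging the DR contributions over all folds, which is what restores the favorable bias and coverage properties reported above when flexible ML learners are used in the first step.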