Doubly Robust Estimation of Direct and Indirect Quantile Treatment
Effects with Machine Learning
- URL: http://arxiv.org/abs/2307.01049v1
- Date: Mon, 3 Jul 2023 14:27:15 GMT
- Title: Doubly Robust Estimation of Direct and Indirect Quantile Treatment
Effects with Machine Learning
- Authors: Yu-Chin Hsu and Martin Huber and Yu-Min Yen
- Abstract summary: We suggest a machine learning estimator of direct and indirect quantile treatment effects under a selection-on-observables assumption.
The proposed method is based on the efficient score functions of the cumulative distribution functions of potential outcomes.
We also propose a multiplier bootstrap for statistical inference and show its validity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We suggest double/debiased machine learning estimators of direct and indirect
quantile treatment effects under a selection-on-observables assumption. This
permits disentangling the causal effect of a binary treatment at a specific
outcome rank into an indirect component that operates through an intermediate
variable called mediator and an (unmediated) direct impact. The proposed method
is based on the efficient score functions of the cumulative distribution
functions of potential outcomes, which are robust to certain misspecifications
of the nuisance parameters, i.e., the outcome, treatment, and mediator models.
We estimate these nuisance parameters by machine learning and use cross-fitting
to reduce overfitting bias in the estimation of direct and indirect quantile
treatment effects. We establish uniform consistency and asymptotic normality of
our effect estimators. We also propose a multiplier bootstrap for statistical
inference and show the validity of the multiplier bootstrap. Finally, we
investigate the finite sample performance of our method in a simulation study
and apply it to empirical data from the National Job Corps Study to assess the
direct and indirect earnings effects of training.
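To make the estimation strategy concrete, the following is a minimal sketch (not the authors' implementation) of a cross-fitted doubly robust moment for the potential-outcome CDFs, inverted into a total quantile treatment effect. The simulated data, the logistic nuisance models, the fold count, and the threshold grid are illustrative assumptions; the direct/indirect decomposition via a mediator model and the multiplier bootstrap are omitted.
```python
# Minimal sketch: cross-fitted doubly robust estimation of F_{Y(1)} and F_{Y(0)}
# on a threshold grid, then inversion to a total quantile treatment effect.
# All data and nuisance-model choices below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
D = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))             # binary treatment
Y = X[:, 0] + D * (1 + 0.5 * X[:, 1]) + rng.normal(size=n)  # median QTE is 1 here

y_grid = np.quantile(Y, np.linspace(0.02, 0.98, 97))        # CDF evaluation thresholds


def dr_cdf(d, X, D, Y, y_grid, n_folds=3):
    """Cross-fitted doubly robust estimate of F_{Y(d)}(y) over y_grid."""
    scores = np.zeros((len(Y), len(y_grid)))
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(X):
        # Propensity model P(D = d | X), fitted on the training folds only.
        ps = LogisticRegression(max_iter=1000).fit(X[train], D[train])
        ps_d = np.clip(ps.predict_proba(X[test])[:, d], 0.01, 0.99)
        grp = train[D[train] == d]                           # training units with D = d
        for j, y in enumerate(y_grid):
            lab = (Y[grp] <= y).astype(int)
            if lab.min() == lab.max():                       # degenerate threshold
                mu = np.full(len(test), float(lab[0]))
            else:
                # Conditional CDF model P(Y <= y | D = d, X) as a classifier.
                mu = LogisticRegression(max_iter=1000).fit(
                    X[grp], lab).predict_proba(X[test])[:, 1]
            # Doubly robust moment: IPW-corrected residual plus the plug-in CDF.
            scores[test, j] = (D[test] == d) / ps_d * ((Y[test] <= y) - mu) + mu
    cdf = np.clip(scores.mean(axis=0), 0.0, 1.0)
    return np.maximum.accumulate(cdf)                        # enforce monotonicity


def invert_cdf(cdf, y_grid, tau):
    """Smallest grid point y with estimated F(y) >= tau."""
    return y_grid[min(int(np.searchsorted(cdf, tau)), len(y_grid) - 1)]


tau = 0.5                                                    # outcome rank of interest
F1, F0 = dr_cdf(1, X, D, Y, y_grid), dr_cdf(0, X, D, Y, y_grid)
qte = invert_cdf(F1, y_grid, tau) - invert_cdf(F0, y_grid, tau)
print(f"Doubly robust QTE estimate at rank {tau}: {qte:.3f}")
```
The per-observation rows of `scores` are, up to centering, doubly robust score contributions for the CDF at each threshold; perturbing them with i.i.d. multipliers and recomputing the inverted quantiles would give a multiplier-bootstrap approximation of the estimator's sampling distribution, in the spirit of the inference procedure described above.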
Related papers
- Estimating Distributional Treatment Effects in Randomized Experiments: Machine Learning for Variance Reduction [6.909352249236339]
We propose a novel regression adjustment method designed for estimating distributional treatment effect parameters in randomized experiments.
Our approach incorporates pre-treatment covariates into a distributional regression framework, utilizing machine learning techniques to improve the precision of distributional treatment effect estimators.
arXiv Detail & Related papers (2024-07-22T20:28:29Z) - Continuous Treatment Effect Estimation Using Gradient Interpolation and
Kernel Smoothing [43.259723628010896]
We advocate the direct approach of augmenting training individuals with independently sampled treatments and inferred counterfactual outcomes.
We evaluate our method on five benchmarks and show that it outperforms six state-of-the-art methods in terms of counterfactual estimation error.
arXiv Detail & Related papers (2024-01-27T15:52:58Z) - Treatment Effect Estimation with Observational Network Data using
Machine Learning [0.0]
Causal inference methods for treatment effect estimation usually assume independent units.
We develop augmented inverse probability weighting (AIPW) estimators for estimation and inference of the direct effect of the treatment with observational data from a single (social) network with spillover effects.
arXiv Detail & Related papers (2022-06-29T12:52:41Z) - Assessment of Treatment Effect Estimators for Heavy-Tailed Data [70.72363097550483]
A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized control trials (RCTs) is the lack of ground truth (or validation set) to test their performance.
We provide a novel cross-validation-like methodology to address this challenge.
We evaluate our methodology across 709 RCTs implemented in the Amazon supply chain.
arXiv Detail & Related papers (2021-12-14T17:53:01Z) - Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be learned as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z) - Efficient Causal Inference from Combined Observational and
Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders with a single latent confounder.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z) - Double machine learning for sample selection models [0.12891210250935145]
This paper considers the evaluation of discretely distributed treatments when outcomes are only observed for a subpopulation due to sample selection or outcome attrition.
We make use of (a) Neyman-orthogonal, doubly robust, and efficient score functions, which imply the robustness of treatment effect estimation to moderate regularization biases in the machine learning-based estimation of the outcome, treatment, or sample selection models and (b) sample splitting (or cross-fitting) to prevent overfitting bias.
arXiv Detail & Related papers (2020-11-30T19:40:21Z) - Machine learning for causal inference: on the use of cross-fit
estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage (a minimal cross-fit AIPW sketch appears after this list).
arXiv Detail & Related papers (2020-04-21T23:09:55Z) - Almost-Matching-Exactly for Treatment Effect Estimation under Network
Interference [73.23326654892963]
We propose a matching method that recovers direct treatment effects from randomized experiments where units are connected in an observed network.
Our method matches units almost exactly on counts of unique subgraphs within their neighborhood graphs.
arXiv Detail & Related papers (2020-03-02T15:21:20Z) - Generalization Bounds and Representation Learning for Estimation of
Potential Outcomes and Causal Effects [61.03579766573421]
We study estimation of individual-level causal effects, such as a single patient's response to alternative medication.
We devise representation learning algorithms that minimize our bound, by regularizing the representation's induced treatment group distance.
We extend these algorithms to simultaneously learn a weighted representation to further reduce treatment group distances.
arXiv Detail & Related papers (2020-01-21T10:16:33Z) - Nonparametric inference for interventional effects with multiple
mediators [0.0]
We provide theory that allows for more flexible, possibly machine learning-based, estimation techniques.
We demonstrate multiple robustness properties of the proposed estimators.
Our work thus provides a means of leveraging modern statistical learning techniques in estimation of interventional mediation effects.
arXiv Detail & Related papers (2020-01-16T19:05:00Z)
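As referenced in the cross-fit estimators entry above, here is a minimal sketch of a cross-fit AIPW (doubly robust) estimator of the average causal effect, the estimator class underlying several of the related entries. It is not the code of any cited paper; the simulated data, random-forest nuisance learners, and fold count are illustrative assumptions.
```python
# Minimal sketch: cross-fit AIPW (doubly robust) estimator of the average causal effect.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))
D = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))             # binary treatment
Y = X[:, 0] + 1.0 * D + rng.normal(size=n)                  # true ACE = 1 in this design

psi = np.zeros(n)                                           # per-unit AIPW scores
for train, test in KFold(3, shuffle=True, random_state=1).split(X):
    # Nuisance models fitted on the training folds, evaluated on the held-out fold.
    ps = RandomForestClassifier(min_samples_leaf=25, random_state=1).fit(X[train], D[train])
    e = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
    mu1 = RandomForestRegressor(min_samples_leaf=25, random_state=1).fit(
        X[train][D[train] == 1], Y[train][D[train] == 1]).predict(X[test])
    mu0 = RandomForestRegressor(min_samples_leaf=25, random_state=1).fit(
        X[train][D[train] == 0], Y[train][D[train] == 0]).predict(X[test])
    # AIPW score: outcome-model contrast plus inverse-probability-weighted residuals.
    psi[test] = (mu1 - mu0
                 + D[test] * (Y[test] - mu1) / e
                 - (1 - D[test]) * (Y[test] - mu0) / (1 - e))

ace = psi.mean()
se = psi.std(ddof=1) / np.sqrt(n)                           # influence-function-based SE
print(f"Cross-fit AIPW ACE: {ace:.3f} (SE {se:.3f})")
```
The averaged score is consistent if either the propensity model or the outcome models are well estimated, and cross-fitting keeps the nuisance learners from overfitting the units on which the score is evaluated.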