Combining Doubly Robust Methods and Machine Learning for Estimating
Average Treatment Effects for Observational Real-world Data
- URL: http://arxiv.org/abs/2204.10969v4
- Date: Wed, 10 Jan 2024 03:44:43 GMT
- Title: Combining Doubly Robust Methods and Machine Learning for Estimating
Average Treatment Effects for Observational Real-world Data
- Authors: Xiaoqing Tan, Shu Yang, Wenyu Ye, Douglas E. Faries, Ilya Lipkovich,
Zbigniew Kadziola
- Abstract summary: We show how machine learning can be used to boost the performance of doubly robust estimators.
We provide guidance on how to apply doubly robust estimators.
- Score: 3.487090989628347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Observational cohort studies are increasingly being used for comparative
effectiveness research to assess the safety of therapeutics. Recently, various
doubly robust methods have been proposed for average treatment effect
estimation by combining the treatment model and the outcome model via different
vehicles, such as matching, weighting, and regression. The key advantage of
doubly robust estimators is that they require either the treatment model or the
outcome model to be correctly specified to obtain a consistent estimator of
average treatment effects, and therefore lead to a more accurate and often more
precise inference. However, little work has been done to understand how doubly
robust estimators differ due to their unique strategies of using the treatment
and outcome models and how machine learning techniques can be combined to boost
their performance. Here we examine multiple popular doubly robust methods and
compare their performance using different treatment and outcome modeling via
extensive simulations and a real-world application. We found that incorporating
machine learning with doubly robust estimators such as the targeted maximum
likelihood estimator gives the best overall performance. Practical guidance on
how to apply doubly robust estimators is provided.
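As a concrete (and deliberately simplified) illustration of the general recipe, the sketch below implements an augmented inverse probability weighting (AIPW) doubly robust estimator with machine-learning nuisance models. The scikit-learn gradient-boosting models and variable names are illustrative assumptions, not the paper's implementation; the paper's best performer, TMLE, adds a targeting step that is not shown here.

    # Minimal AIPW (doubly robust) ATE sketch with ML nuisance models.
    # Assumes X (n x p covariates), t (0/1 treatment), y (outcome) as NumPy arrays.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

    def aipw_ate(X, t, y):
        # Treatment model: propensity score e(X) = P(T = 1 | X)
        ps_model = GradientBoostingClassifier().fit(X, t)
        e = np.clip(ps_model.predict_proba(X)[:, 1], 0.01, 0.99)  # trim extreme scores

        # Outcome models: m1(X) = E[Y | T = 1, X] and m0(X) = E[Y | T = 0, X]
        m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1]).predict(X)
        m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0]).predict(X)

        # Doubly robust score: outcome-model contrast plus IPW-corrected residuals
        psi = (m1 - m0
               + t * (y - m1) / e
               - (1 - t) * (y - m0) / (1 - e))
        ate = psi.mean()
        se = psi.std(ddof=1) / np.sqrt(len(y))  # influence-function-based standard error
        return ate, se

In practice, cross-fitting of the nuisance models (see the cross-fit entry at the end of the related-papers list) or TMLE's targeting step would typically be layered on top of this basic form.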
Related papers
- Estimating Distributional Treatment Effects in Randomized Experiments: Machine Learning for Variance Reduction [6.909352249236339]
We propose a novel regression adjustment method designed for estimating distributional treatment effect parameters in randomized experiments.
Our approach incorporates pre-treatment covariates into a distributional regression framework, utilizing machine learning techniques to improve the precision of distributional treatment effect estimators.
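To make the idea concrete, here is a minimal sketch of regression-adjusted distributional treatment effects; the linear-probability outcome models, threshold grid, and variable names are illustrative assumptions, not the distributional regression framework of that paper.

    # Illustrative sketch: regression-adjusted estimates of the distributional
    # treatment effect F_1(c) - F_0(c) over a grid of thresholds c, in a
    # randomized experiment with covariates X, treatment t, and outcome y.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def distributional_te(X, t, y, thresholds):
        effects = []
        for c in thresholds:
            z = (y <= c).astype(float)                   # indicator outcome 1{Y <= c}
            m1 = LinearRegression().fit(X[t == 1], z[t == 1])
            m0 = LinearRegression().fit(X[t == 0], z[t == 0])
            # Regression adjustment: average each arm's predicted CDF over all units
            f1 = np.clip(m1.predict(X), 0.0, 1.0).mean()
            f0 = np.clip(m0.predict(X), 0.0, 1.0).mean()
            effects.append(f1 - f0)
        return np.array(effects)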
arXiv Detail & Related papers (2024-07-22T20:28:29Z)
- Continuous Treatment Effect Estimation Using Gradient Interpolation and Kernel Smoothing [43.259723628010896]
We advocate the direct approach of augmenting training individuals with independently sampled treatments and inferred counterfactual outcomes.
We evaluate our method on five benchmarks and show that our method outperforms six state-of-the-art methods on the counterfactual estimation error.
arXiv Detail & Related papers (2024-01-27T15:52:58Z)
- Counterfactual Data Augmentation with Contrastive Learning [27.28511396131235]
We introduce a model-agnostic data augmentation method that imputes the counterfactual outcomes for a selected subset of individuals.
We use contrastive learning to learn a representation space and a similarity measure such that in the learned representation space close individuals identified by the learned similarity measure have similar potential outcomes.
This property ensures reliable imputation of counterfactual outcomes for the individuals with close neighbors from the alternative treatment group.
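The gist of the imputation step can be sketched as follows; this is illustrative only, with the learned representation and similarity measure assumed to come from the contrastive-learning step and passed in here as a generic `embed` function and a distance threshold.

    # Illustrative sketch: impute counterfactual outcomes by nearest-neighbour
    # lookup in a learned representation space, only for units that have a
    # sufficiently close neighbour in the alternative treatment group.
    import numpy as np

    def impute_counterfactuals(X, t, y, embed, max_dist):
        Z = embed(X)                                    # learned representations (assumed given)
        imputed = np.full(len(y), np.nan)               # NaN = no sufficiently close neighbour
        for i in range(len(y)):
            other = np.flatnonzero(t != t[i])           # candidates from the other treatment arm
            if other.size == 0:
                continue
            d = np.linalg.norm(Z[other] - Z[i], axis=1)
            j = int(np.argmin(d))
            if d[j] <= max_dist:
                imputed[i] = y[other[j]]                # imputed outcome under the other treatment
        return imputed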
arXiv Detail & Related papers (2023-11-07T00:36:51Z)
- A Double Machine Learning Approach to Combining Experimental and Observational Data [59.29868677652324]
We propose a double machine learning approach to combine experimental and observational studies.
Our framework tests for violations of external validity and ignorability under milder assumptions.
arXiv Detail & Related papers (2023-07-04T02:53:11Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- Matched Machine Learning: A Generalized Framework for Treatment Effect Inference With Learned Metrics [87.05961347040237]
We introduce Matched Machine Learning, a framework that combines the flexibility of machine learning black boxes with the interpretability of matching.
Our framework uses machine learning to learn an optimal metric for matching units and estimating outcomes.
We show empirically that instances of Matched Machine Learning perform on par with black-box machine learning methods and better than existing matching methods for similar problems.
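A bare-bones version of metric-based matching is sketched below; `metric_transform` stands in for whatever representation or metric the machine-learning step produces, and the remaining details (K, variable names) are illustrative assumptions rather than the paper's procedure.

    # Illustrative sketch: K-nearest-neighbour matching ATE estimator in which the
    # distance is computed in a space produced by a learned metric or representation.
    import numpy as np

    def matched_ate(X, t, y, metric_transform, k=5):
        Z = metric_transform(X)                          # e.g. a learned linear map or embedding
        tau = np.empty(len(y))
        for i in range(len(y)):
            other = np.flatnonzero(t != t[i])            # units in the opposite treatment arm
            d = np.linalg.norm(Z[other] - Z[i], axis=1)
            nn = other[np.argsort(d)[:k]]                # the k closest matches
            y_cf = y[nn].mean()                          # matched estimate of the counterfactual outcome
            tau[i] = y[i] - y_cf if t[i] == 1 else y_cf - y[i]
        return tau.mean()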
arXiv Detail & Related papers (2023-04-03T19:32:30Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Evaluating (weighted) dynamic treatment effects by double machine learning [0.12891210250935145]
We consider evaluating the causal effects of dynamic treatments in a data-driven way under a selection-on-observables assumption.
We make use of so-called Neyman-orthogonal score functions, which imply the robustness of treatment effect estimation to moderate (local) misspecifications.
We demonstrate that the estimators are asymptotically normal and $\sqrt{n}$-consistent under specific conditions.
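For reference, the canonical Neyman-orthogonal (doubly robust) score for a static binary treatment, which is the population analogue of the AIPW sketch given after the abstract above and not the paper's dynamic-treatment version, can be written as

    \psi(W;\tau,\eta) = \mu_1(X) - \mu_0(X) + \frac{T\,(Y-\mu_1(X))}{e(X)} - \frac{(1-T)\,(Y-\mu_0(X))}{1-e(X)} - \tau,

where $\eta = (\mu_0, \mu_1, e)$ collects the outcome regressions and the propensity score. The moment condition $E[\psi(W;\tau,\eta)] = 0$ is insensitive to small (local) perturbations of $\eta$, which is what makes treatment effect estimation robust to moderate misspecification of the nuisance models.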
arXiv Detail & Related papers (2020-12-01T09:55:40Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
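The cross-fitting recipe itself is simple: nuisance models are fit on one fold, the doubly robust scores are evaluated on the held-out fold, and the scores from all folds are pooled. Below is a minimal two-fold sketch of that loop; the random forests, fold count, and variable names are illustrative assumptions, not the estimators compared in that study.

    # Minimal 2-fold cross-fit AIPW sketch (illustrative assumptions throughout).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.model_selection import KFold

    def crossfit_aipw(X, t, y, n_splits=2, seed=0):
        psi = np.empty(len(y))
        for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
            # Fit nuisance models on the training fold only
            e_hat = RandomForestClassifier(random_state=seed).fit(X[train], t[train])
            m1 = RandomForestRegressor(random_state=seed).fit(
                X[train][t[train] == 1], y[train][t[train] == 1])
            m0 = RandomForestRegressor(random_state=seed).fit(
                X[train][t[train] == 0], y[train][t[train] == 0])
            # Evaluate the doubly robust score on the held-out fold
            e = np.clip(e_hat.predict_proba(X[test])[:, 1], 0.01, 0.99)
            mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
            psi[test] = (mu1 - mu0
                         + t[test] * (y[test] - mu1) / e
                         - (1 - t[test]) * (y[test] - mu0) / (1 - e))
        return psi.mean()  # cross-fit estimate of the average causal effect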