Robust Causal Learning for the Estimation of Average Treatment Effects
- URL: http://arxiv.org/abs/2209.01805v1
- Date: Mon, 5 Sep 2022 07:35:58 GMT
- Title: Robust Causal Learning for the Estimation of Average Treatment Effects
- Authors: Yiyan Huang, Cheuk Hang Leung, Xing Yan, Qi Wu, Shumin Ma, Zhiri Yuan,
Dongdong Wang, Zhixiang Huang
- Abstract summary: We propose a Robust Causal Learning (RCL) method to offset the deficiencies of the Double/Debiased Machine Learning (DML) estimators.
Empirically, comprehensive experiments show that the RCL estimators give more stable estimates of the causal parameters than the DML estimators.
- Score: 14.96459402684986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many practical decision-making problems in economics and healthcare seek to
estimate the average treatment effect (ATE) from observational data.
Double/Debiased Machine Learning (DML) is one of the prevalent methods for
estimating the ATE in observational studies. However, the DML estimators can
suffer from an error-compounding issue and can even give extreme estimates
when the propensity scores are misspecified or very close to 0 or 1. Previous
studies have worked around this issue with empirical tricks such as propensity
score trimming, yet none of the existing literature solves the problem from a
theoretical standpoint. In this paper, we propose a Robust Causal Learning
(RCL) method to offset the deficiencies of the DML estimators. Theoretically,
the RCL estimators i) are as consistent and doubly robust as the DML
estimators, and ii) are free of the error-compounding issue. Empirically,
comprehensive experiments show that i) the RCL estimators give more stable
estimates of the causal parameters than the DML estimators, and ii) the RCL
estimators outperform the traditional estimators and their variants when
different machine learning models are applied, on both simulated and benchmark
datasets.
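To make the error-compounding issue concrete, the sketch below shows the generic doubly robust (AIPW) score that DML-style ATE estimators average, together with the propensity score trimming trick mentioned above. This is an illustrative sketch only, not the paper's RCL estimator: the random-forest nuisance learners, the trimming threshold eps, and the synthetic data are arbitrary assumptions, and the full-sample nuisance fitting omits the cross-fitting that DML adds (see the cross-fitted sketch after the related-papers list).

```python
# Illustrative AIPW / doubly robust ATE sketch (NOT the paper's RCL estimator).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def aipw_ate(X, T, Y, eps=0.01):
    """Doubly robust ATE estimate with propensity-score trimming at eps."""
    # Nuisance 1: propensity score e(x) = P(T = 1 | X = x).
    ps_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, T)
    e_hat = ps_model.predict_proba(X)[:, 1]
    # Nuisance 2: outcome regressions mu_t(x) = E[Y | X = x, T = t].
    reg1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 1], Y[T == 1])
    reg0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 0], Y[T == 0])
    mu1, mu0 = reg1.predict(X), reg0.predict(X)
    # Trimming: without this clip, e_hat near 0 or 1 makes the inverse-propensity
    # terms below explode -- the error-compounding instability the paper targets.
    e_hat = np.clip(e_hat, eps, 1 - eps)
    score = mu1 - mu0 + T * (Y - mu1) / e_hat - (1 - T) * (Y - mu0) / (1 - e_hat)
    return score.mean()

# Toy usage on synthetic data where the true ATE is 2.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
T = rng.binomial(1, 1 / (1 + np.exp(-2 * X[:, 0])))   # confounded treatment
Y = X[:, 0] + 2.0 * T + rng.normal(size=2000)
print(aipw_ate(X, T, Y))
```

Removing the clip (or setting eps to 0) reproduces the instability described in the abstract whenever some estimated propensities fall very close to 0 or 1.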
Related papers
- Estimating Causal Effects with Double Machine Learning -- A Method Evaluation [5.904095466127043]
We review one of the most prominent methods, "double/debiased machine learning" (DML).
Our findings indicate that the application of a suitably flexible machine learning algorithm within DML improves the adjustment for various nonlinear confounding relationships.
When estimating the effects of air pollution on housing prices, we find that DML estimates are consistently larger than those of less flexible methods.
arXiv Detail & Related papers (2024-03-21T13:21:33Z)
- Calibrating doubly-robust estimators with unbalanced treatment assignment [0.0]
We propose a simple extension of the DML estimator which undersamples data for propensity score modeling.
The paper provides theoretical results showing that the new estimator retains the original estimator's properties while calibrating the scores to match the original distribution.
arXiv Detail & Related papers (2024-03-03T18:40:11Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- Hyperparameter Tuning and Model Evaluation in Causal Effect Estimation [2.7823528791601686]
This paper investigates the interplay between the four different aspects of model evaluation for causal effect estimation.
We find that most causal estimators are roughly equivalent in performance if tuned thoroughly enough.
We call for more research into causal model evaluation to unlock the optimum performance not currently being delivered even by state-of-the-art procedures.
arXiv Detail & Related papers (2023-03-02T17:03:02Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- The Bias-Variance Tradeoff of Doubly Robust Estimator with Targeted $L_1$ regularized Neural Networks Predictions [0.0]
Doubly Robust (DR) estimation of the ATE can be carried out in two steps: in the first step, the treatment and outcome are modeled, and in the second step, the predictions are plugged into the DR estimator (a cross-fitted sketch of this two-step recipe appears after this list).
The risk of model misspecification in the first step has led researchers to use machine learning algorithms instead of parametric ones.
arXiv Detail & Related papers (2021-08-02T15:41:27Z)
- Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be used as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z)
- Performance metrics for intervention-triggering prediction models do not reflect an expected reduction in outcomes from using the model [71.9860741092209]
Clinical researchers often select among and evaluate risk prediction models.
Standard metrics calculated from retrospective data are only related to model utility under certain assumptions.
When predictions are delivered repeatedly throughout time, the relationship between standard metrics and utility is further complicated.
arXiv Detail & Related papers (2020-06-02T16:26:49Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
- Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome step.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
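As a companion to the two-step doubly robust recipe and the cross-fit estimators discussed in the entries above, the sketch below cross-fits the same AIPW score: each fold's nuisance predictions are produced by models fitted only on the other folds, so no unit's own data enters its nuisance estimates. It is a generic sketch of the standard construction, not code from any of the listed papers; the logistic/linear nuisance learners, the fold count, and the trimming threshold are placeholder assumptions.

```python
# Generic cross-fitted two-step doubly robust (AIPW) ATE sketch (illustrative only).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression, LogisticRegression

def crossfit_dr_ate(X, T, Y, n_folds=5, eps=0.01):
    n = len(Y)
    e_hat, mu1, mu0 = np.zeros(n), np.zeros(n), np.zeros(n)
    # Step 1: model treatment and outcome on the training folds and predict on the
    # held-out fold (cross-fitting), so nuisance fits never reuse a unit's own data.
    for train, test in KFold(n_splits=n_folds, shuffle=True, random_state=0).split(X):
        ps = LogisticRegression(max_iter=1000).fit(X[train], T[train])
        e_hat[test] = ps.predict_proba(X[test])[:, 1]
        treated, control = train[T[train] == 1], train[T[train] == 0]
        mu1[test] = LinearRegression().fit(X[treated], Y[treated]).predict(X[test])
        mu0[test] = LinearRegression().fit(X[control], Y[control]).predict(X[test])
    # Step 2: plug the out-of-fold predictions into the doubly robust score.
    e_hat = np.clip(e_hat, eps, 1 - eps)   # same trimming workaround as above
    score = mu1 - mu0 + T * (Y - mu1) / e_hat - (1 - T) * (Y - mu0) / (1 - e_hat)
    return score.mean(), score.std(ddof=1) / np.sqrt(n)   # ATE estimate, std. error
```

The standard error returned here is the usual one based on the empirical variance of the doubly robust score; it also becomes unreliable when many propensities must be clipped, which is one symptom of the near-0/1 propensity regime discussed in the main abstract.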