Automatic Double Machine Learning for Continuous Treatment Effects
- URL: http://arxiv.org/abs/2104.10334v1
- Date: Wed, 21 Apr 2021 03:17:40 GMT
- Title: Automatic Double Machine Learning for Continuous Treatment Effects
- Authors: Sylvia Klosin
- Abstract summary: We introduce and prove normality for a new nonparametric estimator of continuous treatment effects.
We estimate the average dose-response function - the expected value of an outcome of interest at a particular level of the treatment.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce and prove asymptotic normality for a new
nonparametric estimator of continuous treatment effects. Specifically, we
estimate the average dose-response function - the expected value of an outcome
of interest at a particular level of the treatment. We utilize tools from
both the double debiased machine learning (DML) and the automatic double
machine learning (ADML) literatures to construct our estimator. Our estimator
utilizes a novel debiasing method that leads to nice theoretical stability and
balancing properties. In simulations our estimator performs well compared to
current methods.
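The abstract leaves the estimator's construction implicit. As a rough point of reference only, the sketch below implements a generic cross-fitted, kernel-localized double ML estimator of the dose-response at a point t0. It is not Klosin's ADML estimator (the paper learns the debiasing/balancing weights automatically, whereas this sketch plugs in an estimated conditional density), and all model and bandwidth choices are illustrative.

```python
# Hedged sketch: cross-fitted, kernel-localized double ML estimate of
# theta(t0) = E[ E[Y | T = t0, X] ]. Not the paper's ADML estimator.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def dose_response_dml(Y, T, X, t0, h=0.5, n_splits=5, seed=0):
    n = len(Y)
    psi = np.zeros(n)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Outcome regression gamma(t, x) = E[Y | T = t, X = x], fit on the training fold.
        gamma = GradientBoostingRegressor().fit(
            np.column_stack([T[train], X[train]]), Y[train])
        # Crude plug-in conditional density f(t0 | x): Gaussian around a regression of T on X.
        t_model = GradientBoostingRegressor().fit(X[train], T[train])
        sigma = np.std(T[train] - t_model.predict(X[train]))
        mu_test = t_model.predict(X[test])
        f_t0 = gaussian_kernel((t0 - mu_test) / sigma) / sigma
        # Plug-in and kernel-weighted debiasing terms, evaluated on the held-out fold.
        g_t0 = gamma.predict(np.column_stack([np.full(len(test), t0), X[test]]))
        w = gaussian_kernel((T[test] - t0) / h) / (h * np.clip(f_t0, 1e-3, None))
        psi[test] = g_t0 + w * (Y[test] - g_t0)
    theta = psi.mean()
    se = psi.std(ddof=1) / np.sqrt(n)  # naive standard error; ignores bandwidth effects
    return theta, se

# Toy usage on simulated data.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
T = X[:, 0] + rng.normal(size=2000)
Y = np.sin(T) + X[:, 1] + rng.normal(size=2000)
print(dose_response_dml(Y, T, X, t0=1.0))
```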
Related papers
- Optimizing Pretraining Data Mixtures with LLM-Estimated Utility [52.08428597962423]
Large Language Models improve with increasing amounts of high-quality training data.
We find token-counts outperform manual and learned mixes, indicating that simple approaches for dataset size and diversity are surprisingly effective.
We propose two complementary approaches: UtiliMax, which extends token-based heuristics by incorporating utility estimates from reduced-scale ablations, achieving up to a 10.6x speedup over manual baselines; and Model Estimated Data Utility (MEDU), which leverages LLMs to estimate data utility from small samples, matching ablation-based performance while reducing computational requirements by ~200x.
arXiv Detail & Related papers (2025-01-20T21:10:22Z)
- Semiparametric inference for impulse response functions using double/debiased machine learning [49.1574468325115]
We introduce a machine learning estimator for the impulse response function (IRF) in settings where a time series of interest is subjected to multiple discrete treatments.
The proposed estimator can rely on fully nonparametric relations between treatment and outcome variables, opening up the possibility to use flexible machine learning approaches to estimate IRFs.
arXiv Detail & Related papers (2024-11-15T07:42:02Z)
- Automatic doubly robust inference for linear functionals via calibrated debiased machine learning [0.9694940903078658]
We propose a calibrated debiased machine learning (C-DML) estimator for doubly robust inference.
A C-DML estimator maintains linearity when either the outcome regression or the Riesz representer of the linear functional is estimated sufficiently well.
Our theoretical and empirical results support the use of C-DML to mitigate bias arising from the inconsistent or slow estimation of nuisance functions.
arXiv Detail & Related papers (2024-11-05T03:32:30Z)
- Improving the Finite Sample Estimation of Average Treatment Effects using Double/Debiased Machine Learning with Propensity Score Calibration [0.0]
This paper investigates the use of probability calibration approaches within the Double/debiased machine learning framework.
We show that calibrating propensity scores may significantly reduce the root mean squared error of DML estimates.
We showcase it in an empirical example and provide conditions under which calibration does not alter the properties of the DML estimator.
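A minimal sketch of the general idea, assuming sklearn-style models (illustrative, not the paper's exact pipeline): wrap the propensity model in isotonic calibration, then let the calibrated, clipped scores replace the raw scores inside the usual DML/AIPW moment.

```python
# Hedged sketch: isotonic calibration of propensity scores; the calibrated
# scores would then enter the standard DML/AIPW score for the ATE.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import GradientBoostingClassifier

def calibrated_propensity(X_train, D_train, X_eval, clip=1e-2):
    # Internal 3-fold split fits the classifier and the isotonic map on
    # disjoint data, which keeps the calibration honest.
    model = CalibratedClassifierCV(GradientBoostingClassifier(),
                                   method="isotonic", cv=3)
    model.fit(X_train, D_train)
    e_hat = model.predict_proba(X_eval)[:, 1]
    return np.clip(e_hat, clip, 1 - clip)  # trim to avoid extreme IPW weights
```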
arXiv Detail & Related papers (2024-09-07T17:44:01Z)
- Estimating Distributional Treatment Effects in Randomized Experiments: Machine Learning for Variance Reduction [6.909352249236339]
We propose a novel regression adjustment method designed for estimating distributional treatment effect parameters in randomized experiments.
Our approach incorporates pre-treatment covariates into a distributional regression framework, utilizing machine learning techniques to improve the precision of distributional treatment effect estimators.
arXiv Detail & Related papers (2024-07-22T20:28:29Z)
- Improving Bias Correction Standards by Quantifying its Effects on Treatment Outcomes [54.18828236350544]
Propensity score matching (PSM) addresses selection biases by selecting comparable populations for analysis.
Different matching methods can produce significantly different Average Treatment Effects (ATE) for the same task, even when meeting all validation criteria.
To address this issue, we introduce a novel metric, A2A, to reduce the number of valid matches.
arXiv Detail & Related papers (2024-07-20T12:42:24Z)
- Calibrating doubly-robust estimators with unbalanced treatment assignment [0.0]
We propose a simple extension of the DML estimator which undersamples data for propensity score modeling.
The paper provides theoretical results showing that the estimator retains the original DML estimator's asymptotic properties, with the propensity scores recalibrated to match the original treatment distribution.
arXiv Detail & Related papers (2024-03-03T18:40:11Z)
- A Semiparametric Instrumented Difference-in-Differences Approach to Policy Learning [2.1989182578668243]
We propose a general instrumented difference-in-differences (DiD) approach for learning the optimal treatment policy.
Specifically, we establish identification results using a binary instrumental variable (IV) when the parallel trends assumption fails to hold.
We also construct a Wald estimator, novel inverse probability estimators, and a class of semiparametric efficient and multiply robust estimators.
arXiv Detail & Related papers (2023-10-14T09:38:32Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
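For concreteness, a minimal cross-fit AIPW (doubly robust) estimator of the ACE might look like the sketch below; the model choices, clipping threshold, and standard-error formula are illustrative assumptions, not the simulation study's exact setup.

```python
# Hedged sketch of a cross-fit doubly robust (AIPW) estimator of the ACE.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def aipw_crossfit(Y, D, X, n_splits=5, seed=0, clip=1e-2):
    psi = np.zeros(len(Y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Nuisances fit on the training folds only (cross-fitting).
        e_model = RandomForestClassifier(random_state=seed).fit(X[train], D[train])
        mu1 = RandomForestRegressor(random_state=seed).fit(
            X[train][D[train] == 1], Y[train][D[train] == 1])
        mu0 = RandomForestRegressor(random_state=seed).fit(
            X[train][D[train] == 0], Y[train][D[train] == 0])
        e = np.clip(e_model.predict_proba(X[test])[:, 1], clip, 1 - clip)
        m1, m0 = mu1.predict(X[test]), mu0.predict(X[test])
        # Doubly robust score evaluated on the held-out fold.
        psi[test] = (m1 - m0
                     + D[test] * (Y[test] - m1) / e
                     - (1 - D[test]) * (Y[test] - m0) / (1 - e))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(Y))  # ACE and std. error
```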
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
- Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances, but for parameter-dependent nuisances such as the (L)QTE it would require estimating the nuisance at every candidate parameter value.
We propose localized debiased machine learning (LDML), which avoids this burdensome step and needs only a nuisance estimate at a single initial rough guess for the parameter.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)