Automatic doubly robust inference for linear functionals via calibrated debiased machine learning
- URL: http://arxiv.org/abs/2411.02771v1
- Date: Tue, 05 Nov 2024 03:32:30 GMT
- Title: Automatic doubly robust inference for linear functionals via calibrated debiased machine learning
- Authors: Lars van der Laan, Alex Luedtke, Marco Carone
- Abstract summary: We propose calibrated debiased machine learning (C-DML) estimators that are doubly robust asymptotically linear, enabling doubly robust inference.
A C-DML estimator maintains asymptotic linearity when either the outcome regression or the Riesz representer of the linear functional is estimated sufficiently well, allowing the other to be estimated arbitrarily slowly or even inconsistently.
Our theoretical and empirical results support the use of C-DML to mitigate bias arising from the inconsistent or slow estimation of nuisance functions.
- Score: 0.9694940903078658
- Abstract: In causal inference, many estimands of interest can be expressed as a linear functional of the outcome regression function; this includes, for example, average causal effects of static, dynamic and stochastic interventions. For learning such estimands, in this work, we propose novel debiased machine learning estimators that are doubly robust asymptotically linear, thus providing not only doubly robust consistency but also facilitating doubly robust inference (e.g., confidence intervals and hypothesis tests). To do so, we first establish a key link between calibration, a machine learning technique typically used in prediction and classification tasks, and the conditions needed to achieve doubly robust asymptotic linearity. We then introduce calibrated debiased machine learning (C-DML), a unified framework for doubly robust inference, and propose a specific C-DML estimator that integrates cross-fitting, isotonic calibration, and debiased machine learning estimation. A C-DML estimator maintains asymptotic linearity when either the outcome regression or the Riesz representer of the linear functional is estimated sufficiently well, allowing the other to be estimated at arbitrarily slow rates or even inconsistently. We propose a simple bootstrap-assisted approach for constructing doubly robust confidence intervals. Our theoretical and empirical results support the use of C-DML to mitigate bias arising from the inconsistent or slow estimation of nuisance functions.
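For concreteness, below is a minimal sketch of a C-DML-style estimator of the average treatment effect, combining cross-fitting, isotonic calibration of both nuisance estimates, and the doubly robust (AIPW) score. The learners, the calibrate-on-the-training-fold shortcut, and the name `cdml_ate` are illustrative assumptions, not the authors' implementation; the paper's bootstrap-assisted confidence interval is omitted here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import KFold

def cdml_ate(X, A, Y, n_splits=5, seed=0):
    """Cross-fitted AIPW estimate of the ATE with isotonic calibration of
    the outcome regressions and the propensity score (illustrative sketch)."""
    n = len(Y)
    phi = np.zeros(n)  # estimated influence-function values
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        Xtr, Atr, Ytr = X[train], A[train], Y[train]
        # Initial nuisance fits on the training fold.
        mu1 = RandomForestRegressor(random_state=seed).fit(Xtr[Atr == 1], Ytr[Atr == 1])
        mu0 = RandomForestRegressor(random_state=seed).fit(Xtr[Atr == 0], Ytr[Atr == 0])
        ps = RandomForestClassifier(random_state=seed).fit(Xtr, Atr)
        # Isotonic calibration of each nuisance (here, naively, on the same
        # training fold; a further split or cross-calibration is cleaner).
        cal_ps = IsotonicRegression(y_min=1e-3, y_max=1 - 1e-3, out_of_bounds="clip")
        cal_ps.fit(ps.predict_proba(Xtr)[:, 1], Atr)
        cal_mu1 = IsotonicRegression(out_of_bounds="clip")
        cal_mu1.fit(mu1.predict(Xtr[Atr == 1]), Ytr[Atr == 1])
        cal_mu0 = IsotonicRegression(out_of_bounds="clip")
        cal_mu0.fit(mu0.predict(Xtr[Atr == 0]), Ytr[Atr == 0])
        # Calibrated held-out predictions plugged into the doubly robust score.
        p = cal_ps.predict(ps.predict_proba(X[test])[:, 1])
        m1 = cal_mu1.predict(mu1.predict(X[test]))
        m0 = cal_mu0.predict(mu0.predict(X[test]))
        phi[test] = (m1 - m0
                     + A[test] * (Y[test] - m1) / p
                     - (1 - A[test]) * (Y[test] - m0) / (1 - p))
    return phi.mean(), phi.std(ddof=1) / np.sqrt(n)  # estimate, naive SE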
Related papers
- Automatic debiasing of neural networks via moment-constrained learning [0.0]
Naively learning the regression function and taking a sample mean of the target functional results in biased estimators.
We propose moment-constrained learning as a new approach to learning the Riesz representer (RR) that addresses some shortcomings of automatic debiasing.
arXiv Detail & Related papers (2024-09-29T20:56:54Z)
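As a point of reference, here is a minimal sketch of the standard automatic-debiasing objective for the ATE's Riesz representer, which the moment-constrained approach above modifies; the linear feature map, ridge term, and the name `fit_riesz_linear` are illustrative assumptions.

```python
import numpy as np

def fit_riesz_linear(X, A, lam=1e-3):
    """Fit alpha(a, x) = theta . feats(a, x) by minimizing the empirical
    RR loss (1/n) sum alpha(A_i, X_i)^2 - (2/n) sum [alpha(1, X_i) - alpha(0, X_i)],
    plus a small ridge term for numerical stability."""
    def feats(a, x):
        # Intercept, treatment, covariates, treatment-covariate interactions.
        return np.column_stack([np.ones(len(a)), a, x, a[:, None] * x])
    Phi = feats(A.astype(float), X)
    Phi1 = feats(np.ones(len(A)), X)
    Phi0 = feats(np.zeros(len(A)), X)
    n, d = Phi.shape
    # First-order condition: (Phi'Phi / n + lam I) theta = mean(Phi1 - Phi0).
    G = Phi.T @ Phi / n + lam * np.eye(d)
    b = (Phi1 - Phi0).mean(axis=0)
    theta = np.linalg.solve(G, b)
    return lambda a, x: feats(np.asarray(a, dtype=float), x) @ theta
```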
- Improving the Finite Sample Performance of Double/Debiased Machine Learning with Propensity Score Calibration [0.0]
Double/debiased machine learning (DML) uses a doubly robust score function that relies on predictions of nuisance functions.
Estimators relying on doubly robust score functions are highly sensitive to errors in the propensity score predictions.
This paper investigates the use of probability calibration approaches within the DML framework.
arXiv Detail & Related papers (2024-09-07T17:44:01Z)
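Below is a minimal sketch of one calibration approach in the spirit of the paper above, via scikit-learn's CalibratedClassifierCV with Platt scaling; the gradient-boosting learner, clipping thresholds, and in-sample prediction shortcut are illustrative assumptions.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import GradientBoostingClassifier

def calibrated_propensity(X, A, method="sigmoid", cv=5):
    """Cross-validated, calibrated propensity scores P(A=1 | X)."""
    model = CalibratedClassifierCV(GradientBoostingClassifier(), method=method, cv=cv)
    model.fit(X, A)
    p = model.predict_proba(X)[:, 1]
    # Trimming guards the inverse-propensity weights in the DML score;
    # in a full pipeline these predictions would come from held-out folds.
    return np.clip(p, 1e-3, 1 - 1e-3)
```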
- Doubly Robust Proximal Causal Learning for Continuous Treatments [56.05592840537398]
We propose a kernel-based doubly robust causal learning estimator for continuous treatments.
We show that its oracle form is a consistent approximation of the influence function.
We then provide a comprehensive convergence analysis in terms of the mean square error.
arXiv Detail & Related papers (2023-09-22T12:18:53Z)
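For intuition, here is a minimal sketch of a kernel-weighted doubly robust estimate of a dose-response value E[Y(a0)], assuming pre-fitted nuisances: an outcome regression mu(a, x) and a conditional treatment density cond_dens(a, x). The Gaussian kernel, bandwidth, and these interfaces are illustrative assumptions; the proximal estimator above additionally handles unmeasured confounding via proxy (bridge) functions, which this sketch does not.

```python
import numpy as np

def kernel_dr_dose(a0, A, X, Y, mu, cond_dens, h=0.5):
    """Kernel-localized AIPW-style estimate of E[Y(a0)] under no unmeasured
    confounding; mu(a, x) and cond_dens(a, x) are pre-fitted nuisances."""
    # The Gaussian kernel concentrates the correction on observations near a0.
    K = np.exp(-0.5 * ((A - a0) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    plug_in = mu(np.full_like(A, a0, dtype=float), X)
    correction = K / cond_dens(A, X) * (Y - mu(A, X))
    return float(np.mean(plug_in + correction))
```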
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
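As background, this is a minimal sketch of a single semi-gradient Q-learning update with linear function approximation Q(s, a) = w . phi(s, a); the feature map and step size are generic placeholders, and the paper's contribution is a specific exploration scheme layered on top of such updates.

```python
import numpy as np

def linear_q_update(w, phi, s, a, r, s_next, actions, gamma=0.99, lr=0.05):
    """One semi-gradient Q-learning step for Q(s, a) = w . phi(s, a)."""
    q_sa = w @ phi(s, a)
    q_next = max(w @ phi(s_next, b) for b in actions)  # greedy backup
    td_error = r + gamma * q_next - q_sa
    return w + lr * td_error * phi(s, a)
```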
- Automatic Debiased Machine Learning for Dynamic Treatment Effects and General Nested Functionals [23.31865419578237]
We extend the idea of automated debiased machine learning to the dynamic treatment regime and more generally to nested functionals.
We show that the multiply robust formula for the dynamic treatment regime with discrete treatments can be re-stated in terms of a Riesz representer characterization of nested mean regressions.
arXiv Detail & Related papers (2022-03-25T19:54:17Z)
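For concreteness, here is a minimal sketch of the nested (sequential) mean regressions that the Riesz representer characterization above debiases, for the mean counterfactual outcome under always-treat in a two-period setting; the gradient-boosting learner and variable names are illustrative assumptions, and the RR-based debiasing corrections are omitted.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def nested_means_always_treat(X1, A1, X2, A2, Y):
    """Plug-in estimate of E[Y(a1=1, a2=1)] by backward nested regressions."""
    H = np.column_stack([X1, X2])  # full history before the second treatment
    # Inner nest: regress Y on history among units actually treated at time 2.
    m2 = GradientBoostingRegressor().fit(H[A2 == 1], Y[A2 == 1])
    # Evaluate the inner nest for everyone, then regress it on baseline
    # covariates among units treated at time 1.
    q2 = m2.predict(H)
    m1 = GradientBoostingRegressor().fit(X1[A1 == 1], q2[A1 == 1])
    # Plug-in mean; automatic debiasing adds RR-based correction terms.
    return float(m1.predict(X1).mean())
```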
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for bias-constrained estimation (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
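A minimal sketch of the bias-constrained loss idea: augment MSE with a penalty on the empirical bias, estimated by averaging errors across repeated measurements of the same underlying parameter. The array layout and penalty weight are illustrative assumptions, not the paper's exact training objective.

```python
import numpy as np

def bce_loss(theta_hat, theta, lam=1.0):
    """MSE plus a squared empirical-bias penalty. Arrays have shape
    (groups, repeats, d): each group shares one true theta, so averaging
    errors over repeats estimates the estimator's bias at that theta."""
    err = theta_hat - theta
    mse = np.mean(np.sum(err ** 2, axis=-1))
    bias = err.mean(axis=1)                        # (groups, d) empirical bias
    bias_pen = np.mean(np.sum(bias ** 2, axis=-1))
    return mse + lam * bias_pen
```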
- Mostly Harmless Machine Learning: Learning Optimal Instruments in Linear IV Models [3.7599363231894176]
We offer theoretical results that justify incorporating machine learning in the standard linear instrumental variable setting.
We use machine learning, combined with sample-splitting, to predict the treatment variable from the instrument.
This allows the researcher to extract nonlinear covariation between the treatment and the instrument.
arXiv Detail & Related papers (2020-11-12T01:55:11Z)
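Here is a minimal sketch of the split-sample procedure described above for a single endogenous treatment: learn the first stage with ML on one half, then use the predicted treatment as the instrument on the other half. The random-forest learner and single split are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ml_iv_estimate(Z, T, Y, seed=0):
    """Split-sample IV: ML first stage on one half, Wald-style IV estimate
    with the predicted treatment as instrument on the other half."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    a, b = idx[: len(Y) // 2], idx[len(Y) // 2:]
    first_stage = RandomForestRegressor(random_state=seed).fit(Z[a], T[a])
    t_hat = first_stage.predict(Z[b])  # constructed instrument
    # Just-identified IV estimate: cov(t_hat, Y) / cov(t_hat, T).
    return np.cov(t_hat, Y[b])[0, 1] / np.cov(t_hat, T[b])[0, 1]
```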
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators of the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
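One design choice such simulation studies compare is how folds are assigned; below is a minimal sketch of a double cross-fit split pattern in which the two nuisance models are fit on different folds and the score is evaluated on a third, with roles rotated. This particular three-fold rotation is an illustrative assumption.

```python
import numpy as np

def double_cross_fit_folds(n, seed=0):
    """Assign three folds so the propensity model, the outcome model, and
    the score evaluation each use disjoint data, rotating the roles."""
    rng = np.random.default_rng(seed)
    f = np.array_split(rng.permutation(n), 3)
    # Each tuple: (propensity fold, outcome-model fold, evaluation fold).
    return [(f[0], f[1], f[2]), (f[1], f[2], f[0]), (f[2], f[0], f[1])]
```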
- Distributional Robustness and Regularization in Reinforcement Learning [62.23012916708608]
We introduce a new regularizer for empirical value functions and show that it lower bounds the Wasserstein distributionally robust value function.
This suggests using regularization as a practical tool for dealing with external uncertainty in reinforcement learning.
arXiv Detail & Related papers (2020-03-05T19:56:23Z)
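For intuition only: by Kantorovich-Rubinstein duality, the worst-case mean of an L-Lipschitz value function over an eps-Wasserstein ball around the empirical next-state distribution is lower-bounded by the empirical mean minus eps * L, which is the sense in which a regularizer can stand in for distributional robustness. The pessimistic backup below is a generic sketch of that bound, not the paper's specific regularizer.

```python
import numpy as np

def pessimistic_backup(rewards, next_values, gamma=0.99, eps=0.1, lip_const=1.0):
    """Lower bound on the Wasserstein-robust Bellman backup: for an
    L-Lipschitz value function, the infimum of E[V(s')] over the eps-ball
    is at least the empirical mean minus eps * L."""
    return np.mean(rewards) + gamma * (np.mean(next_values) - eps * lip_const)
```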
- Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances; for quantile estimands it requires estimating the entire conditional distribution of the outcome.
We propose localized debiased machine learning (LDML), which avoids this burdensome step by requiring the nuisances only at a single initial estimate of the quantile.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
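To make the localization concrete, here is a minimal sketch of a doubly robust estimating equation for the tau-quantile of Y(1), in which the conditional CDF nuisance is only needed at a preliminary quantile guess q0. The pre-fitted nuisance arrays e (propensity scores) and F_q0 (conditional CDF evaluated at q0), and the brentq root search, are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import brentq

def ldml_quantile_y1(A, Y, tau, e, F_q0):
    """Solve mean[A/e * (1{Y<=q} - F_q0) + F_q0] = tau for q, where e are
    propensity scores and F_q0 is P(Y <= q0 | X, A=1) at an initial guess
    q0, both pre-fitted on separate data."""
    def moment(q):
        ind = (Y <= q).astype(float)
        return np.mean(A / e * (ind - F_q0) + F_q0) - tau
    # moment is a monotone step function of q; brentq locates the crossing.
    return brentq(moment, Y.min(), Y.max())
```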
This list is automatically generated from the titles and abstracts of the papers on this site.