Doubly Robust Proximal Causal Learning for Continuous Treatments
- URL: http://arxiv.org/abs/2309.12819v3
- Date: Mon, 11 Mar 2024 03:09:55 GMT
- Title: Doubly Robust Proximal Causal Learning for Continuous Treatments
- Authors: Yong Wu, Yanwei Fu, Shouyan Wang, Xinwei Sun
- Abstract summary: We propose a kernel-based doubly robust causal learning estimator for continuous treatments.
We show that its oracle form is a consistent approximation of the influence function.
We then provide a comprehensive convergence analysis in terms of the mean square error.
- Score: 56.05592840537398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Proximal causal learning is a promising framework for identifying the causal
effect under the existence of unmeasured confounders. Within this framework,
the doubly robust (DR) estimator was derived and has shown its effectiveness in
estimation, especially when the model assumption is violated. However, the
current form of the DR estimator is restricted to binary treatments, while the
treatment can be continuous in many real-world applications. The primary
obstacle to continuous treatments resides in the delta function present in the
original DR estimator, making it infeasible in causal effect estimation and
introducing a heavy computational burden in nuisance function estimation. To
address these challenges, we propose a kernel-based DR estimator that handles
continuous treatments well. Leveraging its smoothness, we show that its
oracle form is a consistent approximation of the influence function. Further,
we propose a new approach to efficiently solve the nuisance functions. We then
provide a comprehensive convergence analysis in terms of the mean square error.
We demonstrate the utility of our estimator on synthetic datasets and
real-world applications.
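The core idea in the abstract — replacing the intractable delta function δ(A − a) with a smooth kernel K_h(A − a) — can be illustrated with a minimal sketch. This is not the paper's full proximal DR estimator (which also involves nuisance bridge functions); the function names and the Nadaraya-Watson-style weighting below are illustrative assumptions only.

```python
import numpy as np

def gaussian_kernel(u, h):
    # Gaussian kernel with bandwidth h: K_h(u) -> delta(u) as h -> 0,
    # but K_h stays smooth and computable for any finite h.
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def kernel_weighted_mean(A, Y, a, h):
    # Estimate E[Y | A = a] for a continuous treatment A by weighting each
    # sample with K_h(A - a) -- the role the delta function delta(A - a)
    # would play, made feasible by smoothing.
    w = gaussian_kernel(A - a, h)
    return np.sum(w * Y) / np.sum(w)
```

As h shrinks, the estimator concentrates on samples with treatment values near a, trading variance for bias in the usual kernel-smoothing fashion.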
Related papers
- Automatic doubly robust inference for linear functionals via calibrated debiased machine learning [0.9694940903078658]
We propose a debiased machine learning estimator for doubly robust inference.
The calibrated DML (C-DML) estimator maintains linearity when either the outcome regression or the Riesz representer of the linear functional is estimated sufficiently well.
Our theoretical and empirical results support the use of C-DML to mitigate bias arising from the inconsistent or slow estimation of nuisance functions.
arXiv Detail & Related papers (2024-11-05T03:32:30Z) - Learning Representations of Instruments for Partial Identification of Treatment Effects [23.811079163083303]
We leverage arbitrary (potentially high-dimensional) instruments to estimate bounds on the conditional average treatment effect (CATE).
We propose a novel approach for partial identification through a mapping of instruments to a discrete representation space.
We derive a two-step procedure that learns tight bounds using a tailored neural partitioning of the latent instrument space.
arXiv Detail & Related papers (2024-10-11T16:48:32Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z) - Proximal Causal Learning of Conditional Average Treatment Effects [0.0]
We propose a tailored two-stage loss function for learning heterogeneous treatment effects.
Our proposed estimator can be implemented by off-the-shelf loss-minimizing machine learning methods.
arXiv Detail & Related papers (2023-01-26T02:56:36Z) - Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning [59.02006924867438]
Off-policy evaluation and learning (OPE/L) use offline observational data to make better decisions, but can be unreliable when the offline and deployment environments differ.
Recent work proposed distributionally robust OPE/L (DROPE/L) to remedy this, but the proposal relies on inverse-propensity weighting.
We propose the first DR algorithms for DROPE/L with KL-divergence uncertainty sets.
arXiv Detail & Related papers (2022-02-19T20:00:44Z) - Assessment of Treatment Effect Estimators for Heavy-Tailed Data [70.72363097550483]
A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized control trials (RCTs) is the lack of ground truth (or validation set) to test their performance.
We provide a novel cross-validation-like methodology to address this challenge.
We evaluate our methodology across 709 RCTs implemented in the Amazon supply chain.
arXiv Detail & Related papers (2021-12-14T17:53:01Z) - Multiply Robust Causal Mediation Analysis with Continuous Treatments [12.196869756333797]
We propose an estimator suitable for settings with continuous treatments inspired by the influence function-based estimator of Tchetgen Tchetgen and Shpitser (2012).
Our proposed approach employs cross-fitting, relaxing the smoothness requirements on the nuisance functions and allowing them to be estimated at slower rates than the target parameter.
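Cross-fitting, mentioned above, is a standard sample-splitting device: each nuisance prediction comes from a model fit on folds that exclude that sample, which is what lets the nuisances converge more slowly than the target parameter. A minimal numpy-only sketch (the function name and the `fit`-returns-a-predictor convention are assumptions for illustration):

```python
import numpy as np

def cross_fit_predictions(X, Y, fit, n_folds=2, seed=0):
    # Cross-fitting: for each fold, fit the nuisance model on the remaining
    # folds and predict on the held-out fold, so no sample is predicted by a
    # model that was trained on it.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    out = np.empty(len(Y))
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        predict = fit(X[train], Y[train])  # `fit` returns a prediction function
        out[fold] = predict(X[fold])
    return out
```

The out-of-fold predictions can then be plugged into the influence-function-based estimator in place of in-sample nuisance fits.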
arXiv Detail & Related papers (2021-05-19T16:58:57Z) - Causal Estimation with Functional Confounders [24.54466899641308]
Causal inference relies on two fundamental assumptions: ignorability and positivity.
We study causal inference when the true confounder value can be expressed as a function of the observed data.
In this setting, ignorability is satisfied; however, positivity is violated, and causal inference is impossible in general.
arXiv Detail & Related papers (2021-02-17T02:16:21Z) - Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome nuisance-estimation step by localizing at an initial estimate of the target.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.