Structure-agnostic Optimality of Doubly Robust Learning for Treatment
Effect Estimation
- URL: http://arxiv.org/abs/2402.14264v2
- Date: Sat, 2 Mar 2024 02:00:58 GMT
- Title: Structure-agnostic Optimality of Doubly Robust Learning for Treatment
Effect Estimation
- Authors: Jikai Jin and Vasilis Syrgkanis
- Abstract summary: Average treatment effect estimation is the most central problem in causal inference, with applications to numerous disciplines.
We adopt the recently introduced structure-agnostic framework of statistical lower bounds, which imposes no structural properties on the nuisance functions.
We prove the statistical optimality of the celebrated and widely used doubly robust estimators for both the Average Treatment Effect (ATE) and the Average Treatment Effect on the Treated (ATT).
- Score: 27.630223763160515
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Average treatment effect estimation is the most central problem in causal
inference, with applications to numerous disciplines. While many estimation
strategies have been proposed in the literature, the statistical optimality of
these methods remains an open area of investigation, especially in
regimes where these methods do not achieve parametric rates. In this paper, we
adopt the recently introduced structure-agnostic framework of statistical lower
bounds, which imposes no structural properties on the nuisance functions other
than access to black-box estimators that achieve some statistical estimation
rate. This framework is particularly appealing when one is only willing to
consider estimation strategies that use non-parametric regression and
classification oracles as black-box sub-processes. Within this framework, we
prove the statistical optimality of the celebrated and widely used doubly
robust estimators for both the Average Treatment Effect (ATE) and the Average
Treatment Effect on the Treated (ATT), as well as weighted variants of the
former, which arise in policy evaluation.
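The doubly robust (AIPW) estimator the abstract refers to combines an outcome-regression term with an inverse-propensity correction, so the estimate remains consistent if either nuisance estimate is accurate. A minimal sketch of the standard ATE form, assuming the black-box nuisance estimates have already been computed (all names here are illustrative, not from the paper):

```python
import numpy as np

def aipw_ate(y, t, mu0_hat, mu1_hat, e_hat):
    """Doubly robust (AIPW) estimate of the average treatment effect.

    y       -- observed outcomes, shape (n,)
    t       -- binary treatment indicators, shape (n,)
    mu0_hat -- black-box estimates of E[Y | T=0, X], shape (n,)
    mu1_hat -- black-box estimates of E[Y | T=1, X], shape (n,)
    e_hat   -- black-box propensity estimates P(T=1 | X), shape (n,)
    """
    # Outcome-model difference plus inverse-propensity correction terms.
    scores = (mu1_hat - mu0_hat
              + t * (y - mu1_hat) / e_hat
              - (1 - t) * (y - mu0_hat) / (1 - e_hat))
    return scores.mean()
```

In practice the nuisance estimates are typically fit on held-out folds (cross-fitting) before being plugged into this formula.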
Related papers
- Evaluating the Effectiveness of Index-Based Treatment Allocation [42.040099398176665]
When resources are scarce, an allocation policy is needed to decide who receives a resource.
This paper introduces methods to evaluate index-based allocation policies using data from a randomized control trial.
arXiv Detail & Related papers (2024-02-19T01:55:55Z) - Targeted Machine Learning for Average Causal Effect Estimation Using the
Front-Door Functional [3.0232957374216953]
Evaluating the average causal effect (ACE) of a treatment on an outcome often involves overcoming the challenges posed by confounding factors in observational studies.
Here, we introduce novel estimation strategies for the front-door criterion based on the targeted minimum loss-based estimation theory.
We demonstrate the applicability of these estimators to analyze the effect of early stage academic performance on future yearly income.
arXiv Detail & Related papers (2023-12-15T22:04:53Z) - Distributional Off-Policy Evaluation for Slate Recommendations [19.22972996548473]
We propose an estimator for the complete off-policy performance distribution for slates.
We validate the efficacy of our method empirically on synthetic data as well as on a slate recommendation simulator constructed from real-world data.
arXiv Detail & Related papers (2023-08-27T17:58:32Z) - Uncertainty-Aware Instance Reweighting for Off-Policy Learning [63.31923483172859]
We propose an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for improved off-policy learning.
Experimental results on synthetic and three real-world recommendation datasets demonstrate the advantageous sample efficiency of the proposed UIPS estimator.
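UIPS builds on the standard inverse propensity score (IPS) estimator for off-policy evaluation. A minimal sketch of the vanilla IPS estimator in a logged-bandit setting (this is the textbook baseline, not the paper's uncertainty-aware method; names are illustrative):

```python
import numpy as np

def ips_value(rewards, logging_probs, target_probs):
    """Vanilla IPS estimate of a target policy's value from logged bandit data.

    rewards       -- observed rewards for the logged actions, shape (n,)
    logging_probs -- probability the logging policy assigned to each logged action
    target_probs  -- probability the target policy assigns to the same action
    """
    # Reweight each logged reward by how much more (or less) likely
    # the target policy is to take the logged action.
    weights = target_probs / logging_probs
    return np.mean(weights * rewards)
```

The weights' variance blows up when the logging probabilities are small, which is exactly the issue uncertainty-aware reweighting schemes aim to mitigate.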
arXiv Detail & Related papers (2023-03-11T11:42:26Z) - Systematic Evaluation of Predictive Fairness [60.0947291284978]
Mitigating bias in training on biased datasets is an important open problem.
We examine the performance of various debiasing methods across multiple tasks.
We find that data conditions have a strong influence on relative model performance.
arXiv Detail & Related papers (2022-10-17T05:40:13Z) - Assessment of Treatment Effect Estimators for Heavy-Tailed Data [70.72363097550483]
A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized control trials (RCTs) is the lack of ground truth (or validation set) to test their performance.
We provide a novel cross-validation-like methodology to address this challenge.
We evaluate our methodology across 709 RCTs implemented in the Amazon supply chain.
arXiv Detail & Related papers (2021-12-14T17:53:01Z) - Stochastic Intervention for Causal Effect Estimation [7.015556609676951]
We propose a new propensity score and intervention effect estimator (SIE) to estimate intervention effects.
We also design a customized genetic algorithm specific to intervention effect (Ge-SIO) with the aim of providing causal evidence for decision making.
Our proposed measures and algorithms can achieve a significant performance lift in comparison with state-of-the-art baselines.
arXiv Detail & Related papers (2021-05-27T01:12:03Z) - Bootstrapping Statistical Inference for Off-Policy Evaluation [43.79456564713911]
We study the use of bootstrapping in off-policy evaluation (OPE).
We propose a bootstrapping FQE method for inferring the distribution of the policy evaluation error and show that this method is efficient and consistent for off-policy statistical inference.
We evaluate the bootstrapping method in classical RL environments for confidence interval estimation, estimating the variance of an off-policy evaluator, and estimating the correlation between multiple off-policy evaluators.
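The generic recipe underlying such methods is the percentile bootstrap: resample the logged data with replacement, recompute the evaluation statistic on each resample, and read a confidence interval off the empirical quantiles. A minimal sketch of that generic recipe, assuming any scalar-valued estimator (this is not the paper's FQE-specific procedure):

```python
import numpy as np

def bootstrap_ci(estimator, data, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a scalar estimator.

    estimator -- function mapping a data array to a scalar estimate
    data      -- logged data as a NumPy array, shape (n, ...)
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    # Recompute the estimate on n_boot resamples drawn with replacement.
    stats = np.array([estimator(data[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```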
arXiv Detail & Related papers (2021-02-06T16:45:33Z) - Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged datasets.
arXiv Detail & Related papers (2020-11-08T23:16:19Z) - Efficient Policy Learning from Surrogate-Loss Classification Reductions [65.91730154730905]
We consider the estimation problem given by a weighted surrogate-loss classification reduction of policy learning.
We show that, under a correct specification assumption, the weighted classification formulation need not be efficient for policy parameters.
We propose an estimation approach based on generalized method of moments, which is efficient for the policy parameters.
arXiv Detail & Related papers (2020-02-12T18:54:41Z) - Interpretable Off-Policy Evaluation in Reinforcement Learning by
Highlighting Influential Transitions [48.91284724066349]
Off-policy evaluation in reinforcement learning offers the chance of using observational data to improve future outcomes in domains such as healthcare and education.
Traditional measures such as confidence intervals may be insufficient due to noise, limited data and confounding.
We develop a method that could serve as a hybrid human-AI system, to enable human experts to analyze the validity of policy evaluation estimates.
arXiv Detail & Related papers (2020-02-10T00:26:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.