Combining T-learning and DR-learning: a framework for oracle-efficient
estimation of causal contrasts
- URL: http://arxiv.org/abs/2402.01972v1
- Date: Sat, 3 Feb 2024 00:47:50 GMT
- Title: Combining T-learning and DR-learning: a framework for oracle-efficient
estimation of causal contrasts
- Authors: Lars van der Laan, Marco Carone, Alex Luedtke
- Abstract summary: We introduce efficient plug-in (EP) learning, a novel framework for the estimation of heterogeneous causal contrasts.
EP-learners of the conditional average treatment effect and conditional relative risk outperform state-of-the-art competitors.
- Score: 1.0896141997814233
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce efficient plug-in (EP) learning, a novel framework for the
estimation of heterogeneous causal contrasts, such as the conditional average
treatment effect and conditional relative risk. The EP-learning framework
enjoys the same oracle-efficiency as Neyman-orthogonal learning strategies,
such as DR-learning and R-learning, while addressing some of their primary
drawbacks, including that (i) their practical applicability can be hindered by
loss function non-convexity; and (ii) they may suffer from poor performance and
instability due to inverse probability weighting and pseudo-outcomes that
violate bounds. To avoid these drawbacks, EP-learner constructs an efficient
plug-in estimator of the population risk function for the causal contrast,
thereby inheriting the stability and robustness properties of plug-in
estimation strategies like T-learning. Under reasonable conditions, EP-learners
based on empirical risk minimization are oracle-efficient, exhibiting
asymptotic equivalence to the minimizer of an oracle-efficient one-step
debiased estimator of the population risk function. In simulation experiments,
we illustrate that EP-learners of the conditional average treatment effect and
conditional relative risk outperform state-of-the-art competitors, including
T-learner, R-learner, and DR-learner. Open-source implementations of the
proposed methods are available in our R package hte3.
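For context, below is a minimal sketch of the two baseline strategies the abstract contrasts: a T-learner (plug-in regression in each arm) and a DR-learner (regression on AIPW pseudo-outcomes) for the conditional average treatment effect. This is only an illustration of the baselines, not of the EP-learner itself (the authors' hte3 R package is the reference implementation); the simulated data, model choices, and propensity clipping threshold are arbitrary assumptions made for the example, and cross-fitting is omitted for brevity.

```python
# Illustrative sketch (not the paper's EP-learner): T-learner vs. DR-learner
# for the CATE tau(x) = E[Y | A=1, X=x] - E[Y | A=0, X=x] on simulated data.
# All modeling choices here are assumptions made for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.normal(size=(n, p))
propensity = 1 / (1 + np.exp(-X[:, 0]))      # true propensity score
A = rng.binomial(1, propensity)               # treatment indicator
tau = X[:, 1]                                 # true CATE
Y = X[:, 0] + A * tau + rng.normal(size=n)    # observed outcome

# --- T-learner: fit separate outcome regressions per arm, then subtract ---
mu1 = GradientBoostingRegressor().fit(X[A == 1], Y[A == 1])
mu0 = GradientBoostingRegressor().fit(X[A == 0], Y[A == 0])
tau_T = mu1.predict(X) - mu0.predict(X)

# --- DR-learner: regress the AIPW pseudo-outcome on X (cross-fitting omitted) ---
pi_hat = np.clip(LogisticRegression().fit(X, A).predict_proba(X)[:, 1], 0.01, 0.99)
m1, m0 = mu1.predict(X), mu0.predict(X)
pseudo = (A / pi_hat - (1 - A) / (1 - pi_hat)) * (Y - np.where(A == 1, m1, m0)) + (m1 - m0)
tau_DR = GradientBoostingRegressor().fit(X, pseudo).predict(X)

print("T-learner  MSE vs. true CATE:", np.mean((tau_T - tau) ** 2))
print("DR-learner MSE vs. true CATE:", np.mean((tau_DR - tau) ** 2))
```

The abstract's concern is visible in the `pseudo` variable above: inverse-probability-weighted pseudo-outcomes can violate outcome bounds and become unstable when the estimated propensity is near 0 or 1, which is the motivation for instead building an efficient plug-in estimate of the population risk.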
Related papers
- C-Learner: Constrained Learning for Causal Inference and Semiparametric Statistics [5.395560682099634]
We propose a novel debiased estimator that achieves stable plug-in estimates with desirable properties.
Our constrained learning framework solves for the best plug-in estimator under the constraint that the first-order error with respect to the plugged-in quantity is zero.
Our estimator outperforms one-step estimation and targeting in challenging settings with limited overlap between treatment and control, and performs comparably otherwise.
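A rough formalization of the constraint described in the C-Learner summary above (the notation is an assumption for illustration, not taken from the paper): writing $\hat\theta_{\mathrm{plug}}(\eta)$ for the plug-in estimator built from nuisances $\eta$ and $\varphi(O;\eta,\theta)$ for its first-order (influence-function) correction term, the constrained estimator solves
$$\min_{\eta \in \mathcal{H}} \; L_n(\eta) \quad \text{subject to} \quad \frac{1}{n}\sum_{i=1}^{n} \varphi\big(O_i;\, \eta,\, \hat\theta_{\mathrm{plug}}(\eta)\big) = 0,$$
so that the usual one-step correction vanishes and the plug-in estimate is already debiased.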
arXiv Detail & Related papers (2024-05-15T16:38:28Z)
- Longitudinal Targeted Minimum Loss-based Estimation with Temporal-Difference Heterogeneous Transformer [7.451436112917229]
We propose a novel approach to estimate the counterfactual mean of outcome under dynamic treatment policies in longitudinal problem settings.
Our approach utilizes a transformer architecture with heterogeneous type embedding trained using temporal-difference learning.
Our method also facilitates statistical inference by providing 95% confidence intervals grounded in statistical theory.
arXiv Detail & Related papers (2024-04-05T20:56:15Z)
- Safe Deployment for Counterfactual Learning to Rank with Exposure-Based Risk Minimization [63.93275508300137]
We introduce a novel risk-aware Counterfactual Learning To Rank method with theoretical guarantees for safe deployment.
Our experimental results demonstrate the efficacy of our proposed method, which is effective at avoiding initial periods of bad performance when little data is available.
arXiv Detail & Related papers (2023-04-26T15:54:23Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- Proximal Causal Learning of Conditional Average Treatment Effects [0.0]
We propose a tailored two-stage loss function for learning heterogeneous treatment effects.
Our proposed estimator can be implemented by off-the-shelf loss-minimizing machine learning methods.
arXiv Detail & Related papers (2023-01-26T02:56:36Z)
- Treatment Effect Risk: Bounds and Inference [58.442274475425144]
Even if the average treatment effect, which measures the change in average social welfare, is positive, there remains a risk of a negative effect on, say, some 10% of the population.
In this paper we consider how to nonetheless assess this important risk measure, formalized as the conditional value at risk (CVaR) of the individual treatment effect (ITE) distribution.
Some bounds can also be interpreted as summarizing a complex CATE function into a single metric and are of interest in their own right, beyond their role as bounds.
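For reference, and as a standard definition rather than a result from this paper, the CVaR at level $\alpha$ of the ITE distribution is the mean effect over the worst-off $\alpha$-fraction of the population:
$$\mathrm{CVaR}_\alpha\big(Y(1)-Y(0)\big) \;=\; \mathbb{E}\big[\,Y(1)-Y(0)\;\big|\;Y(1)-Y(0)\le q_\alpha\,\big],$$
where $q_\alpha$ is the $\alpha$-quantile of the ITE distribution and the conditional form assumes the distribution is continuous at $q_\alpha$.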
arXiv Detail & Related papers (2022-01-15T17:21:26Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Estimating heterogeneous survival treatment effect in observational data using machine learning [9.951103976634407]
Methods for estimating heterogeneous treatment effects in observational data have largely focused on continuous or binary outcomes.
Using flexible machine learning methods in the counterfactual framework is a promising approach to address challenges due to complex individual characteristics.
arXiv Detail & Related papers (2020-08-17T01:02:14Z)
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
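For context, the optimized certainty equivalent (OCE) of a loss $Z$ is the standard Ben-Tal–Teboulle form
$$\mathrm{OCE}_\phi(Z) \;=\; \inf_{\lambda \in \mathbb{R}} \Big\{ \lambda + \mathbb{E}\big[\phi(Z-\lambda)\big] \Big\},$$
where $\phi$ is a convex, nondecreasing disutility function; particular choices of $\phi$ recover conditional value at risk and entropic risk. This is background on the risk measure named above, not a result from the paper.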
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
- Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances; for quantile-type parameters, it requires estimating the nuisances at every candidate value of the parameter.
We propose localized debiased machine learning (LDML), which avoids this burdensome step.
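As background, and not as the paper's specific contribution, a standard debiased (AIPW-type) estimating equation for the q-th quantile of the potential outcome $Y(a)$ takes $\theta$ to be the root of
$$\frac{1}{n}\sum_{i=1}^{n}\left[\frac{\mathbf{1}\{A_i=a\}}{\hat\pi_a(X_i)}\Big(\mathbf{1}\{Y_i\le\theta\}-\hat\mu_a(\theta,X_i)\Big)+\hat\mu_a(\theta,X_i)\right]-q \;=\; 0,$$
where $\hat\pi_a$ is the propensity score and $\hat\mu_a(\theta,x)=\widehat{\mathbb{P}}(Y\le\theta\mid A=a,X=x)$; the dependence of the nuisance $\hat\mu_a$ on $\theta$ is the burdensome step that LDML localizes to a single initial estimate.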
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.