Conformal Meta-learners for Predictive Inference of Individual Treatment Effects
- URL: http://arxiv.org/abs/2308.14895v1
- Date: Mon, 28 Aug 2023 20:32:22 GMT
- Title: Conformal Meta-learners for Predictive Inference of Individual Treatment Effects
- Authors: Ahmed Alaa, Zaid Ahmad, Mark van der Laan
- Abstract summary: We investigate the problem of machine learning-based (ML) predictive inference on individual treatment effects (ITEs). We develop conformal meta-learners, a general framework for issuing predictive intervals for ITEs by applying the standard conformal prediction (CP) procedure on top of CATE meta-learners.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the problem of machine learning-based (ML) predictive
inference on individual treatment effects (ITEs). Previous work has focused
primarily on developing ML-based meta-learners that can provide point estimates
of the conditional average treatment effect (CATE); these are model-agnostic
approaches for combining intermediate nuisance estimates to produce estimates
of CATE. In this paper, we develop conformal meta-learners, a general framework
for issuing predictive intervals for ITEs by applying the standard conformal
prediction (CP) procedure on top of CATE meta-learners. We focus on a broad
class of meta-learners based on two-stage pseudo-outcome regression and develop
a stochastic ordering framework to study their validity. We show that inference
with conformal meta-learners is marginally valid if their (pseudo outcome)
conformity scores stochastically dominate oracle conformity scores evaluated on
the unobserved ITEs. Additionally, we prove that commonly used CATE
meta-learners, such as the doubly-robust learner, satisfy a model- and
distribution-free stochastic (or convex) dominance condition, making their
conformal inferences valid for practically-relevant levels of target coverage.
Whereas existing procedures conduct inference on nuisance parameters (i.e.,
potential outcomes) via weighted CP, conformal meta-learners enable direct
inference on the target parameter (ITE). Numerical experiments show that
conformal meta-learners provide valid intervals with competitive efficiency
while retaining the favorable point estimation properties of CATE
meta-learners.
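To make the construction concrete, below is a minimal sketch of one instantiation: standard split conformal prediction applied to doubly-robust (DR-learner) pseudo-outcomes. The synthetic data, model choices, and three-way sample split are illustrative assumptions, not the paper's exact procedure or experiments.

```python
# Minimal sketch: split conformal prediction on DR-learner pseudo-outcomes.
# Everything here (DGP, models, split sizes) is illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic observational data: covariates X, binary treatment W, outcome Y.
n, d = 4000, 5
X = rng.normal(size=(n, d))
e = 1.0 / (1.0 + np.exp(-X[:, 0]))          # true propensity score
W = rng.binomial(1, e)
tau = X[:, 1]                                # true CATE; in this toy DGP, ITE = tau(x)
Y = X[:, 0] + W * tau + rng.normal(scale=0.5, size=n)

# Three-way split: nuisance fitting / calibration / test.
idx = rng.permutation(n)
nu, cal, te = idx[:2000], idx[2000:3000], idx[3000:]

# Stage 1: nuisance estimates (two outcome models and a propensity model).
mu0 = GradientBoostingRegressor().fit(X[nu][W[nu] == 0], Y[nu][W[nu] == 0])
mu1 = GradientBoostingRegressor().fit(X[nu][W[nu] == 1], Y[nu][W[nu] == 1])
eh = GradientBoostingClassifier().fit(X[nu], W[nu])

def dr_pseudo(Xs, Ws, Ys):
    """Doubly-robust (AIPW) pseudo-outcome; its regression targets the CATE."""
    p = np.clip(eh.predict_proba(Xs)[:, 1], 0.05, 0.95)
    m0, m1 = mu0.predict(Xs), mu1.predict(Xs)
    mw = np.where(Ws == 1, m1, m0)
    return m1 - m0 + (Ws - p) / (p * (1.0 - p)) * (Ys - mw)

# Stage 2: regress pseudo-outcomes on X to obtain a CATE point estimator.
tau_hat = GradientBoostingRegressor().fit(X[nu], dr_pseudo(X[nu], W[nu], Y[nu]))

# Stage 3: standard split CP, with conformity scores on pseudo-outcomes.
scores = np.abs(dr_pseudo(X[cal], W[cal], Y[cal]) - tau_hat.predict(X[cal]))
alpha = 0.1
q = np.quantile(scores, np.ceil((len(cal) + 1) * (1 - alpha)) / len(cal))

pred = tau_hat.predict(X[te])
lo, hi = pred - q, pred + q                  # predictive intervals for the ITE
print("empirical ITE coverage:", np.mean((tau[te] >= lo) & (tau[te] <= hi)))
```

The point of the paper's stochastic dominance analysis is precisely that calibrating on pseudo-outcome conformity scores, as above, can transfer marginal coverage to the unobserved ITEs.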
Related papers
- Towards Robust and Interpretable EMG-based Hand Gesture Recognition using Deep Metric Meta Learning
We propose a shift to deep metric-based meta-learning in EMG pattern recognition (PR) to supervise the creation of meaningful and interpretable representations.
We derive a robust class proximity-based confidence estimator that leads to better rejection of incorrect decisions.
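As a rough illustration, a class proximity-based confidence estimator with a reject option might look like the sketch below; the prototype embeddings and the threshold are assumptions, not the paper's exact estimator.

```python
# Hypothetical sketch: confidence from proximity to class prototypes in an
# embedding space, with rejection when confidence falls below a threshold.
import numpy as np

def predict_with_reject(protos, z, threshold=0.6):
    """protos: (C, D) class prototypes; z: (D,) embedded input sample."""
    d = np.linalg.norm(protos - z, axis=1)       # distance to each class
    conf = np.exp(-d) / np.exp(-d).sum()         # softmax over -distances
    c = int(conf.argmax())
    # Reject (return None) rather than risk an incorrect decision.
    return (c, float(conf[c])) if conf[c] >= threshold else (None, float(conf[c]))
```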
arXiv Detail & Related papers (2024-04-17T23:37:50Z)
- Conformal Convolution and Monte Carlo Meta-learners for Predictive Inference of Individual Treatment Effects
Conformal convolution T-learner (CCT-learner) and conformal Monte Carlo (CMC) meta-learners are presented.
The approaches leverage weighted conformal predictive systems (WCPS), Monte Carlo sampling, and CATE meta-learners.
They generate predictive distributions of the individual treatment effect (ITE) that could enhance individualized decision-making.
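A heavily simplified sketch of the Monte Carlo idea, under strong assumptions: uniform residual resampling stands in for a full weighted conformal predictive system, and the propensity weighting is omitted entirely.

```python
# Hedged sketch: approximate an ITE predictive distribution by sampling
# potential outcomes from crude per-arm predictive distributions and
# differencing the draws. A naive stand-in for WCPS (no weights).
import numpy as np

def cps_samples(point_pred, cal_residuals, n_samp, rng):
    """Draws from a crude predictive distribution: point prediction plus a
    uniformly resampled calibration residual."""
    return point_pred + rng.choice(cal_residuals, size=n_samp, replace=True)

def ite_draws(pred0, pred1, res0, res1, n_samp=1000, seed=0):
    rng = np.random.default_rng(seed)
    y0 = cps_samples(pred0, res0, n_samp, rng)   # Monte Carlo draws of Y(0)
    y1 = cps_samples(pred1, res1, n_samp, rng)   # Monte Carlo draws of Y(1)
    return y1 - y0                               # draws of the ITE Y(1) - Y(0)
```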
arXiv Detail & Related papers (2024-02-07T14:35:25Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
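One plausible reading of this recipe, sketched under assumptions (the pretext task of predicting one feature from the rest, and all model choices, are illustrative): feed the self-supervised error into the difficulty model of a normalized conformal score.

```python
# Hedged sketch: self-supervised pretext error as an extra feature for a
# normalized (locally adaptive) split conformal score. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n, d = 3000, 6
X = rng.normal(size=(n, d))
y = X[:, 0] ** 2 + rng.normal(scale=0.1 + np.abs(X[:, 1]), size=n)
tr, cal, te = np.split(rng.permutation(n), [1500, 2250])

base = RandomForestRegressor(random_state=0).fit(X[tr], y[tr])

# Pretext task: predict feature 0 from the others; its absolute error
# flags "hard" inputs without using any labels.
aux = RandomForestRegressor(random_state=0).fit(X[tr][:, 1:], X[tr][:, 0])
ss_err = lambda Xs: np.abs(aux.predict(Xs[:, 1:]) - Xs[:, 0])
feat = lambda Xs: np.column_stack([Xs, ss_err(Xs)])

# Difficulty model: predict |residual| from X plus the self-supervised error.
sigma = RandomForestRegressor(random_state=0).fit(
    feat(X[tr]), np.abs(y[tr] - base.predict(X[tr])))

# Normalized conformal scores on the calibration split.
s = np.abs(y[cal] - base.predict(X[cal])) / np.maximum(sigma.predict(feat(X[cal])), 1e-6)
alpha = 0.1
q = np.quantile(s, np.ceil((len(cal) + 1) * (1 - alpha)) / len(cal))

half = q * np.maximum(sigma.predict(feat(X[te])), 1e-6)  # adaptive half-widths
print("coverage:", np.mean(np.abs(y[te] - base.predict(X[te])) <= half))
```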
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice
A central question in the meta-learning literature is how to regularize to ensure generalization to unseen tasks.
We present a generalization bound for meta-learning first derived by Rothfuss et al.
We provide a theoretical analysis and empirical case study under which conditions and to what extent these guarantees for meta-learning improve upon PAC-Bayesian per-task learning bounds.
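Schematically, two-level PAC-Bayesian meta-learning bounds of this family control the transfer risk by an empirical multi-task risk plus a meta-level and a task-level complexity term. The display below is a generic sketch of that structure (a hyper-posterior Q over priors P, a hyper-prior, T tasks with n samples each), not the exact bound of Rothfuss et al.

```latex
% Generic two-level PAC-Bayes structure (a sketch, not the exact statement);
% holds with probability at least 1 - \delta over the draw of the T tasks.
\mathcal{L}(\mathcal{Q}) \;\le\;
  \widehat{\mathcal{L}}(\mathcal{Q}, S_{1:T})
  + \underbrace{\sqrt{\frac{\mathrm{KL}(\mathcal{Q} \,\|\, \mathcal{P}) + \ln(2/\delta)}{2T}}}_{\text{meta-level complexity}}
  + \underbrace{\frac{1}{T}\sum_{t=1}^{T}
      \sqrt{\frac{\mathbb{E}_{P \sim \mathcal{Q}}\,\mathrm{KL}(Q_t \,\|\, P) + \ln(2/\delta)}{2n}}}_{\text{average task-level complexity}}
```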
arXiv Detail & Related papers (2022-11-14T08:51:04Z)
- An Investigation of the Bias-Variance Tradeoff in Meta-Gradients
Hessian estimation always adds bias and can also add variance to meta-gradient estimation.
We study the bias and variance tradeoff arising from truncated backpropagation and sampling correction.
arXiv Detail & Related papers (2022-09-22T20:33:05Z)
- Comparison of meta-learners for estimating multi-valued treatment heterogeneous effects
Conditional Average Treatment Effects (CATE) estimation is one of the main challenges in causal inference with observational data.
Nonparametric estimators called meta-learners have been developed to estimate the CATE with the main advantage of not restricting the estimation to a specific supervised learning method.
This paper looks into meta-learners for estimating the heterogeneous effects of multi-valued treatments.
arXiv Detail & Related papers (2022-05-29T16:46:21Z)
- Meta-Learning Reliable Priors in the Function Space
We introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as stochastic processes and performs meta-level regularization directly in the function space.
This allows us to directly steer the predictions of the meta-learner towards high uncertainty in regions of insufficient meta-training data and, thus, obtain well-calibrated uncertainty estimates.
arXiv Detail & Related papers (2021-06-06T18:07:49Z)
- Meta-Learned Confidence for Few-shot Learning
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence of each query sample so as to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
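A minimal numpy sketch of confidence-weighted transductive prototype refinement; here the per-query confidence is a fixed softmax over distances, whereas the paper meta-learns this confidence function.

```python
# Hedged sketch: refine class prototypes with confidence-weighted unlabeled
# queries. The softmax-over-distances confidence is a stand-in for the
# meta-learned confidence in the paper.
import numpy as np

def refine_prototypes(support, support_y, query, n_cls, temp=1.0, steps=3):
    """support: (Ns, D) labeled embeddings; query: (Nq, D) unlabeled."""
    protos = np.stack([support[support_y == c].mean(0) for c in range(n_cls)])
    for _ in range(steps):
        d2 = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (Nq, C)
        conf = np.exp(-d2 / temp)
        conf /= conf.sum(1, keepdims=True)       # per-query class confidences
        for c in range(n_cls):
            w = conf[:, c]
            # Blend the labeled class mean with confidence-weighted queries.
            num = support[support_y == c].sum(0) + (w[:, None] * query).sum(0)
            den = (support_y == c).sum() + w.sum()
            protos[c] = num / den
    return protos
```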
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
- PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees
We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning.
We develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization.
arXiv Detail & Related papers (2020-02-13T15:01:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.