Conformal Convolution and Monte Carlo Meta-learners for Predictive Inference of Individual Treatment Effects
- URL: http://arxiv.org/abs/2402.04906v4
- Date: Wed, 12 Jun 2024 12:35:14 GMT
- Title: Conformal Convolution and Monte Carlo Meta-learners for Predictive Inference of Individual Treatment Effects
- Authors: Jef Jonkers, Jarne Verhaeghe, Glenn Van Wallendael, Luc Duchateau, Sofie Van Hoecke
- Abstract summary: The conformal convolution T-learner (CCT-learner) and conformal Monte Carlo (CMC) meta-learners are presented.
The approaches leverage weighted conformal predictive systems (WCPS), Monte Carlo sampling, and CATE meta-learners.
They generate predictive distributions of the individual treatment effect (ITE) that could enhance individualized decision-making.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge of the effect of interventions, known as the treatment effect, is paramount for decision-making. Approaches to estimating this treatment effect using conditional average treatment effect (CATE) meta-learners often provide only a point estimate of this treatment effect, while additional uncertainty quantification is frequently desired to enhance decision-making confidence. To address this, we introduce two novel approaches: the conformal convolution T-learner (CCT-learner) and conformal Monte Carlo (CMC) meta-learners. The approaches leverage weighted conformal predictive systems (WCPS), Monte Carlo sampling, and CATE meta-learners to generate predictive distributions of individual treatment effect (ITE) that could enhance individualized decision-making. Although we show how assumptions about the noise distribution of the outcome influence the uncertainty predictions, our experiments demonstrate that the CCT- and CMC meta-learners achieve strong coverage while maintaining narrow interval widths. They also generate probabilistically calibrated predictive distributions, providing reliable ranges of ITEs across various synthetic and semi-synthetic datasets. Code: https://github.com/predict-idlab/cct-cmc
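A minimal sketch of the conformal-convolution idea, under simplifying assumptions: it uses unweighted split conformal prediction per treatment arm (appropriate for a randomized treatment) rather than the paper's weighted conformal predictive systems, and the helper names and choice of `RandomForestRegressor` are illustrative; the authors' implementation is in the linked repository.

```python
# Illustrative sketch only: unweighted split-conformal predictive samples per
# treatment arm, convolved into an ITE predictive distribution. The paper's
# CCT-/CMC-learners use weighted conformal predictive systems (WCPS) instead.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split


def split_cps_samples(X, y, x_new):
    """Split-conformal predictive samples for one arm: point prediction at
    x_new shifted by the calibration residuals."""
    X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
    residuals = y_cal - model.predict(X_cal)              # calibration scores
    return model.predict(x_new.reshape(1, -1))[0] + residuals


def ite_predictive_distribution(X, y, t, x_new):
    """Convolve the Y(1) and Y(0) predictive distributions at x_new."""
    y1 = split_cps_samples(X[t == 1], y[t == 1], x_new)   # treated arm
    y0 = split_cps_samples(X[t == 0], y[t == 0], x_new)   # control arm
    # Full discrete convolution via all pairwise differences; a Monte Carlo
    # variant would subsample pairs instead of enumerating them all.
    return (y1[:, None] - y0[None, :]).ravel()


# Usage: empirical quantiles give an ITE predictive interval at x_new.
# ite = ite_predictive_distribution(X, y, t, x_new)
# lower, upper = np.quantile(ite, [0.05, 0.95])
```

Quantiles of the returned samples give ITE predictive intervals; handling observational data would additionally require reweighting the calibration residuals (e.g., by propensity-based likelihood ratios), which is what the weighted conformal component of the paper addresses.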
Related papers
- Deep Evidential Learning for Radiotherapy Dose Prediction [0.0]
We present a novel application of an uncertainty-quantification framework called Deep Evidential Learning in the domain of radiotherapy dose prediction.
We found that this model can be effectively harnessed to yield uncertainty estimates that inherit correlations with prediction errors once network training is complete.
arXiv Detail & Related papers (2024-04-26T02:43:45Z) - Conformal Meta-learners for Predictive Inference of Individual Treatment
Effects [0.0]
We investigate the problem of machine learning (ML)-based predictive inference on individual treatment effects (ITEs).
We develop conformal meta-learners, a general framework for issuing predictive intervals for ITEs by applying the standard conformal prediction (CP) procedure on top of CATE meta-learners (a minimal split-CP sketch on pseudo-outcomes appears after this list).
arXiv Detail & Related papers (2023-08-28T20:32:22Z) - Conformal Prediction for Federated Uncertainty Quantification Under
Label Shift [57.54977668978613]
Federated Learning (FL) is a machine learning framework where many clients collaboratively train models.
We develop a new conformal prediction method based on quantile regression that takes privacy constraints into account.
arXiv Detail & Related papers (2023-06-08T11:54:58Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under
Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z) - Proximal Causal Learning of Conditional Average Treatment Effects [0.0]
We propose a tailored two-stage loss function for learning heterogeneous treatment effects.
Our proposed estimator can be implemented by off-the-shelf loss-minimizing machine learning methods.
arXiv Detail & Related papers (2023-01-26T02:56:36Z) - Comparison of meta-learners for estimating multi-valued treatment
heterogeneous effects [2.294014185517203]
Conditional Average Treatment Effects (CATE) estimation is one of the main challenges in causal inference with observational data.
Nonparametric estimators called meta-learners have been developed to estimate the CATE with the main advantage of not restricting the estimation to a specific supervised learning method.
This paper looks into meta-learners for estimating the heterogeneous effects of multi-valued treatments.
arXiv Detail & Related papers (2022-05-29T16:46:21Z) - Robust and Agnostic Learning of Conditional Distributional Treatment
Effects [62.44901952244514]
The conditional average treatment effect (CATE) is the best point prediction of individual causal effects, but it captures only the (conditional) mean.
In aggregate analyses, this is usually addressed by measuring the distributional treatment effect (DTE).
We provide a new robust and model-agnostic methodology for learning the conditional DTE (CDTE) for a wide class of problems.
arXiv Detail & Related papers (2022-05-23T17:40:31Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees [77.67258935234403]
We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning.
We develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization.
arXiv Detail & Related papers (2020-02-13T15:01:38Z)