Multiply Robust Conformal Risk Control with Coarsened Data
- URL: http://arxiv.org/abs/2508.15489v1
- Date: Thu, 21 Aug 2025 12:14:44 GMT
- Title: Multiply Robust Conformal Risk Control with Coarsened Data
- Authors: Manit Paul, Arun Kumar Kuchibhotla, Eric J. Tchetgen Tchetgen
- Abstract summary: Conformal Prediction (CP) has recently received a tremendous amount of interest. In this paper, we consider the general problem of obtaining distribution-free valid prediction regions for an outcome given coarsened data. Our principled use of semiparametric theory has the key advantage of facilitating flexible machine learning methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conformal Prediction (CP) has recently received a tremendous amount of interest, leading to a wide range of new theoretical and methodological results for predictive inference with formal theoretical guarantees. However, the vast majority of CP methods assume that all units in the training data have fully observed data on both the outcome and covariates of primary interest, an assumption that rarely holds in practice. In reality, training data are often missing the outcome, a subset of covariates, or both on some units. In addition, time-to-event outcomes in the training set may be censored due to dropout or administrative end-of-follow-up. Accurately accounting for such coarsened data in the training sample while fulfilling the primary objective of well-calibrated conformal predictive inference requires robustness and efficiency considerations. In this paper, we consider the general problem of obtaining distribution-free valid prediction regions for an outcome given coarsened training data. Leveraging modern semiparametric theory, we achieve our goal by deriving the efficient influence function of the quantile of the outcome we aim to predict, under a given semiparametric model for the coarsened data, carefully combined with a novel conformal risk control procedure. Our principled use of semiparametric theory has the key advantage of facilitating flexible machine learning methods such as random forests to learn the underlying nuisance functions of the semiparametric model. A straightforward application of the proposed general framework produces prediction intervals with stronger coverage properties under covariate shift, as well as multiply robust prediction sets in monotone missingness scenarios. We further illustrate the performance of our methods through various simulation studies.
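For orientation, the sketch below shows plain split conformal prediction on fully observed data. It is the baseline construction that the paper generalizes, replacing the simple empirical quantile with an efficient-influence-function-based estimate combined with conformal risk control to handle coarsening. This is not the paper's algorithm; the function names and the scikit-learn-style `model.predict` interface are illustrative assumptions.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Plain split conformal prediction with absolute-residual scores.

    Assumes `model` was fit on a separate training split and exposes a
    scikit-learn-style `.predict`. Ignores the coarsening (missingness,
    censoring) that the paper's method is designed to handle.
    """
    n = len(y_cal)
    # Nonconformity scores on the held-out calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    # Finite-sample-corrected empirical quantile of the scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    preds = model.predict(X_test)
    # Marginal coverage >= 1 - alpha under exchangeability.
    return preds - q, preds + q
```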
Related papers
- The Coverage Principle: How Pre-Training Enables Post-Training [70.25788947586297]
We study how pre-training shapes the success of the final model. We uncover a mechanism that explains the power of coverage in predicting downstream performance.
arXiv Detail & Related papers (2025-10-16T17:53:50Z)
- A Unified Framework for Inference with General Missingness Patterns and Machine Learning Imputation [12.817707155207817]
This paper develops a novel method that delivers a valid statistical inference framework for general Z-estimation problems. We provide theoretical guarantees of normality of the proposed estimator and efficiency dominance over weighted complete-case analyses.
arXiv Detail & Related papers (2025-08-21T01:59:59Z)
- Predictions as Surrogates: Revisiting Surrogate Outcomes in the Age of AI [12.569286058146343]
We establish a formal connection between the decades-old surrogate outcome model in biostatistics and the emerging field of prediction-powered inference (PPI). We develop recalibrated prediction-powered inference, a more efficient approach to statistical inference than existing PPI proposals. We demonstrate significant gains in effective sample size over existing PPI proposals via three applications leveraging state-of-the-art machine learning/AI models.
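As a pointer to what PPI computes, here is a minimal sketch of the classical prediction-powered mean estimator; the recalibrated variant in the paper refines this basic recipe, and the names here are illustrative rather than the paper's code.

```python
import numpy as np

def ppi_mean(y_labeled, yhat_labeled, yhat_unlabeled):
    """Basic prediction-powered estimate of E[Y].

    Combines model predictions on a large unlabeled set with a bias
    correction ("rectifier") estimated on a small labeled set.
    """
    rectifier = np.mean(y_labeled - yhat_labeled)  # average prediction error
    return np.mean(yhat_unlabeled) + rectifier     # debiased mean estimate
```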
arXiv Detail & Related papers (2025-01-16T18:30:33Z)
- Ranking and Combining Latent Structured Predictive Scores without Labeled Data [2.5064967708371553]
This paper introduces a novel structured unsupervised ensemble learning model (SUEL).
It exploits the dependency among a set of predictors with continuous predictive scores, ranks the predictors without labeled data, and combines them into a weighted ensemble score.
The efficacy of the proposed methods is rigorously assessed through both simulation studies and real-world application of risk genes discovery.
arXiv Detail & Related papers (2024-08-14T20:14:42Z)
- Boosted Control Functions: Distribution generalization and invariance in confounded models [10.503777692702952]
We introduce a strong notion of invariance that allows for distribution generalization even in the presence of nonlinear, non-identifiable structural functions. We propose the ControlTwicing algorithm to estimate the Boosted Control Function (BCF) using flexible machine-learning techniques.
arXiv Detail & Related papers (2023-10-09T15:43:46Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or complex mixtures of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the underlying tessellation and approximate the multiple-hypothesis target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Causality-oriented robustness: exploiting general noise interventions [4.64479351797195]
In this paper, we focus on causality-oriented robustness and propose Distributional Robustness via Invariant Gradients (DRIG). DRIG exploits general noise interventions in training data for robust predictions against unseen interventions. We show that our framework includes anchor regression as a special case, and that it yields prediction models that protect against more diverse perturbations.
arXiv Detail & Related papers (2023-07-18T16:22:50Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
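One hedged way to read this construction: the self-supervised pretext-task error acts as an extra input to a per-example difficulty estimate that rescales the residual nonconformity score, widening intervals where the task looks harder. A sketch under that assumption follows; all names are illustrative, not the paper's code.

```python
import numpy as np

def normalized_scores(y_cal, yhat_cal, difficulty_cal, eps=1e-8):
    """Normalized nonconformity scores: residuals divided by a per-example
    difficulty estimate (e.g. a regressor whose features include the
    self-supervised pretext-task error)."""
    return np.abs(y_cal - yhat_cal) / (difficulty_cal + eps)

def adaptive_interval(yhat_test, difficulty_test, scores, alpha=0.1, eps=1e-8):
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    half_width = q * (difficulty_test + eps)  # wider where predictions look harder
    return yhat_test - half_width, yhat_test + half_width
```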
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Adapting to Continuous Covariate Shift via Online Density Ratio Estimation [64.8027122329609]
Dealing with distribution shifts is one of the central challenges for modern machine learning.
We propose an online method that can appropriately reuse historical information for density ratio estimation under continuously shifting distributions.
Our density ratio estimator provably performs well, enjoying a dynamic regret bound.
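For context, the static building block that online variants like this extend is the classic classifier-based density ratio estimate: train a probabilistic classifier to distinguish source from target samples and convert its odds into importance weights. A minimal sketch (illustrative names, not the paper's method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_via_classifier(X_source, X_target):
    """Estimate w(x) ~ p_target(x) / p_source(x) on the source points
    via a source-vs-target classifier."""
    X = np.vstack([X_source, X_target])
    z = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, z)
    p = clf.predict_proba(X_source)[:, 1]        # P(target | x) on source points
    prior_correction = len(X_source) / len(X_target)
    return (p / (1 - p)) * prior_correction      # importance weights for source data
```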
arXiv Detail & Related papers (2023-02-06T04:03:33Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
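A minimal sketch of ATC as described in the abstract follows: pick the confidence threshold on labeled source data so that the fraction of source examples above it matches source accuracy, then apply that threshold to unlabeled target confidences. Names are illustrative; the paper also studies score choices such as negative entropy, not shown here.

```python
import numpy as np

def atc_predict_accuracy(conf_source, correct_source, conf_target):
    """Average Thresholded Confidence (ATC), sketched from the abstract.

    conf_source:   model confidences on labeled source examples
    correct_source: 0/1 indicators of source predictions being correct
    conf_target:   model confidences on unlabeled target examples
    """
    source_acc = np.mean(correct_source)
    # Threshold = quantile of source confidences at level (1 - accuracy),
    # so that the fraction of source confidences above it matches accuracy.
    t = np.quantile(conf_source, 1.0 - source_acc)
    return np.mean(conf_target > t)  # predicted target accuracy
```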
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to picking up spurious correlations that should not serve as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows performance competitive with the state-of-the-art on real-world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Robust Validation: Confident Predictions Even When Distributions Shift [19.327409270934474]
We describe procedures for robust predictive inference, where a model provides uncertainty estimates on its predictions rather than point predictions.
We present a method that produces prediction sets that (almost exactly) give the right coverage level for any test distribution in an $f$-divergence ball around the training population.
An essential component of our methodology is to estimate the amount of expected future data shift and build robustness to it.
arXiv Detail & Related papers (2020-08-10T17:09:16Z)