Adaptive Conformal Prediction via Bayesian Uncertainty Weighting for Hierarchical Healthcare Data
- URL: http://arxiv.org/abs/2601.01223v1
- Date: Sat, 03 Jan 2026 16:06:37 GMT
- Title: Adaptive Conformal Prediction via Bayesian Uncertainty Weighting for Hierarchical Healthcare Data
- Authors: Marzieh Amiri Shahbazi, Ali Baheri, Nasibeh Azadeh-Fard
- Abstract summary: We present a hybrid Bayesian-conformal framework that addresses the fundamental limitation in healthcare predictions. Our approach integrates Bayesian hierarchical random forests with group-aware conformal calibration, using posterior uncertainties to weight conformity scores. We evaluate our method on 61,538 admissions across 3,793 U.S. hospitals and 4 regions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Clinical decision-making demands uncertainty quantification that provides both distribution-free coverage guarantees and risk-adaptive precision, requirements that existing methods fail to jointly satisfy. We present a hybrid Bayesian-conformal framework that addresses this fundamental limitation in healthcare predictions. Our approach integrates Bayesian hierarchical random forests with group-aware conformal calibration, using posterior uncertainties to weight conformity scores while maintaining rigorous coverage validity. Evaluated on 61,538 admissions across 3,793 U.S. hospitals and 4 regions, our method achieves target coverage (94.3% vs 95% target) with adaptive precision: 21% narrower intervals for low-uncertainty cases while appropriately widening for high-risk predictions. Critically, we demonstrate that well-calibrated Bayesian uncertainties alone severely under-cover (14.1%), highlighting the necessity of our hybrid approach. This framework enables risk-stratified clinical protocols, efficient resource planning for high-confidence predictions, and conservative allocation with enhanced oversight for uncertain cases, providing uncertainty-aware decision support across diverse healthcare settings.
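The core mechanism the abstract describes, posterior uncertainties used to weight conformity scores within group-aware split-conformal calibration, can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation; all function and variable names are hypothetical, and it assumes a Bayesian model has already produced posterior means and standard deviations.

```python
import numpy as np

def weighted_conformal_intervals(y_cal, mu_cal, sigma_cal, groups_cal,
                                 mu_test, sigma_test, groups_test, alpha=0.05):
    """Sketch of uncertainty-weighted, group-aware split conformal prediction.

    Conformity scores are absolute residuals normalized by the posterior
    standard deviation, so intervals adapt: narrow where the Bayesian model
    is confident, wide where it is not. Calibration is done per group
    (e.g., per region) for group-conditional coverage.
    """
    # Posterior-std-normalized conformity scores on the calibration set.
    scores = np.abs(y_cal - mu_cal) / sigma_cal
    intervals = []
    for mu, sigma, g in zip(mu_test, sigma_test, groups_test):
        s_g = scores[groups_cal == g]        # calibration scores for this group
        n = len(s_g)
        # Finite-sample-corrected quantile level for (1 - alpha) coverage.
        q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        q = np.quantile(s_g, q_level, method="higher")
        intervals.append((mu - q * sigma, mu + q * sigma))
    return np.array(intervals)
```

Because the same calibrated quantile `q` is rescaled by each test point's posterior standard deviation, a low-uncertainty case gets a proportionally narrower interval while the group-wise quantile still provides the distribution-free coverage guarantee.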
Related papers
- Calibrated Bayesian Deep Learning for Explainable Decision Support Systems Based on Medical Imaging [6.826979426009301]
It is imperative that models quantify uncertainty in a manner that correlates with prediction correctness, allowing clinicians to identify unreliable outputs for further review. The present paper proposes a generalizable probabilistic optimization framework grounded in Bayesian deep learning. Specifically, a novel Confidence-Uncertainty Boundary Loss (CUB-Loss) is introduced that imposes penalties on high-certainty errors and low-certainty correct predictions. The proposed framework is validated on three distinct medical imaging tasks: automatic screening of pneumonia, diabetic retinopathy detection, and identification of skin lesions.
arXiv Detail & Related papers (2026-02-12T14:03:41Z) - Conditional Coverage Diagnostics for Conformal Prediction [47.93989136542648]
We show that conditional coverage estimation can be cast as a classification problem. We call the resulting family of metrics the excess risk of the target coverage (ERT). We release an open-source package for ERT as well as previous conditional coverage metrics.
arXiv Detail & Related papers (2025-12-12T18:47:39Z) - Uncertainty-Calibrated Prediction of Randomly-Timed Biomarker Trajectories with Conformal Bands [24.335811693519165]
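The coverage-as-classification idea above can be illustrated with a toy diagnostic: treat the per-sample coverage indicator as a label and compare a group-wise predictor against the constant target-coverage baseline. This is an illustrative sketch of the general principle only, not the paper's exact ERT definition; the function name and the Brier-score choice are assumptions.

```python
import numpy as np

def coverage_gap_diagnostic(groups, covered, alpha=0.05):
    """Toy coverage-as-classification diagnostic (hypothetical name).

    Compares the Brier score of the constant target 1 - alpha against a
    per-group empirical coverage predictor. A large positive gap means
    coverage varies across groups, i.e., conditional coverage is violated.
    """
    target = 1 - alpha
    # Risk of always predicting the nominal coverage level.
    baseline = np.mean((covered - target) ** 2)
    # Risk of the group-wise empirical coverage predictor.
    fitted = 0.0
    for g in np.unique(groups):
        mask = groups == g
        p_g = covered[mask].mean()
        fitted += np.sum((covered[mask] - p_g) ** 2)
    fitted /= len(covered)
    return baseline - fitted
```

The gap reduces to a coverage-weighted sum of squared per-group deviations from the target, so it is zero exactly when every group attains the nominal coverage.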
arXiv Detail & Related papers (2025-12-12T18:47:39Z) - Uncertainty-Calibrated Prediction of Randomly-Timed Biomarker Trajectories with Conformal Bands [24.335811693519165]
We introduce a conformal method for uncertainty-calibrated prediction of biomarker trajectories from real clinical data. Our approach extends conformal prediction to the setting of randomly-timed trajectories via a novel non-conformity score. We demonstrate the clinical utility of our conformal bands in identifying subjects at high risk of progression to Alzheimer's disease.
arXiv Detail & Related papers (2025-11-17T21:04:14Z) - When Robustness Meets Conservativeness: Conformalized Uncertainty Calibration for Balanced Decision Making [8.234618636958462]
We propose a new framework that provides distribution-free, finite-sample guarantees on miscoverage and regret. Our method constructs valid estimators that trace out the miscoverage-regret frontier. These results offer the first principled data-driven methodology for guiding robustness selection.
arXiv Detail & Related papers (2025-10-09T03:38:17Z) - COIN: Uncertainty-Guarding Selective Question Answering for Foundation Models with Provable Risk Guarantees [51.5976496056012]
COIN is an uncertainty-guarding selection framework that calibrates statistically valid thresholds to filter a single generated answer per question. COIN estimates the empirical error rate on a calibration set and applies confidence interval methods to establish a high-probability upper bound on the true error rate. We demonstrate COIN's robustness in risk control, strong test-time power in retaining admissible answers, and predictive efficiency under limited calibration data.
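The calibrate-then-bound recipe in this summary can be sketched with a simple stand-in: estimate the error rate among answers retained above a confidence threshold, certify it with a Hoeffding-style upper bound, and pick the lowest threshold whose bound meets the target risk. All names are hypothetical, and the Hoeffding bound is a generic substitute for whatever confidence-interval method COIN actually uses.

```python
import math

def error_upper_bound(errors, n, delta=0.05):
    """Hoeffding-style (1 - delta) upper bound on the true error rate,
    given `errors` mistakes among n retained calibration answers."""
    p_hat = errors / n
    return min(1.0, p_hat + math.sqrt(math.log(1 / delta) / (2 * n)))

def select_threshold(confidences, is_wrong, target_risk=0.1, delta=0.05):
    """Lowest confidence threshold whose retained answers carry a certified
    error bound below target_risk (illustrative selection rule)."""
    for t in sorted(set(confidences)):
        kept = [w for c, w in zip(confidences, is_wrong) if c >= t]
        if not kept:
            continue
        if error_upper_bound(sum(kept), len(kept), delta) <= target_risk:
            # Lowest certifying threshold retains the most answers.
            return t
    return None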
arXiv Detail & Related papers (2025-06-25T07:04:49Z) - Risk-Sensitive Conformal Prediction for Catheter Placement Detection in Chest X-rays [0.0]
This paper presents a novel approach to catheter and line position detection in chest X-rays. Our model simultaneously performs classification, segmentation, and landmark detection. Risk-sensitive conformal prediction provides statistically guaranteed prediction sets with higher reliability for clinically critical findings.
arXiv Detail & Related papers (2025-05-28T15:47:10Z) - SConU: Selective Conformal Uncertainty in Large Language Models [59.25881667640868]
We propose a novel approach termed Selective Conformal Uncertainty (SConU). We develop two conformal p-values that are instrumental in determining whether a given sample deviates from the uncertainty distribution of the calibration set at a specific manageable risk level. Our approach not only facilitates rigorous management of miscoverage rates across both single-domain and interdisciplinary contexts, but also enhances the efficiency of predictions.
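The conformal p-value behind this kind of deviation test has a standard rank-based form: the test sample's uncertainty score is ranked against the calibration scores, and a small p-value flags it as an outlier relative to the calibration distribution. The sketch below shows that textbook construction, not SConU's specific pair of p-values.

```python
def conformal_p_value(cal_scores, test_score):
    """Standard conformal p-value: the (smoothed) rank of the test score
    among calibration scores. Small values indicate the test sample is
    unusual relative to the calibration set."""
    n = len(cal_scores)
    return (1 + sum(s >= test_score for s in cal_scores)) / (n + 1)
```

Under exchangeability this p-value is super-uniform, so rejecting when it falls below a risk level alpha controls the chance of wrongly flagging an in-distribution sample at alpha.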
arXiv Detail & Related papers (2025-04-19T03:01:45Z) - Beyond Confidence: Adaptive Abstention in Dual-Threshold Conformal Prediction for Autonomous System Perception [0.4124847249415279]
Safety-critical perception systems require reliable uncertainty quantification and principled abstention mechanisms to maintain safety. We present a novel dual-threshold conformalization framework that provides statistically-guaranteed uncertainty estimates while enabling selective prediction in high-risk scenarios.
arXiv Detail & Related papers (2025-02-11T04:45:31Z) - Conformal Prediction Sets with Improved Conditional Coverage using Trust Scores [52.92618442300405]
It is impossible to achieve exact, distribution-free conditional coverage in finite samples. We propose an alternative conformal prediction algorithm that targets coverage where it matters most.
arXiv Detail & Related papers (2025-01-17T12:01:56Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
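The likelihood-ratio principle in this entry can be made concrete with a toy example: for a well-specified Bernoulli likelihood, the confidence set collects the parameters whose likelihood ratio against the maximum-likelihood estimate stays above a threshold. The grid, the threshold, and the fixed-sample setting are all simplifications here; the paper works with sequential confidence sequences, which this sketch does not implement.

```python
import math

def lr_confidence_set(xs, alpha=0.05, grid=200):
    """Toy likelihood-ratio confidence set for a Bernoulli mean.

    Keeps grid points p where log L(p) - log L(p_hat) >= -log(1/alpha),
    i.e., where the likelihood ratio against the MLE exceeds alpha."""
    n = len(xs)
    k = sum(xs)

    def loglik(p):
        if p in (0.0, 1.0):
            # Degenerate endpoints: likelihood is 1 iff the data agree exactly.
            return 0.0 if k == n * p else -math.inf
        return k * math.log(p) + (n - k) * math.log(1 - p)

    threshold = loglik(k / n) - math.log(1 / alpha)
    return [i / grid for i in range(grid + 1) if loglik(i / grid) >= threshold]
```

Because the likelihood ratio is a nonnegative martingale under the true parameter, the same construction extends to anytime-valid confidence sequences, which is the connection the paper develops.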
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Towards Reliable Medical Image Segmentation by Modeling Evidential Calibrated Uncertainty [57.023423137202485]
Concerns regarding the reliability of medical image segmentation persist among clinicians. We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks. By leveraging subjective logic theory, we explicitly model probability and uncertainty for medical image segmentation.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.