Trustworthy Few-Shot Transfer of Medical VLMs through Split Conformal Prediction
- URL: http://arxiv.org/abs/2506.17503v1
- Date: Fri, 20 Jun 2025 22:48:07 GMT
- Title: Trustworthy Few-Shot Transfer of Medical VLMs through Split Conformal Prediction
- Authors: Julio Silva-Rodríguez, Ismail Ben Ayed, Jose Dolz
- Abstract summary: Medical vision-language models (VLMs) have demonstrated unprecedented transfer capabilities and are being increasingly adopted for data-efficient image classification. This work explores the split conformal prediction (SCP) framework to provide trustworthiness guarantees when transferring such models. We propose transductive split conformal adaptation (SCA-T), a novel pipeline for transfer learning on conformal scenarios.
- Score: 20.94974284175104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical vision-language models (VLMs) have demonstrated unprecedented transfer capabilities and are being increasingly adopted for data-efficient image classification. Despite their growing popularity, their reliability remains largely unexplored. This work explores the split conformal prediction (SCP) framework to provide trustworthiness guarantees when transferring such models based on a small labeled calibration set. Despite its potential, the generalist nature of the VLMs' pre-training could negatively affect the properties of the predicted conformal sets for specific tasks. While common practice in transfer learning for discriminative purposes involves an adaptation stage, we observe that deploying such a solution for conformal purposes is suboptimal, since adapting the model using the available calibration data breaks the rigid exchangeability assumptions on test data in SCP. To address this issue, we propose transductive split conformal adaptation (SCA-T), a novel pipeline for transfer learning in conformal scenarios, which performs an unsupervised transductive adaptation jointly on calibration and test data. We present comprehensive experiments utilizing medical VLMs across various image modalities, transfer tasks, and non-conformity scores. Our framework offers consistent gains in efficiency and conditional coverage compared to SCP, while maintaining the same empirical guarantees.
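The abstract builds on the standard split conformal prediction recipe: score a held-out calibration set with a non-conformity score, take a finite-sample-corrected quantile, and include in each test prediction set every class whose score falls below that threshold. The following is a minimal sketch of that baseline SCP step only (not the paper's SCA-T pipeline), using the common score s(x, y) = 1 − p_y(x); all variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Build prediction sets with marginal coverage >= 1 - alpha.

    cal_probs:  (n, K) class probabilities on the calibration set
    cal_labels: (n,)   true calibration labels
    test_probs: (m, K) class probabilities on the test set
    """
    n = len(cal_labels)
    # Non-conformity score of each calibration example: 1 - prob of true class.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(cal_scores, min(q_level, 1.0), method="higher")
    # A class enters the prediction set when its score is below the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy usage with random logits over 3 classes standing in for VLM outputs.
cal_probs = softmax(rng.normal(size=(100, 3)))
cal_labels = rng.integers(0, 3, size=100)
test_probs = softmax(rng.normal(size=(10, 3)))
sets = split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
```

The coverage guarantee relies on calibration and test points being exchangeable, which is exactly the assumption the abstract notes is broken if the model is first adapted on the calibration data.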
Related papers
- Full Conformal Adaptation of Medical Vision-Language Models [17.53651859360999]
Vision-language models (VLMs) pre-trained at large scale have shown unprecedented transferability capabilities. This work investigates their behavior under the increasingly popular split conformal prediction framework. We propose full conformal adaptation, a novel setting for jointly adapting and conformalizing pre-trained foundation models.
arXiv Detail & Related papers (2025-06-06T13:32:00Z) - Conformal Prediction for Zero-Shot Models [20.94974284175104]
We investigate the capabilities of CLIP models under the split conformal prediction paradigm. We propose Conf-OT, a transfer learning setting that operates transductively over the combined calibration and query sets.
arXiv Detail & Related papers (2025-05-30T15:16:19Z) - WQLCP: Weighted Adaptive Conformal Prediction for Robust Uncertainty Quantification Under Distribution Shifts [4.192712667327956]
We introduce reconstruction losses derived from a Variational Autoencoder (VAE) as an uncertainty metric to scale score functions. We propose weighted Quantile Loss-scaled Conformal Prediction (WQLCP), which refines RL SCP by incorporating a weighted notion of exchangeability.
arXiv Detail & Related papers (2025-05-26T07:00:15Z) - Conformal Uncertainty Indicator for Continual Test-Time Adaptation [16.248749460383227]
We propose a Conformal Uncertainty Indicator (CUI) for Continual Test-Time Adaptation (CTTA). We leverage Conformal Prediction (CP) to generate prediction sets that include the true label with a specified coverage probability. Experiments confirm that CUI effectively estimates uncertainty and improves adaptation performance across various existing CTTA methods.
arXiv Detail & Related papers (2025-02-05T08:47:18Z) - Noise-Adaptive Conformal Classification with Marginal Coverage [53.74125453366155]
We introduce an adaptive conformal inference method capable of efficiently handling deviations from exchangeability caused by random label noise. We validate our method through extensive numerical experiments demonstrating its effectiveness on synthetic and real data sets.
arXiv Detail & Related papers (2025-01-29T23:55:23Z) - Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification. Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data. We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z) - Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs to many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z) - Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
arXiv Detail & Related papers (2023-05-27T19:57:27Z) - Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)