LATA: Laplacian-Assisted Transductive Adaptation for Conformal Uncertainty in Medical VLMs
- URL: http://arxiv.org/abs/2602.17535v1
- Date: Thu, 19 Feb 2026 16:45:38 GMT
- Title: LATA: Laplacian-Assisted Transductive Adaptation for Conformal Uncertainty in Medical VLMs
- Authors: Behzad Bozorgtabar, Dwarikanath Mahapatra, Sudipta Roy, Muzammal Naseer, Imran Razzak, Zongyuan Ge
- Abstract summary: Medical vision-language models (VLMs) are strong zero-shot recognizers for medical imaging. We propose LATA (Laplacian-Assisted Transductive Adaptation), a training- and label-free refinement. LATA sharpens zero-shot predictions without compromising exchangeability.
- Score: 61.06744611795341
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical vision-language models (VLMs) are strong zero-shot recognizers for medical imaging, but their reliability under domain shift hinges on calibrated uncertainty with guarantees. Split conformal prediction (SCP) offers finite-sample coverage, yet prediction sets often become large (low efficiency) and class-wise coverage becomes unbalanced, i.e., a high class-conditioned coverage gap (CCV), especially in few-shot, imbalanced regimes; moreover, naively adapting to calibration labels breaks exchangeability and voids guarantees. We propose LATA (Laplacian-Assisted Transductive Adaptation), a training- and label-free refinement that operates on the joint calibration and test pool by smoothing zero-shot probabilities over an image-image k-NN graph using a small number of CCCP mean-field updates, preserving SCP validity via a deterministic transform. We further introduce a failure-aware conformal score that plugs into the vision-language uncertainty (ViLU) framework, providing instance-level difficulty and label plausibility to improve prediction set efficiency and class-wise balance at fixed coverage. LATA is black-box (no VLM updates), compute-light (windowed transduction, no backprop), and includes an optional prior knob that can run strictly label-free or, if desired, in a label-informed variant that uses calibration marginals once. Across three medical VLMs and nine downstream tasks, LATA consistently reduces set size and CCV while matching or tightening target coverage, outperforming prior transductive baselines and narrowing the gap to label-using methods, while using far less compute. Comprehensive ablations and qualitative analyses show that LATA sharpens zero-shot predictions without compromising exchangeability.
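The pipeline the abstract describes (smooth zero-shot probabilities over an image-image k-NN graph with a few mean-field updates, then run split conformal prediction on the refined probabilities) can be sketched in a few dozen lines. The code below is a minimal illustration rather than the authors' implementation: the cosine-similarity graph, the coupling strength lam, the number of updates, and the plain 1 - p_y conformal score are all assumptions on my part (the paper uses a failure-aware, ViLU-based score).

```python
import numpy as np

def knn_affinity(features, k=10):
    """Row-normalized cosine-similarity k-NN affinity over the joint calib+test pool."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)                      # no self-loops
    W = np.zeros_like(sim)
    nbrs = np.argsort(-sim, axis=1)[:, :k]              # top-k neighbours per image
    rows = np.repeat(np.arange(len(f)), k)
    W[rows, nbrs.ravel()] = np.clip(sim[rows, nbrs.ravel()], 0.0, None)
    return W / np.clip(W.sum(axis=1, keepdims=True), 1e-12, None)

def mean_field_smooth(zero_shot_probs, W, lam=1.0, n_iters=3):
    """A few mean-field updates for a Potts-style pairwise model:
    q_i(y) proportional to p_i(y) * exp(lam * sum_j W_ij q_j(y))."""
    log_p = np.log(np.clip(zero_shot_probs, 1e-12, None))
    q = zero_shot_probs.copy()
    for _ in range(n_iters):
        logits = log_p + lam * (W @ q)
        logits -= logits.max(axis=1, keepdims=True)     # numerical stability
        q = np.exp(logits)
        q /= q.sum(axis=1, keepdims=True)
    return q

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Standard SCP with an illustrative 1 - p_y nonconformity score."""
    n = len(cal_labels)
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
    qhat = np.quantile(cal_scores, level, method="higher")
    return [np.flatnonzero(1.0 - p <= qhat) for p in test_probs]
```

In a LATA-style use, the graph and the smoothing would be computed once over the joint calibration-plus-test pool, so the exact same deterministic transform is applied to both splits, which is the property that keeps exchangeability (and hence the SCP guarantee) intact.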
Related papers
- Coverage Guarantees for Pseudo-Calibrated Conformal Prediction under Distribution Shift [1.5861469511290378]
Conformal prediction offers marginal coverage guarantees, but these can degrade if the data distribution shifts. We analyze the use of pseudo-calibration as a tool to counter this performance loss. We propose a source-tuned pseudo-calibration algorithm that interpolates between hard pseudo-labels and randomized labels.
arXiv Detail & Related papers (2026-02-16T16:48:39Z) - Conditional Coverage Diagnostics for Conformal Prediction [47.93989136542648]
We show that conditional coverage estimation can be cast as a classification problem. We call the resulting family of metrics the excess risk of the target coverage (ERT). We release an open-source package for ERT as well as previous conditional coverage metrics.
arXiv Detail & Related papers (2025-12-12T18:47:39Z) - Enhancing CLIP Robustness via Cross-Modality Alignment [54.01929554563447]
We propose Cross-modality Alignment (COLA), an optimal transport-based framework for vision-language models. COLA restores global image-text alignment and local structural consistency in the feature space. COLA is training-free and compatible with existing fine-tuned models.
arXiv Detail & Related papers (2025-10-28T03:47:44Z) - Unsupervised Conformal Inference: Bootstrapping and Alignment to Control LLM Uncertainty [49.19257648205146]
We propose an unsupervised conformal inference framework for generation. Our gates achieve close-to-nominal coverage and provide tighter, more stable thresholds than split UCP. The result is a label-free, API-compatible gate for test-time filtering.
arXiv Detail & Related papers (2025-09-26T23:40:47Z) - COIN: Uncertainty-Guarding Selective Question Answering for Foundation Models with Provable Risk Guarantees [51.5976496056012]
COIN is an uncertainty-guarding selection framework that calibrates statistically valid thresholds to filter a single generated answer per question. COIN estimates the empirical error rate on a calibration set and applies confidence interval methods to establish a high-probability upper bound on the true error rate. We demonstrate COIN's robustness in risk control, strong test-time power in retaining admissible answers, and predictive efficiency under limited calibration data.
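The COIN blurb describes estimating the empirical error rate on a calibration set and turning it into a high-probability upper bound via confidence-interval methods. One standard way to do that is an exact binomial (Clopper-Pearson) bound; the sketch below uses that bound and a simple threshold search, both of which are my assumptions rather than COIN's exact construction.

```python
import numpy as np
from scipy.stats import beta

def error_rate_upper_bound(k_errors, n, delta=0.05):
    """Exact (Clopper-Pearson) 1 - delta upper confidence bound on a binomial error rate."""
    return 1.0 if k_errors >= n else float(beta.ppf(1.0 - delta, k_errors + 1, n - k_errors))

def pick_threshold(confidences, is_error, target_risk=0.1, delta=0.05):
    """Smallest confidence threshold whose bounded selective error rate meets target_risk.
    confidences, is_error: NumPy arrays over the calibration set. This simplified search
    ignores the multiplicity of candidate thresholds, which a rigorous procedure would control."""
    for t in np.sort(np.unique(confidences)):
        keep = confidences >= t
        n_kept = int(keep.sum())
        if n_kept == 0:
            break
        k = int(is_error[keep].sum())
        if error_rate_upper_bound(k, n_kept, delta) <= target_risk:
            return float(t)
    return None  # no threshold certifies the target risk on this calibration set
```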
arXiv Detail & Related papers (2025-06-25T07:04:49Z) - Trustworthy Few-Shot Transfer of Medical VLMs through Split Conformal Prediction [20.94974284175104]
Medical vision-language models (VLMs) have demonstrated unprecedented transfer capabilities and are being increasingly adopted for data-efficient image classification. This work explores the split conformal prediction (SCP) framework to provide trustworthiness guarantees when transferring such models. We propose transductive split conformal adaptation (SCA-T), a novel pipeline for transfer learning in conformal scenarios.
arXiv Detail & Related papers (2025-06-20T22:48:07Z) - Semi-Supervised Conformal Prediction With Unlabeled Nonconformity Score [19.15617038007535]
Conformal prediction (CP) is a powerful framework for uncertainty quantification. In real-world applications where labeled data is often limited, standard CP can lead to coverage deviation and output overly large prediction sets. We propose SemiCP, leveraging both labeled data and unlabeled data for calibration.
arXiv Detail & Related papers (2025-05-27T12:57:44Z) - Conformal Uncertainty Indicator for Continual Test-Time Adaptation [16.248749460383227]
We propose a Conformal Uncertainty Indicator (CUI) for Continual Test-Time Adaptation (CTTA). We leverage Conformal Prediction (CP) to generate prediction sets that include the true label with a specified coverage probability. Experiments confirm that CUI effectively estimates uncertainty and improves adaptation performance across various existing CTTA methods.
arXiv Detail & Related papers (2025-02-05T08:47:18Z) - Conformal Inductive Graph Neural Networks [58.450154976190795]
Conformal prediction (CP) transforms any model's output into prediction sets guaranteed to include (cover) the true label.
CP requires exchangeability, a relaxation of the i.i.d. assumption, to obtain a valid distribution-free coverage guarantee.
However, conventional CP cannot be applied in inductive settings due to the implicit shift in the (calibration) scores caused by message passing with the new nodes.
We prove that the guarantee holds independently of the prediction time, e.g. upon arrival of a new node/edge or at any subsequent moment.
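The guarantee referenced in this entry (prediction sets that cover the true label with probability at least 1 - alpha whenever calibration and test scores are exchangeable) is easy to check numerically; the exponential score distribution below is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_test, alpha = 200, 2000, 0.1

# Exchangeable nonconformity scores: calibration and test drawn from the same distribution
cal_scores = rng.exponential(size=n_cal)
test_scores = rng.exponential(size=n_test)

level = min(np.ceil((n_cal + 1) * (1 - alpha)) / n_cal, 1.0)   # finite-sample correction
qhat = np.quantile(cal_scores, level, method="higher")

# Fraction of test points whose score falls below the calibrated threshold: about 0.9 or more
print("empirical coverage:", np.mean(test_scores <= qhat))
```

The inductive-GNN entry's point is precisely that message passing with new nodes silently changes the calibration score distribution, so this exchangeability premise, and with it the guarantee, no longer holds without further care.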
arXiv Detail & Related papers (2024-07-12T11:12:49Z) - Approximate Conditional Coverage via Neural Model Approximations [0.030458514384586396]
We analyze a data-driven procedure for obtaining empirically reliable approximate conditional coverage.
We demonstrate the potential for substantial (and otherwise unknowable) under-coverage when using split-conformal alternatives that only provide marginal coverage guarantees.
arXiv Detail & Related papers (2022-05-28T02:59:05Z) - Distribution-free uncertainty quantification for classification under label shift [105.27463615756733]
We focus on uncertainty quantification (UQ) for classification problems via two avenues.
We first argue that label shift hurts UQ, by showing degradation in coverage and calibration.
We examine these techniques theoretically in a distribution-free framework and demonstrate their excellent practical performance.
arXiv Detail & Related papers (2021-03-04T20:51:03Z)
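The blurb above does not spell out the techniques it studies, but a standard remedy for label shift (offered here purely as an illustrative assumption, not necessarily this paper's exact method) is to reweight calibration scores by estimated class-prior ratios and use a weighted conformal quantile:

```python
import numpy as np

def weighted_quantile_threshold(scores, weights, w_test, alpha=0.1):
    """Smallest score at which the weighted CDF (with mass w_test held at +inf) reaches 1 - alpha."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cdf = np.cumsum(w) / (w.sum() + w_test)
    idx = np.searchsorted(cdf, 1.0 - alpha)
    return s[idx] if idx < len(s) else np.inf

def label_shift_prediction_set(test_probs, cal_scores, cal_labels, class_weights, alpha=0.1):
    """class_weights[c] = estimated target prior / source prior for class c (NumPy array).
    A candidate class y enters the set iff its score passes the quantile computed with w(y)."""
    cal_w = class_weights[cal_labels]
    pred_set = []
    for y, w_y in enumerate(class_weights):
        qhat = weighted_quantile_threshold(cal_scores, cal_w, w_y, alpha)
        if 1.0 - test_probs[y] <= qhat:
            pred_set.append(y)
    return pred_set
```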