Conformal Alignment: Knowing When to Trust Foundation Models with Guarantees
- URL: http://arxiv.org/abs/2405.10301v3
- Date: Tue, 05 Nov 2024 01:55:24 GMT
- Title: Conformal Alignment: Knowing When to Trust Foundation Models with Guarantees
- Authors: Yu Gui, Ying Jin, Zhimei Ren
- Abstract summary: In radiology report generation, reports generated by a vision-language model must align with human evaluations before their use in medical decision-making.
This paper presents Conformal Alignment, a general framework for identifying units whose outputs meet an alignment criterion.
It is guaranteed that on average, a prescribed fraction of selected units indeed meet the alignment criterion, regardless of the foundation model or the data distribution.
- Score: 5.348310708453905
- Abstract: Before deploying outputs from foundation models in high-stakes tasks, it is imperative to ensure that they align with human values. For instance, in radiology report generation, reports generated by a vision-language model must align with human evaluations before their use in medical decision-making. This paper presents Conformal Alignment, a general framework for identifying units whose outputs meet a user-specified alignment criterion. It is guaranteed that on average, a prescribed fraction of selected units indeed meet the alignment criterion, regardless of the foundation model or the data distribution. Given any pre-trained model and new units with model-generated outputs, Conformal Alignment leverages a set of reference data with ground-truth alignment status to train an alignment predictor. It then selects new units whose predicted alignment scores surpass a data-dependent threshold, certifying their corresponding outputs as trustworthy. Through applications to question answering and radiology report generation, we demonstrate that our method is able to accurately identify units with trustworthy outputs via lightweight training over a moderate amount of reference data. En route, we investigate the informativeness of various features in alignment prediction and combine them with standard models to construct the alignment predictor.
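The selection pipeline described in the abstract (train an alignment predictor on reference data with known alignment status, then certify the new units whose predicted scores pass a data-dependent threshold) can be sketched in code. The following is a minimal illustration only, assuming binary alignment labels, a logistic-regression alignment predictor, and a simplified conformal-p-value plus Benjamini-Hochberg selection rule in the spirit of FDR-controlled conformal selection; the paper's exact features, scores, and threshold construction may differ.

```python
# Hypothetical sketch of a Conformal-Alignment-style selection rule.
# Assumptions (not taken from the paper's text): alignment status is binary,
# the predictor is a logistic regression over precomputed features, and the
# data-dependent threshold comes from conformal p-values + Benjamini-Hochberg.
import numpy as np
from sklearn.linear_model import LogisticRegression

def conformal_alignment_select(X_train, a_train, X_calib, a_calib, X_test, alpha=0.1):
    """Return indices of test units certified as aligned at target level alpha."""
    # 1. Train an alignment predictor on reference data with ground-truth status.
    predictor = LogisticRegression(max_iter=1000).fit(X_train, a_train)
    s_calib = predictor.predict_proba(X_calib)[:, 1]   # predicted alignment scores
    s_test = predictor.predict_proba(X_test)[:, 1]

    # 2. Simplified conformal p-value for each test unit, computed against the
    #    *unaligned* calibration units: a small p-value means the test score is
    #    unusually high for an unaligned unit.
    s_null = s_calib[a_calib == 0]
    n0 = len(s_null)
    pvals = np.array([(1 + np.sum(s_null >= s)) / (n0 + 1) for s in s_test])

    # 3. Benjamini-Hochberg step: the largest k with p_(k) <= alpha * k / m
    #    determines the data-dependent score threshold.
    m = len(pvals)
    order = np.argsort(pvals)
    below = np.nonzero(pvals[order] <= alpha * np.arange(1, m + 1) / m)[0]
    if len(below) == 0:
        return np.array([], dtype=int)   # nothing can be certified as trustworthy
    k = below.max() + 1
    return order[:k]                     # indices of selected (trusted) test units
```

Under exchangeability of calibration and test units, a rule of this form targets E[#{selected unaligned units} / max(1, #{selected units})] <= alpha, which is one way to read the "prescribed fraction of selected units" guarantee stated in the abstract.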
Related papers
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z) - Stochastic Online Conformal Prediction with Semi-Bandit Feedback [29.334511328067777]
We consider the online learning setting, where examples arrive over time, and the goal is to construct prediction sets dynamically.
We propose a novel conformal prediction algorithm targeted at this setting, and prove that it obtains sublinear regret compared to the optimal conformal predictor.
arXiv Detail & Related papers (2024-05-22T00:42:49Z) - Confidence on the Focal: Conformal Prediction with Selection-Conditional Coverage [6.010965256037659]
Conformal prediction builds marginally valid prediction intervals that cover the unknown outcome of a randomly drawn new test point with a prescribed probability.
When test units are selected in a data-driven manner, however, marginally valid conformal prediction intervals may not provide valid coverage for the selected focal unit(s) due to selection bias.
This paper presents a general framework for constructing a prediction set with finite-sample exact coverage conditional on the unit being selected.
arXiv Detail & Related papers (2024-03-06T17:18:24Z) - Predicting generalization performance with correctness discriminators [64.00420578048855]
We present a novel model that establishes upper and lower bounds on the accuracy, without requiring gold labels for the unseen data.
We show across a variety of tagging, parsing, and semantic parsing tasks that the gold accuracy is reliably between the predicted upper and lower bounds.
arXiv Detail & Related papers (2023-11-15T22:43:42Z) - Estimating Uncertainty in Multimodal Foundation Models using Public Internet Data [15.365603519829088]
Foundation models are trained on vast amounts of data at scale using self-supervised learning.
In this paper, we address the problem of quantifying uncertainty in zero-shot predictions.
We propose an approach for uncertainty estimation in zero-shot settings using conformal prediction with web data.
arXiv Detail & Related papers (2023-10-15T19:24:52Z) - Robust Ordinal Regression for Subsets Comparisons with Interactions [2.6151761714896122]
This paper is dedicated to a robust ordinal method for learning the preferences of a decision maker between subsets.
The decision model, derived from Fishburn and LaValle, is general enough to be compatible with any strict weak order on subsets.
A predicted preference is considered reliable if all the simplest models (Occam's razor) explaining the preference data agree on it.
arXiv Detail & Related papers (2023-08-07T07:54:33Z) - Conformal Language Modeling [61.94417935386489]
We propose a novel approach to conformal prediction for generative language models (LMs).
Standard conformal prediction produces prediction sets with rigorous, statistical guarantees.
We demonstrate the promise of our approach on multiple tasks in open-domain question answering, text summarization, and radiology report generation.
arXiv Detail & Related papers (2023-06-16T21:55:08Z) - Robust Flow-based Conformal Inference (FCI) with Statistical Guarantee [4.821312633849745]
We develop a series of conformal inference methods, including building predictive sets and inferring outliers for complex and high-dimensional data.
We evaluate our method, robust flow-based conformal inference, on benchmark datasets.
arXiv Detail & Related papers (2022-05-22T04:17:30Z) - Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z) - Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline [94.0601799665342]
Aligning sentences in a reference summary with their counterparts in source documents was shown as a useful auxiliary summarization task.
We propose establishing summary-source alignment as an explicit task, while introducing two major novelties.
We create a novel training dataset for proposition-level alignment, derived automatically from available summarization evaluation data.
We present a supervised proposition alignment baseline model, showing improved alignment-quality over the unsupervised approach.
arXiv Detail & Related papers (2020-09-01T17:27:12Z) - Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.