Towards Human-AI Complementarity with Prediction Sets
- URL: http://arxiv.org/abs/2405.17544v1
- Date: Mon, 27 May 2024 18:00:00 GMT
- Title: Towards Human-AI Complementarity with Prediction Sets
- Authors: Giovanni De Toni, Nastaran Okati, Suhas Thejaswi, Eleni Straitouri, Manuel Gomez-Rodriguez
- Abstract summary: Decision support systems based on prediction sets have proven to be effective at helping human experts solve classification tasks.
We show that the prediction sets constructed using conformal prediction are, in general, suboptimal in terms of average accuracy.
We introduce a greedy algorithm that, for a large class of expert models and non-conformity scores, is guaranteed to find prediction sets that provably offer equal or greater performance.
- Score: 14.071862670474832
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision support systems based on prediction sets have proven to be effective at helping human experts solve classification tasks. Rather than providing single-label predictions, these systems provide sets of label predictions constructed using conformal prediction, namely prediction sets, and ask human experts to predict label values from these sets. In this paper, we first show that the prediction sets constructed using conformal prediction are, in general, suboptimal in terms of average accuracy. Then, we show that the problem of finding the optimal prediction sets under which the human experts achieve the highest average accuracy is NP-hard. More strongly, unless P = NP, we show that the problem is hard to approximate to any factor less than the size of the label set. However, we introduce a simple and efficient greedy algorithm that, for a large class of expert models and non-conformity scores, is guaranteed to find prediction sets that provably offer equal or greater performance than those constructed using conformal prediction. Further, using a simulation study with both synthetic and real expert predictions, we demonstrate that, in practice, our greedy algorithm finds near-optimal prediction sets offering greater performance than conformal prediction.
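As background for the abstract above, the prediction sets it refers to are typically built with split conformal prediction: calibrate a threshold on held-out data, then include every label whose non-conformity score falls below it. The following is a minimal sketch of that standard baseline, not the paper's greedy algorithm; the function name, the simple score s(x, y) = 1 - p_y, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the simple score s(x, y) = 1 - p_y.

    cal_probs:  (n, K) softmax probabilities on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) softmax probabilities for the test points
    Returns a list of m prediction sets (arrays of label indices).
    """
    n = len(cal_labels)
    # Non-conformity score of each calibration example: 1 - probability of true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # A label enters the set whenever its score does not exceed the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]

# Toy usage with synthetic probabilities (illustrative only).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(4), size=200)
cal_labels = rng.integers(0, 4, size=200)
test_probs = rng.dirichlet(np.ones(4), size=5)
sets = conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
```

With this construction the true label falls inside the set with probability at least 1 - alpha on average; the paper's point is that such sets, while valid, need not maximize the accuracy of a human expert choosing from them.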
Related papers
- Conformal Prediction Sets Improve Human Decision Making [5.151594941369301]
We study the usefulness of conformal prediction sets as an aid for human decision making.
We find that when humans are given conformal prediction sets their accuracy on tasks improves compared to fixed-size prediction sets with the same coverage guarantee.
arXiv Detail & Related papers (2024-01-24T19:01:22Z) - Evaluating the Utility of Conformal Prediction Sets for AI-Advised Image Labeling [14.009838333100612]
Conformal prediction generates prediction sets with a specified coverage level.
We compare the utility of conformal prediction sets to displays of Top-1 and Top-k predictions for AI-advised image labeling.
arXiv Detail & Related papers (2024-01-16T23:19:30Z) - PAC Prediction Sets Under Label Shift [52.30074177997787]
Prediction sets capture uncertainty by predicting sets of labels rather than individual labels.
We propose a novel algorithm for constructing prediction sets with PAC guarantees in the label shift setting.
We evaluate our approach on five datasets.
arXiv Detail & Related papers (2023-10-19T17:57:57Z) - Conformal Prediction for Deep Classifier via Label Ranking [29.784336674173616]
Conformal prediction is a statistical framework that generates prediction sets with a desired coverage guarantee.
We propose a novel algorithm named *Sorted Adaptive Prediction Sets* (SAPS).
SAPS discards all the probability values except for the maximum softmax probability.
arXiv Detail & Related papers (2023-10-10T08:54:14Z) - Designing Decision Support Systems Using Counterfactual Prediction Sets [15.121082690769525]
Decision support systems for classification tasks are predominantly designed to predict the value of the ground truth labels.
This paper revisits the design of this type of system from the perspective of online learning.
We develop a methodology that does not require, nor assumes, an expert model.
arXiv Detail & Related papers (2023-06-06T18:00:09Z) - Predictive Inference with Feature Conformal Prediction [80.77443423828315]
We propose feature conformal prediction, which extends the scope of conformal prediction to semantic feature spaces.
From a theoretical perspective, we demonstrate that feature conformal prediction provably outperforms regular conformal prediction under mild assumptions.
Our approach could be combined with not only vanilla conformal prediction, but also other adaptive conformal prediction methods.
arXiv Detail & Related papers (2022-10-01T02:57:37Z) - Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the function class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z) - Test-time Collective Prediction [73.74982509510961]
Multiple parties in a machine learning setting wish to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Optimized conformal classification using gradient descent approximation [0.2538209532048866]
Conformal predictors allow predictions to be made with a user-defined confidence level.
We consider an approach to train the conformal predictor directly with maximum predictive efficiency.
We test the method on several real world data sets and find that the method is promising.
arXiv Detail & Related papers (2021-05-24T13:14:41Z) - Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z) - AutoCP: Automated Pipelines for Accurate Prediction Intervals [84.16181066107984]
This paper proposes an AutoML framework called Automatic Machine Learning for Conformal Prediction (AutoCP).
Unlike the familiar AutoML frameworks that attempt to select the best prediction model, AutoCP constructs prediction intervals that achieve the user-specified target coverage rate.
We tested AutoCP on a variety of datasets and found that it significantly outperforms benchmark algorithms.
arXiv Detail & Related papers (2020-06-24T23:13:11Z)
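Several of the entries above (notably SAPS) build on rank-based adaptive scores. As a hedged illustration of that family, here is a minimal adaptive-prediction-set (APS-style) construction: labels are added in descending probability order until a target cumulative mass is reached. This sketches the standard adaptive idea, not SAPS itself; the function name and the toy probabilities are assumptions.

```python
import numpy as np

def aps_set(probs, q_hat):
    """Adaptive prediction set for one test point: include labels in
    descending-probability order until the cumulative mass reaches q_hat."""
    order = np.argsort(probs)[::-1]      # labels sorted by probability, high to low
    cum = np.cumsum(probs[order])        # running probability mass over that order
    k = np.searchsorted(cum, q_hat) + 1  # smallest prefix whose mass reaches q_hat
    return order[:min(k, len(probs))]

probs = np.array([0.55, 0.25, 0.12, 0.08])
s = aps_set(probs, q_hat=0.9)  # labels are added until cumulative mass >= 0.9
```

SAPS, per its summary above, departs from this by keeping only the maximum softmax probability and otherwise relying on label ranks, which makes the set less sensitive to miscalibrated tail probabilities.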
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.