Improving Expert Predictions with Conformal Prediction
- URL: http://arxiv.org/abs/2201.12006v5
- Date: Fri, 30 Jun 2023 13:33:32 GMT
- Title: Improving Expert Predictions with Conformal Prediction
- Authors: Eleni Straitouri and Lequn Wang and Nastaran Okati and Manuel Gomez
Rodriguez
- Abstract summary: existing systems typically require experts to understand when to cede agency to the system or when to exercise their own agency.
We develop an automated decision support system that helps experts make more accurate predictions and is robust to the accuracy of the classifier the conformal predictor relies on.
- Score: 14.850555720410677
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated decision support systems promise to help human experts solve
multiclass classification tasks more efficiently and accurately. However,
existing systems typically require experts to understand when to cede agency to
the system or when to exercise their own agency. Otherwise, the experts may be
better off solving the classification tasks on their own. In this work, we
develop an automated decision support system that, by design, does not require
experts to understand when to trust the system to improve performance. Rather
than providing (single) label predictions and letting experts decide when to
trust these predictions, our system provides sets of label predictions
constructed using conformal prediction (prediction sets) and forcefully asks
experts to predict labels from these sets. By using conformal prediction, our
system can precisely trade off the
probability that the true label is not in the prediction set, which determines
how frequently our system will mislead the experts, and the size of the
prediction set, which determines the difficulty of the classification task the
experts need to solve using our system. In addition, we develop an efficient
and near-optimal search method to find the conformal predictor under which the
experts benefit the most from using our system. Simulation experiments using
synthetic and real expert predictions demonstrate that our system may help
experts make more accurate predictions and is robust to the accuracy of the
classifier the conformal predictor relies on.
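The trade-off described in the abstract, between the miscoverage level (how often the true label is missing from the set, and hence how often the system misleads the expert) and the size of the prediction set (how hard the expert's remaining task is), can be made concrete with a short sketch. The code below is a minimal illustration assuming a standard split conformal predictor and a toy simulated expert; the classifier probabilities, the expert model, and all names such as `conformal_threshold` and `expert_skill` are illustrative assumptions rather than the authors' implementation, and the plain grid search merely stands in for the paper's near-optimal search method.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_threshold(cal_probs, cal_labels, alpha):
    """Split conformal calibration: quantile of the scores 1 - p(true label)."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def prediction_set(probs, q_hat):
    """Every label whose non-conformity score 1 - p(label) falls below the threshold."""
    return np.where(1.0 - probs <= q_hat)[0]

def simulated_expert_accuracy(test_probs, test_labels, q_hat, expert_skill=0.7):
    """Toy expert model (an assumption, for illustration): picks the true label
    with probability expert_skill when it is in the set, otherwise guesses
    uniformly from the set."""
    correct = 0
    for probs, y in zip(test_probs, test_labels):
        s = prediction_set(probs, q_hat)
        if len(s) == 0:
            continue
        if y in s and rng.random() < expert_skill:
            correct += 1
        elif rng.choice(s) == y:
            correct += 1
    return correct / len(test_labels)

def best_alpha(cal_probs, cal_labels, test_probs, test_labels,
               grid=np.linspace(0.01, 0.5, 25)):
    """Grid search over the miscoverage level alpha (a stand-in for the paper's
    near-optimal search). Small alpha -> large sets that rarely miss the true
    label but are harder to choose from; large alpha -> small, easy sets that
    mislead the expert more often."""
    return max(
        (simulated_expert_accuracy(
            test_probs, test_labels,
            conformal_threshold(cal_probs, cal_labels, a)), a)
        for a in grid
    )  # returns (estimated expert accuracy, alpha)
```

Calling `best_alpha` on held-out classifier probabilities and labels returns the miscoverage level under which the simulated expert benefits most, mirroring the kind of search over conformal predictors described in the abstract.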
Related papers
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z) - Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets [14.478233576808876]
In decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.
We show that, under a natural, albeit unverifiable, monotonicity assumption, we can estimate how frequently a system may cause harm using only predictions made by humans on their own.
We also show that, under a weaker assumption, which can be verified, we can bound how frequently a system may cause harm again using only predictions made by humans on their own.
arXiv Detail & Related papers (2024-06-10T18:00:00Z) - Towards Human-AI Complementarity with Prediction Sets [14.071862670474832]
Decision support systems based on prediction sets have proven to be effective at helping human experts solve classification tasks.
We show that the prediction sets constructed using conformal prediction are, in general, suboptimal in terms of average accuracy.
We introduce a greedy algorithm that, for a large class of expert models and non-conformity scores, is guaranteed to find prediction sets that provably offer equal or greater performance.
arXiv Detail & Related papers (2024-05-27T18:00:00Z) - Designing Decision Support Systems Using Counterfactual Prediction Sets [15.121082690769525]
Decision support systems for classification tasks are predominantly designed to predict the value of the ground truth labels.
This paper revisits the design of this type of system from the perspective of online learning.
We develop a methodology that does not require, nor assumes, an expert model.
arXiv Detail & Related papers (2023-06-06T18:00:09Z) - What Should I Know? Using Meta-gradient Descent for Predictive Feature
Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z) - Test-time Collective Prediction [73.74982509510961]
In machine learning, multiple parties often want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Towards Unbiased and Accurate Deferral to Multiple Experts [19.24068936057053]
We propose a framework that simultaneously learns a classifier and a deferral system, with the deferral system choosing to defer to one or more human experts.
We test our framework on a synthetic dataset and a content moderation dataset with biased synthetic experts, and show that it significantly improves the accuracy and fairness of the final predictions.
arXiv Detail & Related papers (2021-02-25T17:08:39Z) - Distribution-Free, Risk-Controlling Prediction Sets [112.9186453405701]
We show how to generate set-valued predictions from a black-box predictor that control the expected loss on future test points at a user-specified level.
Our approach provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets.
arXiv Detail & Related papers (2021-01-07T18:59:33Z) - Right Decisions from Wrong Predictions: A Mechanism Design Alternative
to Individual Calibration [107.15813002403905]
Decision makers often need to rely on imperfect probabilistic forecasts.
We propose a compensation mechanism ensuring that the forecasted utility matches the actually accrued utility.
We demonstrate an application showing how passengers could confidently optimize individual travel plans based on flight delay probabilities.
arXiv Detail & Related papers (2020-11-15T08:22:39Z) - Malicious Experts versus the multiplicative weights algorithm in online
prediction [85.62472761361107]
We consider a prediction problem with two experts and a forecaster.
We assume that one of the experts is honest and makes a correct prediction with probability $\mu$ at each round.
The other is malicious: it knows the true outcome at each round and makes predictions in order to maximize the forecaster's loss. A generic sketch of the multiplicative weights update for this setting appears below the list of related papers.
arXiv Detail & Related papers (2020-03-18T20:12:08Z)
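For the last entry above, the sketch below shows the standard multiplicative weights (weighted majority) update in a two-expert binary prediction setting. The simulation of the honest and malicious experts is an illustrative assumption: the malicious expert here is simply always wrong, whereas the adversary analyzed in the paper is adaptive and strategic, and the paper studies worst-case loss rather than this toy run.

```python
import numpy as np

rng = np.random.default_rng(1)

def multiplicative_weights(expert_predictions, outcomes, eta=0.1):
    """Binary prediction with expert advice: predict with the weighted majority
    and multiply down the weight of every expert that errs."""
    weights = np.ones(expert_predictions.shape[1])
    mistakes = 0
    for preds, y in zip(expert_predictions, outcomes):
        forecast = int(weights @ preds >= weights.sum() / 2)  # weighted majority over {0, 1}
        mistakes += int(forecast != y)
        weights *= np.where(preds != y, 1.0 - eta, 1.0)       # multiplicative update
    return mistakes

# Toy run (illustrative assumptions): the honest expert is correct w.p. mu; the
# "malicious" expert is simply always wrong here.
T, mu = 1000, 0.8
outcomes = rng.integers(0, 2, size=T)
honest = np.where(rng.random(T) < mu, outcomes, 1 - outcomes)
malicious = 1 - outcomes
print(multiplicative_weights(np.stack([honest, malicious], axis=1), outcomes))
```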