Conformal Prediction and Human Decision Making
- URL: http://arxiv.org/abs/2503.11709v2
- Date: Tue, 18 Mar 2025 16:16:17 GMT
- Title: Conformal Prediction and Human Decision Making
- Authors: Jessica Hullman, Yifan Wu, Dawei Xie, Ziyang Guo, Andrew Gelman
- Abstract summary: Methods to quantify uncertainty in predictions from arbitrary models are in demand in high-stakes domains like medicine and finance. Conformal prediction has emerged as a popular method for producing a set of predictions with specified average coverage. However, the value of conformal prediction sets to assist human decisions remains elusive due to the murky relationship between coverage guarantees and decision makers' goals and strategies.
- Score: 24.565425060007474
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Methods to quantify uncertainty in predictions from arbitrary models are in demand in high-stakes domains like medicine and finance. Conformal prediction has emerged as a popular method for producing a set of predictions with specified average coverage, in place of a single prediction and confidence value. However, the value of conformal prediction sets to assist human decisions remains elusive due to the murky relationship between coverage guarantees and decision makers' goals and strategies. How should we think about conformal prediction sets as a form of decision support? We outline a decision theoretic framework for evaluating predictive uncertainty as informative signals, then contrast what can be said within this framework about idealized use of calibrated probabilities versus conformal prediction sets. Informed by prior empirical results and theories of human decisions under uncertainty, we formalize a set of possible strategies by which a decision maker might use a prediction set. We identify ways in which conformal prediction sets and posthoc predictive uncertainty quantification more broadly are in tension with common goals and needs in human-AI decision making. We give recommendations for future research in predictive uncertainty quantification to support human decision makers.
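To make the central object concrete, here is a minimal split conformal prediction sketch for classification. Everything below (function names, the score based on the true-class probability, the synthetic data) is an illustrative assumption, not code from the paper.

```python
# A minimal split conformal prediction sketch for classification,
# assuming a fitted model that outputs class probabilities.
import numpy as np

def conformal_prediction_sets(probs_cal, y_cal, probs_test, alpha=0.1):
    """Build prediction sets with ~(1 - alpha) marginal coverage.

    probs_cal:  (n_cal, K) predicted probabilities on a calibration split.
    y_cal:      (n_cal,) true labels for the calibration split.
    probs_test: (n_test, K) predicted probabilities on new inputs.
    """
    n = len(y_cal)
    # Nonconformity score: one minus the probability of the true class.
    scores = 1.0 - probs_cal[np.arange(n), y_cal]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, level, method="higher")
    # Include every class whose score falls below the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in probs_test]

# Usage on synthetic probabilities:
rng = np.random.default_rng(0)
probs_cal = rng.dirichlet(np.ones(3), size=500)
y_cal = rng.integers(0, 3, size=500)
probs_test = rng.dirichlet(np.ones(3), size=5)
for s in conformal_prediction_sets(probs_cal, y_cal, probs_test):
    print(s)
```

The finite-sample quantile correction is what delivers the guarantee discussed in the abstract: on average over repeated draws, the set contains the true label with probability at least 1 - alpha.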
Related papers
- Truthful Elicitation of Imprecise Forecasts [11.153198087930756]
We propose a framework for scoring imprecise forecasts -- forecasts given as a set of beliefs.
We show that truthful elicitation of imprecise forecasts is achievable using proper scoring rules randomized over the aggregation procedure.
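As a hedged illustration of what "proper" buys (a toy of mine, not the paper's randomized aggregation scheme): under the Brier score, a forecaster whose belief is p minimizes expected penalty by reporting exactly p.

```python
# Toy demonstration that the Brier score is a proper scoring rule.
import numpy as np

def brier(report, outcome):
    # Quadratic penalty for a probability forecast of a binary event.
    return (report - outcome) ** 2

def expected_brier(report, belief):
    # Expected penalty when the event occurs with probability `belief`.
    return belief * brier(report, 1) + (1 - belief) * brier(report, 0)

belief = 0.7
reports = np.linspace(0, 1, 101)
best = reports[np.argmin([expected_brier(r, belief) for r in reports])]
print(best)  # ~0.7: truthful reporting minimizes the expected penalty
```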
arXiv Detail & Related papers (2025-03-20T17:53:35Z)
- Bin-Conditional Conformal Prediction of Fatalities from Armed Conflict [0.5312303275762104]
We introduce bin-conditional conformal prediction (BCCP), which enhances standard conformal prediction by ensuring consistent coverage rates across user-defined subsets. Compared to standard conformal prediction, BCCP offers improved local coverage, though this comes at the cost of slightly wider prediction intervals.
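A rough sketch of the bin-conditional idea, under my own simplifying assumptions (binning on the model's predictions, which need not match the paper's exact construction):

```python
# Per-bin conformal quantiles for regression intervals (illustrative).
import numpy as np

def bccp_intervals(pred_cal, y_cal, pred_test, bin_edges, alpha=0.1):
    scores = np.abs(y_cal - pred_cal)              # nonconformity scores
    cal_bins = np.digitize(pred_cal, bin_edges)    # user-defined bins
    qhat = {}
    for b in np.unique(cal_bins):
        s = scores[cal_bins == b]
        level = min(np.ceil((len(s) + 1) * (1 - alpha)) / len(s), 1.0)
        qhat[b] = np.quantile(s, level, method="higher")
    test_bins = np.digitize(pred_test, bin_edges)
    radii = np.array([qhat.get(b, max(qhat.values())) for b in test_bins])
    return pred_test - radii, pred_test + radii

# Synthetic usage: heteroscedastic noise makes per-bin widths differ.
rng = np.random.default_rng(1)
pred_cal = rng.uniform(0, 10, 2000)
y_cal = pred_cal + rng.normal(0, 0.2 + 0.3 * pred_cal)
lo, hi = bccp_intervals(pred_cal, y_cal, np.array([1.0, 5.0, 9.0]),
                        bin_edges=[2.5, 5.0, 7.5])
print(np.c_[lo, hi])  # wider intervals where the noise is larger
```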
arXiv Detail & Related papers (2024-10-18T14:41:42Z)
- Self-Calibrating Conformal Prediction [16.606421967131524]
We introduce Self-Calibrating Conformal Prediction to deliver calibrated point predictions alongside prediction intervals with finite-sample validity conditional on these predictions.
We show that our method improves calibrated interval efficiency through model calibration and offers a practical alternative to feature-conditional validity.
arXiv Detail & Related papers (2024-02-11T21:12:21Z)
- Conformal Prediction Sets Improve Human Decision Making [5.151594941369301]
We study the usefulness of conformal prediction sets as an aid for human decision making.
We find that when humans are given conformal prediction sets, their accuracy on tasks improves compared to fixed-size prediction sets with the same coverage guarantee.
arXiv Detail & Related papers (2024-01-24T19:01:22Z)
- Robust Design and Evaluation of Predictive Algorithms under Unobserved Confounding [2.8498944632323755]
We propose a unified framework for the robust design and evaluation of predictive algorithms in selectively observed data.
We impose general assumptions on how much the outcome may vary on average between unselected and selected units.
We develop debiased machine learning estimators for the bounds on a large class of predictive performance estimands.
arXiv Detail & Related papers (2022-12-19T20:41:44Z)
- Making Decisions under Outcome Performativity [9.962472413291803]
We introduce a new optimality concept -- performative omniprediction.
A performative omnipredictor is a single predictor that simultaneously encodes the optimal decision rule with respect to many possibly competing objectives.
We show that efficient performative omnipredictors exist, under a natural restriction of performative prediction.
arXiv Detail & Related papers (2022-10-04T17:04:47Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting using Bayesian approximation, capturing uncertainty that deterministic approaches fail to model. The effects of dropout weights and long-term prediction on future state uncertainty are studied.
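A minimal Monte Carlo dropout sketch in the spirit of this approach; the architecture, sizes, and names below are placeholders, not the authors' model:

```python
# Monte Carlo dropout: keep dropout active at inference and sample
# multiple forward passes to get a mean prediction and an uncertainty.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 2),              # e.g. a future (x, y) position
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                  # keep dropout stochastic at test time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(8, 4)              # a batch of 8 input states
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)       # torch.Size([8, 2]) twice
```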
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
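For the ensemble-based family, a hedged sketch of how disagreement across independently initialized networks is read as uncertainty (untrained toy networks, purely illustrative):

```python
# Deep-ensemble uncertainty: std across members measures disagreement.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

ensemble = [make_net() for _ in range(5)]   # independent initializations

def ensemble_predict(nets, x):
    with torch.no_grad():
        preds = torch.stack([net(x) for net in nets])
    # Mean across members is the point prediction; std is the
    # disagreement-based uncertainty estimate.
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(8, 4)
mean, std = ensemble_predict(ensemble, x)
print(mean.shape, std.shape)                # torch.Size([8, 1]) twice
```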
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
- When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can potentially serve as useful decision aids, but they should be used with caution, taking into account the type of distribution and the expertise of the human.
arXiv Detail & Related papers (2020-11-12T02:23:53Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data, but some of them cannot be used by the prediction model at runtime.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
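For flavor, a textbook doubly-robust (AIPW) estimate of a counterfactual mean; this is a generic construction under assumed nuisance models, not the paper's runtime-confounding procedure:

```python
# Augmented inverse-propensity-weighted (AIPW) estimate of E[Y(1)].
import numpy as np

def aipw_mean(y, a, propensity, outcome_pred):
    """Consistent if either the propensity model or the outcome model
    is correctly specified (hence "doubly robust")."""
    return np.mean(outcome_pred + a * (y - outcome_pred) / propensity)

rng = np.random.default_rng(2)
n = 10_000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))                    # true treatment propensity
a = rng.binomial(1, p)
y = 1.0 + 2.0 * x + rng.normal(size=n)      # outcome (same under A=0/1 here)
print(aipw_mean(y, a, propensity=p, outcome_pred=1.0 + 2.0 * x))  # ~1.0
```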
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
- Fast, Optimal, and Targeted Predictions using Parametrized Decision Analysis [0.0]
We develop a class of parametrized actions for Bayesian decision analysis that produce optimal, scalable, and simple targeted predictions.
Predictions are constructed for physical activity data from the National Health and Nutrition Examination Survey.
arXiv Detail & Related papers (2020-06-23T15:55:47Z)