Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets
- URL: http://arxiv.org/abs/2406.06671v1
- Date: Mon, 10 Jun 2024 18:00:00 GMT
- Title: Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets
- Authors: Eleni Straitouri, Suhas Thejaswi, Manuel Gomez Rodriguez
- Abstract summary: In decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.
We show that, under a natural, albeit unverifiable, monotonicity assumption, we can estimate how frequently a system may cause harm using only predictions made by humans on their own.
We also show that, under a weaker monotonicity assumption, which can be verified experimentally, we can bound how frequently a system may cause harm, again using only predictions made by humans on their own.
- Score: 14.478233576808876
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision support systems based on prediction sets help humans solve multiclass classification tasks by narrowing down the set of potential label values to a subset of them, namely a prediction set, and asking them to always predict label values from the prediction sets. While this type of system has been proven effective at improving the average accuracy of the predictions made by humans, by restricting human agency it may cause harm: a human who has succeeded at predicting the ground-truth label of an instance on their own may have failed had they used these systems. In this paper, our goal is to control how frequently a decision support system based on prediction sets may cause harm, by design. To this end, we start by characterizing the above notion of harm using the theoretical framework of structural causal models. Then, we show that, under a natural, albeit unverifiable, monotonicity assumption, we can estimate how frequently a system may cause harm using only predictions made by humans on their own. Further, we also show that, under a weaker monotonicity assumption, which can be verified experimentally, we can bound how frequently a system may cause harm, again using only predictions made by humans on their own. Building upon these assumptions, we introduce a computational framework to design decision support systems based on prediction sets that are guaranteed to cause harm less frequently than a user-specified value, using conformal risk control. We validate our framework using real human predictions from two different human subject studies and show that, in decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.
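The recipe the abstract describes can be made concrete with a small amount of code. The sketch below is illustrative rather than the authors' implementation: the thresholded-softmax set construction, the grid search, and every name in it (`harm_loss`, `calibrate_lambda`) are assumptions. It encodes the monotonicity-based harm proxy from the abstract (the system can only have harmed an instance if the human was correct on their own and the set dropped the true label) and picks the smallest set-size parameter whose finite-sample-adjusted empirical harm stays below the user-specified level, in the style of conformal risk control.

```python
import numpy as np

def harm_loss(lam, softmax_row, y_true, human_correct):
    """Harm proxy for one calibration instance: 1 if the human alone was
    correct but the set C_lam(x) = {k : p_k >= 1 - lam} excludes the true
    label, else 0. Monotone non-increasing in lam (larger lam, larger set),
    which is what conformal risk control requires."""
    true_label_in_set = softmax_row[y_true] >= 1.0 - lam
    return float(human_correct and not true_label_in_set)

def calibrate_lambda(softmax_cal, y_cal, human_correct_cal, alpha):
    """Smallest lam on a grid whose adjusted empirical harm meets the
    conformal-risk-control condition (n * risk + B) / (n + 1) <= alpha,
    with the loss bounded by B = 1."""
    n = len(y_cal)
    for lam in np.linspace(0.0, 1.0, 1001):
        risk = np.mean([harm_loss(lam, p, y, h) for p, y, h
                        in zip(softmax_cal, y_cal, human_correct_cal)])
        if (n * risk + 1.0) / (n + 1.0) <= alpha:
            return lam
    return 1.0  # fallback: the full label set never triggers this loss

# Hypothetical usage, with P_cal a softmax matrix and h_cal indicators of
# whether each human was correct unaided:
#   lam_hat = calibrate_lambda(P_cal, y_cal, h_cal, alpha=0.05)
#   test_set = np.where(P_test_row >= 1.0 - lam_hat)[0]
```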
Related papers
- Conformal Prediction Sets Can Cause Disparate Impact [4.61590049339329]
Conformal prediction is a promising method for quantifying the uncertainty of machine learning models.
We show that providing prediction sets can increase the unfairness of the resulting decisions.
Instead of equalizing coverage, we propose to equalize set sizes across groups, which empirically leads to fairer outcomes (see the sketch below this entry).
arXiv Detail & Related papers (2024-10-02T18:00:01Z)
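As a hedged sketch of how the size-equalization idea above could look in code: per-group thresholds are tuned on calibration data so that each group's average set size matches a shared target. The thresholded-softmax sets and all names here are assumptions, not the paper's construction.

```python
import numpy as np

def mean_set_size(softmax_rows, q):
    """Average size of the sets {k : p_k >= 1 - q}; non-decreasing in q."""
    return np.mean([(row >= 1.0 - q).sum() for row in softmax_rows])

def equalize_set_sizes(softmax_by_group, target_size):
    """Per-group threshold q_g whose average calibration set size is as
    close as possible to a shared target, so sizes match across groups."""
    grid = np.linspace(0.0, 1.0, 1001)
    return {group: grid[int(np.argmin(
                [abs(mean_set_size(rows, q) - target_size) for q in grid]))]
            for group, rows in softmax_by_group.items()}
```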
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z)
- Towards Human-AI Complementarity with Prediction Sets [14.071862670474832]
Decision support systems based on prediction sets have proven to be effective at helping human experts solve classification tasks.
We show that the prediction sets constructed using conformal prediction are, in general, suboptimal in terms of average accuracy.
We introduce a greedy algorithm that, for a large class of expert models and non-conformity scores, is guaranteed to find prediction sets that provably offer equal or greater performance.
arXiv Detail & Related papers (2024-05-27T18:00:00Z)
- Conformal Prediction Sets Improve Human Decision Making [5.151594941369301]
We study the usefulness of conformal prediction sets as an aid for human decision making.
We find that when humans are given conformal prediction sets, their accuracy on tasks improves compared to fixed-size prediction sets with the same coverage guarantee (a standard construction is sketched below this entry).
arXiv Detail & Related papers (2024-01-24T19:01:22Z)
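Since several entries in this list hand conformal prediction sets to human decision makers, the standard split conformal construction is worth sketching once. This is the textbook recipe, not code from the paper above; the 1 - softmax nonconformity score is one common choice among many.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of the calibration
    nonconformity scores, so that coverage holds marginally."""
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1.0 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, level, method="higher")

def prediction_set(softmax_row, qhat):
    """All labels whose nonconformity score 1 - p_k is at most qhat; the
    set contains the true label with probability >= 1 - alpha."""
    return np.where(1.0 - softmax_row <= qhat)[0]

# Calibration uses the score of the true label on held-out labeled data:
#   qhat = conformal_quantile(1.0 - P_cal[np.arange(len(y_cal)), y_cal],
#                             alpha=0.1)
```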
- Designing Decision Support Systems Using Counterfactual Prediction Sets [15.121082690769525]
Decision support systems for classification tasks are predominantly designed to predict the value of the ground truth labels.
This paper revisits the design of this type of system from the perspective of online learning.
We develop a methodology that does not require, nor assumes, an expert model.
arXiv Detail & Related papers (2023-06-06T18:00:09Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining, from the infinitely many predictions the agent could possibly make, which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Improving Expert Predictions with Conformal Prediction [14.850555720410677]
Existing systems typically require experts to understand when to cede agency to the system or when to exercise their own agency.
We develop an automated decision support system that allows experts to make more accurate predictions and is robust to the accuracy of the predictor it relies on.
arXiv Detail & Related papers (2022-01-28T09:35:37Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
- Right Decisions from Wrong Predictions: A Mechanism Design Alternative to Individual Calibration [107.15813002403905]
Decision makers often need to rely on imperfect probabilistic forecasts.
We propose a compensation mechanism ensuring that the forecasted utility matches the actually accrued utility.
We demonstrate an application showing how passengers could confidently optimize individual travel plans based on flight delay probabilities.
arXiv Detail & Related papers (2020-11-15T08:22:39Z)
- When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, though they should be used with caution, taking into account the type of distribution and the expertise of the human.
arXiv Detail & Related papers (2020-11-12T02:23:53Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data, but some of them cannot be used by the prediction model at runtime.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)