Designing Decision Support Systems Using Counterfactual Prediction Sets
- URL: http://arxiv.org/abs/2306.03928v3
- Date: Tue, 16 Jul 2024 16:52:02 GMT
- Title: Designing Decision Support Systems Using Counterfactual Prediction Sets
- Authors: Eleni Straitouri, Manuel Gomez Rodriguez
- Abstract summary: Decision support systems for classification tasks are predominantly designed to predict the value of the ground truth labels.
This paper revisits the design of this type of system from the perspective of online learning.
We develop a methodology that neither requires nor assumes an expert model.
- Score: 15.121082690769525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision support systems for classification tasks are predominantly designed to predict the value of the ground truth labels. However, since their predictions are not perfect, these systems also need to make human experts understand when and how to use these predictions to update their own predictions. Unfortunately, this has proven challenging. In this context, it has recently been argued that an alternative type of decision support system may circumvent this challenge. Rather than providing a single label prediction, these systems provide a set of label prediction values constructed using a conformal predictor, namely a prediction set, and forcefully ask experts to predict a label value from the prediction set. However, the design and evaluation of these systems have so far relied on stylized expert models, calling their promise into question. In this paper, we revisit the design of this type of system from the perspective of online learning and develop a methodology that neither requires nor assumes an expert model. Our methodology leverages the nested structure of the prediction sets provided by any conformal predictor and a natural counterfactual monotonicity assumption to achieve an exponential improvement in regret in comparison to vanilla bandit algorithms. We conduct a large-scale human subject study ($n = 2{,}751$) to compare our methodology to several competitive baselines. The results show that, for decision support systems based on prediction sets, limiting experts' level of agency leads to greater performance than allowing experts to always exercise their own agency. We have made available the data gathered in our human subject study as well as an open source implementation of our system at https://github.com/Networks-Learning/counterfactual-prediction-sets.
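To make the mechanism concrete, here is a minimal Python sketch (simulated classifier scores, a stand-in true label, and a hypothetical observed expert outcome; not the paper's algorithm) of the two ingredients the abstract relies on: split conformal prediction yields prediction sets that are nested across miscoverage levels α, and counterfactual monotonicity lets a single observed expert outcome at one α supply counterfactual feedback for many other values of α at once.

```python
# Sketch: nested conformal prediction sets over a grid of miscoverage
# levels, and how counterfactual monotonicity lets one expert outcome
# label many "arms" at once. All data below is simulated.
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_labels = 1000, 10

# Calibration data: softmax scores from some classifier (simulated).
cal_scores = rng.dirichlet(np.ones(n_labels), size=n_cal)
cal_labels = rng.integers(0, n_labels, size=n_cal)

alphas = np.linspace(0.05, 0.5, 10)            # arms = miscoverage levels

# Split-conformal nonconformity: 1 - score of the true label.
sorted_nonconf = np.sort(1.0 - cal_scores[np.arange(n_cal), cal_labels])

def prediction_set(scores, alpha):
    # Finite-sample-corrected empirical quantile as the threshold.
    k = int(np.ceil((n_cal + 1) * (1 - alpha)))
    thr = sorted_nonconf[min(k, n_cal) - 1]
    return set(np.flatnonzero(1.0 - scores <= thr))

x_scores = rng.dirichlet(np.ones(n_labels))
sets = [prediction_set(x_scores, a) for a in alphas]
# Nestedness: larger alpha -> smaller (or equal) prediction set.
assert all(sets[i + 1] <= sets[i] for i in range(len(sets) - 1))

# Counterfactual monotonicity (informally): if the expert succeeds with
# sets[j], they would also succeed with any smaller set that still
# contains the true label -- so one pull updates many arms at once.
y = int(np.argmax(x_scores))                   # stand-in for the truth
j, expert_correct = 3, True                    # simulated observed outcome
if expert_correct:
    also_correct = [i for i in range(len(alphas))
                    if i >= j and y in sets[i]]
    print("arms with inferred feedback:", also_correct)
```

Sharing feedback across the nested arms in this way, rather than treating each α as an independent bandit arm, is what underpins the regret improvement claimed in the abstract.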
Related papers
- Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets [14.478233576808876]
In decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.
We show that under a natural, unverifiable, monotonicity assumption, we can estimate how frequently a system may cause harm using predictions made by humans on their own.
We also show that, under a weaker assumption, which can be verified, we can bound how frequently a system may cause harm again using only predictions made by humans on their own.
arXiv Detail & Related papers (2024-06-10T18:00:00Z)
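As a toy illustration of the estimate in the entry above: under the monotonicity assumption, the system can harm the expert only on instances where the expert alone was correct but the prediction set misses the true label, so a conservative harm estimate needs only human-alone outcomes and set coverage. The rates below are made up.

```python
# Sketch of a conservative counterfactual-harm estimate: count the
# instances where the expert alone was right but the prediction set
# excluded the true label. Fully simulated data, hypothetical rates.
import numpy as np

rng = np.random.default_rng(1)
n = 500
human_alone_correct = rng.random(n) < 0.7      # expert's solo accuracy
true_label_in_set = rng.random(n) < 0.9        # coverage ~ 1 - alpha

harm_estimate = np.mean(human_alone_correct & ~true_label_in_set)
print(f"estimated harm frequency <= {harm_estimate:.3f}")
```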
- Towards Human-AI Complementarity with Prediction Sets [14.071862670474832]
Decision support systems based on prediction sets have proven to be effective at helping human experts solve classification tasks.
We show that the prediction sets constructed using conformal prediction are, in general, suboptimal in terms of average accuracy.
We introduce a greedy algorithm that, for a large class of expert models and nonconformity scores, is guaranteed to find prediction sets that offer equal or greater performance.
arXiv Detail & Related papers (2024-05-27T18:00:00Z)
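A hypothetical sketch in the spirit of the entry above, assuming a stylized expert who, when shown a set containing the true label, picks it with weight `w` against weight 1 for each distractor; the weights and probabilities are made up, and the paper's actual algorithm, expert-model class, and guarantees differ in the details.

```python
# Greedy prediction-set search under a stylized expert model:
# expected accuracy = P(true label in set) * P(expert picks it).
import numpy as np

def expected_accuracy(C, p, w):
    C = list(C)
    return float(np.sum(p[C]) * w / (w + len(C) - 1))

def greedy_set(p, w):
    C = {int(np.argmax(p))}                    # seed with the top label
    while True:
        candidates = set(range(len(p))) - C
        gains = {l: expected_accuracy(C | {l}, p, w)
                    - expected_accuracy(C, p, w)
                 for l in candidates}
        best_l = max(gains, key=gains.get, default=None)
        if best_l is None or gains[best_l] <= 0:
            return sorted(C)
        C.add(best_l)

p = np.array([0.40, 0.25, 0.15, 0.10, 0.10])   # classifier probabilities
print(greedy_set(p, w=4.0))                    # -> [0, 1, 2]
```

The trade-off is visible in the objective: adding a label buys coverage but dilutes the expert's attention over a larger set.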
- Incorporating Experts' Judgment into Machine Learning Models [2.5363839239628843]
In some cases, domain experts may hold a judgment about the expected outcome that conflicts with the predictions of machine learning models.
We present a novel framework that aims at leveraging experts' judgment to mitigate the conflict.
arXiv Detail & Related papers (2023-04-24T07:32:49Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Improving Expert Predictions with Conformal Prediction [14.850555720410677]
Existing systems typically require experts to understand when to cede agency to the system or when to exercise their own agency.
We develop an automated decision support system that allows experts to make more accurate predictions and is robust to the accuracy of the predictor it relies on.
arXiv Detail & Related papers (2022-01-28T09:35:37Z)
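One way to picture the system in the entry above: instead of fixing the miscoverage level α in advance, select it empirically so that the expert's accuracy with the resulting prediction sets is maximized. A toy sketch with simulated expert outcomes; the real system selects α with bandit-style guarantees rather than a one-shot argmax.

```python
# Sketch: pick the miscoverage level whose prediction sets gave the
# expert the highest empirical accuracy. Outcomes below are simulated.
import numpy as np

rng = np.random.default_rng(2)
alphas = np.linspace(0.05, 0.5, 10)
# success[i, j]: did the expert get instance j right when shown the
# prediction set built at alphas[i]? (Stand-in for real study data.)
success = (rng.random((len(alphas), 200))
           < np.linspace(0.6, 0.8, len(alphas))[:, None])

best = alphas[int(np.argmax(success.mean(axis=1)))]
print(f"selected alpha = {best:.2f}")
```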
- Test-time Collective Prediction [73.74982509510961]
In many machine learning settings, multiple parties want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
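A toy sketch of the decentralized idea in the entry above: each agent releases only its predictive distribution on the test point, and the group aggregates them, here weighted by each agent's held-out accuracy. All numbers are made up, and the paper's mechanism is more principled than this weighted average.

```python
# Collective prediction at test time: aggregate shared predictive
# distributions without exchanging data or model parameters.
import numpy as np

agent_probs = np.array([[0.7, 0.2, 0.1],       # agent 1's p(y | x_test)
                        [0.4, 0.5, 0.1],       # agent 2
                        [0.6, 0.3, 0.1]])      # agent 3
held_out_acc = np.array([0.9, 0.6, 0.8])       # each agent's own estimate

w = held_out_acc / held_out_acc.sum()
collective = w @ agent_probs
print("collective prediction:", int(np.argmax(collective)), collective)
```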
- Towards Unbiased and Accurate Deferral to Multiple Experts [19.24068936057053]
We propose a framework that simultaneously learns a classifier and a deferral system, with the deferral system choosing to defer to one or more human experts.
We test our framework on a synthetic dataset and a content moderation dataset with biased synthetic experts, and show that it significantly improves the accuracy and fairness of the final predictions.
arXiv Detail & Related papers (2021-02-25T17:08:39Z)
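A minimal sketch of the routing decision in the entry above, with simulated per-instance accuracy estimates; the actual framework learns the classifier and the deferral policy jointly and enforces fairness, both of which this toy omits.

```python
# Deferral routing: send each instance to the classifier or to the
# expert with the highest estimated accuracy. Estimates are simulated.
import numpy as np

rng = np.random.default_rng(3)
n, n_experts = 5, 3
clf_conf = rng.random(n)                       # classifier confidence
expert_acc = rng.random((n, n_experts))        # per-instance estimates

options = np.column_stack([clf_conf, expert_acc])
route = np.argmax(options, axis=1)             # 0 = classifier, k = expert k
for i, r in enumerate(route):
    print(f"instance {i}: {'classifier' if r == 0 else f'expert {r}'}")
```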
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
- When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, but they should be used with caution, taking into account the type of distribution shown and the expertise of the human.
arXiv Detail & Related papers (2020-11-12T02:23:53Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
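The uncertainty measure in the entry above is standard Monte-Carlo dropout; the sketch below shows the usual recipe in PyTorch with a toy model and a hypothetical entropy threshold for escalating to a human.

```python
# Monte-Carlo dropout: keep dropout active at inference, average
# several stochastic forward passes, and defer uncertain cases.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(64, 3))
model.train()                                  # keeps dropout stochastic

x = torch.randn(1, 16)
with torch.no_grad():
    samples = torch.stack([model(x).softmax(-1) for _ in range(50)])

mean_pred = samples.mean(dim=0)
entropy = -(mean_pred * mean_pred.log()).sum()
if entropy > 0.8:                              # hypothetical threshold
    print("uncertain -> defer to human expert")
else:
    print("confident ->", int(mean_pred.argmax()))
```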
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data, but some of them are unavailable or impermissible to use at prediction time.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
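For intuition, here is a generic doubly-robust (AIPW-style) estimate of a counterfactual mean on simulated data; the estimate stays consistent if either the outcome model or the propensity model is correct. The paper adapts this idea to counterfactual prediction under runtime confounding rather than using this exact estimator.

```python
# Doubly-robust (AIPW) estimate of E[Y(1)] on simulated data.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
propensity = 1 / (1 + np.exp(-x))              # true P(A=1 | x)
a = rng.random(n) < propensity
y = 2.0 * x + 1.0 * a + rng.normal(size=n)     # outcome under treatment a

# Plug-in nuisance estimates (the truth plus noise, for brevity).
e_hat = np.clip(propensity + rng.normal(0, 0.02, n), 0.05, 0.95)
mu1_hat = 2.0 * x + 1.0                        # model of E[Y | A=1, x]

# Outcome model plus inverse-propensity correction on treated units.
dr = mu1_hat + a * (y - mu1_hat) / e_hat
print(f"DR estimate of E[Y(1)]: {dr.mean():.3f}  (truth = 1.000)")
```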
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.