When Does Uncertainty Matter?: Understanding the Impact of Predictive
Uncertainty in ML Assisted Decision Making
- URL: http://arxiv.org/abs/2011.06167v3
- Date: Mon, 12 Jun 2023 21:57:31 GMT
- Authors: Sean McGrath, Parth Mehta, Alexandra Zytek, Isaac Lage, Himabindu
Lakkaraju
- Abstract summary: We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, though they should be used with caution, taking into account the type of distribution and the expertise of the human.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning (ML) models are increasingly being employed to assist
human decision makers, it becomes critical to provide these decision makers
with relevant inputs which can help them decide if and how to incorporate model
predictions into their decision making. For instance, communicating the
uncertainty associated with model predictions could potentially be helpful in
this regard. In this work, we carry out user studies (1,330 responses from 190
participants) to systematically assess how people with differing levels of
expertise respond to different types of predictive uncertainty (i.e., posterior
predictive distributions with different shapes and variances) in the context of
ML assisted decision making for predicting apartment rental prices. We found
that showing posterior predictive distributions led to smaller disagreements
with the ML model's predictions, regardless of the shapes and variances of the
posterior predictive distributions we considered, and that these effects may be
sensitive to expertise in both ML and the domain. This suggests that posterior
predictive distributions can serve as useful decision aids, but that they
should be used with caution, taking into account both the type of distribution
and the expertise of the human.
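As a concrete illustration (not taken from the paper), the two presentation conditions the study compares can be sketched in a few lines of Python: a point prediction alone versus a posterior predictive distribution summarized as an interval. The rent figures and the normal shape are invented for illustration; the study also varied the shape and variance of the distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior predictive distribution over a monthly rent,
# modeled here as a normal with invented parameters.
samples = rng.normal(loc=2400.0, scale=150.0, size=10_000)

# What a point-only interface would show the decision maker:
point_prediction = samples.mean()

# What a distributional interface could add: a 90% predictive interval.
interval = np.percentile(samples, [5, 95])

print(f"point: ${point_prediction:,.0f}")
print(f"90% predictive interval: ${interval[0]:,.0f} - ${interval[1]:,.0f}")
```

Showing the interval (or the full density) conveys how concentrated the model's belief is, which is the information a point estimate hides.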
Related papers
- Uncertainty-based Fairness Measures [14.61416119202288]
Unfair predictions of machine learning (ML) models impede their broad acceptance in real-world settings.
We show that an ML model may appear to be fair with existing point-based fairness measures but biased against a demographic group in terms of prediction uncertainties.
arXiv Detail & Related papers (2023-12-18T15:49:03Z)
- Introducing an Improved Information-Theoretic Measure of Predictive Uncertainty [6.3398383724486544]
Predictive uncertainty is commonly measured by the entropy of the Bayesian model average (BMA) predictive distribution.
We introduce a theoretically grounded measure to overcome these limitations.
We find that our introduced measure behaves more reasonably in controlled synthetic tasks.
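The baseline that this paper critiques, entropy of the BMA predictive distribution, can be sketched as follows, together with its standard decomposition into aleatoric and epistemic parts. The class probabilities are invented, and this is the conventional measure, not the paper's improved one.

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy in nats; clip to avoid log(0).
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

# Hypothetical: class probabilities from M posterior samples (e.g. an ensemble).
member_probs = np.array([[0.9, 0.1],
                         [0.6, 0.4],
                         [0.8, 0.2]])

bma = member_probs.mean(axis=0)           # Bayesian model average
total = entropy(bma)                      # common predictive-uncertainty measure
aleatoric = entropy(member_probs).mean()  # expected entropy under the posterior
epistemic = total - aleatoric             # mutual information (member disagreement)
```

By Jensen's inequality the epistemic term is non-negative: averaging the members before taking entropy can only smooth the distribution.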
arXiv Detail & Related papers (2023-11-14T16:55:12Z)
- Model-agnostic variable importance for predictive uncertainty: an entropy-based approach [1.912429179274357]
We show how existing methods in explainability can be extended to uncertainty-aware models.
We demonstrate the utility of these approaches to understand both the sources of uncertainty and their impact on model performance.
arXiv Detail & Related papers (2023-10-19T15:51:23Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- On the Fairness of Machine-Assisted Human Decisions [3.4069627091757178]
We show that the inclusion of a biased human decision-maker can revert common relationships between the structure of the algorithm and the qualities of resulting decisions.
In a lab experiment, we demonstrate how predictions informed by gender-specific information can reduce average gender disparities in decisions.
arXiv Detail & Related papers (2021-10-28T17:24:45Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
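The ensemble-based family mentioned here reduces, at its simplest, to averaging member predictions and reading member disagreement as (epistemic) uncertainty. A minimal sketch with invented regression outputs:

```python
import numpy as np

# Hypothetical: predictions from K independently trained regressors on one input.
ensemble_preds = np.array([3.1, 2.9, 3.4, 3.0, 2.8])

# Deterministic prediction: the ensemble average.
mean_pred = ensemble_preds.mean()

# Uncertainty estimate: spread of the members' predictions.
epistemic_std = ensemble_preds.std()
```

Generative-model-based methods instead learn a distribution over outputs directly; both families aim to pair an accurate point prediction with a calibrated uncertainty signal.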
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Quantifying sources of uncertainty in drug discovery predictions with probabilistic models [0.0]
Knowing the uncertainty in a prediction is critical when making expensive investment decisions and when patient safety is paramount.
Machine learning (ML) models in drug discovery typically provide only a single best estimate and ignore all sources of uncertainty.
Probabilistic predictive models (PPMs) can incorporate uncertainty in both the data and model, and return a distribution of predicted values.
arXiv Detail & Related papers (2021-05-18T18:54:54Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data but some of them cannot be used by the prediction model at runtime.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.