Uncertainty as a Form of Transparency: Measuring, Communicating, and
Using Uncertainty
- URL: http://arxiv.org/abs/2011.07586v3
- Date: Tue, 4 May 2021 10:33:03 GMT
- Title: Uncertainty as a Form of Transparency: Measuring, Communicating, and
Using Uncertainty
- Authors: Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna
Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath
Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika
Srikumar, Adrian Weller, Alice Xiang
- Abstract summary: We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
- Score: 66.17147341354577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic transparency entails exposing system properties to various
stakeholders for purposes that include understanding, improving, and contesting
predictions. Until now, research into algorithmic transparency has
predominantly focused on explainability. Explainability attempts to provide
reasons for a machine learning model's behavior to stakeholders. However,
understanding a model's specific behavior alone might not be enough for
stakeholders to gauge whether the model is wrong or lacks sufficient knowledge
to solve the task at hand. In this paper, we argue for considering a
complementary form of transparency by estimating and communicating the
uncertainty associated with model predictions. First, we discuss methods for
assessing uncertainty. Then, we characterize how uncertainty can be used to
mitigate model unfairness, augment decision-making, and build trustworthy
systems. Finally, we outline methods for displaying uncertainty to stakeholders
and recommend how to collect information required for incorporating uncertainty
into existing ML pipelines. This work constitutes an interdisciplinary review
drawn from literature spanning machine learning, visualization/HCI, design,
decision-making, and fairness. We aim to encourage researchers and
practitioners to measure, communicate, and use uncertainty as a form of
transparency.
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z) - Explainability through uncertainty: Trustworthy decision-making with neural networks [1.104960878651584]
Uncertainty is a key feature of any machine learning model.
It is particularly important in neural networks, which tend to be overconfident.
Uncertainty as XAI improves the model's trustworthiness in downstream decision-making tasks.
arXiv Detail & Related papers (2024-03-15T10:22:48Z) - Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions (an illustrative sketch appears after this list).
arXiv Detail & Related papers (2023-11-15T05:58:35Z) - Explaining by Imitating: Understanding Decisions by Interpretable Policy
Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that is transparent by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z) - Improving the Reliability of Large Language Models by Leveraging
Uncertainty-Aware In-Context Learning [76.98542249776257]
Large-scale language models often face the challenge of "hallucination".
We introduce an uncertainty-aware in-context learning framework to empower the model to enhance or reject its output in response to uncertainty.
arXiv Detail & Related papers (2023-10-07T12:06:53Z) - Accountable and Explainable Methods for Complex Reasoning over Text [5.571369922847262]
Accountability and transparency of Machine Learning models have been posed as critical desiderata by works in policy and law, philosophy, and computer science.
This thesis expands our collective knowledge in the areas of accountability and transparency of ML models developed for complex reasoning tasks over text.
arXiv Detail & Related papers (2022-11-09T15:14:52Z) - Exploring the Trade-off between Plausibility, Change Intensity and
Adversarial Power in Counterfactual Explanations using Multi-objective
Optimization [73.89239820192894]
We argue that automated counterfactual generation should account for several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - Show or Suppress? Managing Input Uncertainty in Machine Learning Model
Explanations [5.695163312473304]
Feature attribution is widely used in interpretable machine learning to explain how influential each measured input feature value is for an output inference.
It is unclear how awareness of input uncertainty affects trust in explanations.
We propose two approaches to help users to manage their perception of uncertainty in a model explanation.
arXiv Detail & Related papers (2021-01-23T13:10:48Z) - Getting a CLUE: A Method for Explaining Uncertainty Estimates [30.367995696223726]
We propose a novel method for interpreting uncertainty estimates from differentiable probabilistic models.
Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold.
arXiv Detail & Related papers (2020-06-11T21:53:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.