Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations
- URL: http://arxiv.org/abs/2101.09498v1
- Date: Sat, 23 Jan 2021 13:10:48 GMT
- Title: Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations
- Authors: Danding Wang, Wencan Zhang and Brian Y. Lim
- Abstract summary: Feature attribution is widely used in interpretable machine learning to explain how influential each measured input feature value is for an output inference.
It is unclear how awareness of input uncertainty affects trust in explanations.
We propose two approaches to help users manage their perception of uncertainty in a model explanation.
- Score: 5.695163312473304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature attribution is widely used in interpretable machine learning to
explain how influential each measured input feature value is for an output
inference. However, measurements can be uncertain, and it is unclear how
awareness of input uncertainty affects trust in explanations. We propose and
study two approaches to help users manage their perception of uncertainty in a
model explanation: 1) transparently show uncertainty in feature attributions so
that users can reflect on it, and 2) suppress attribution to features with
uncertain measurements and shift attribution to other features by regularizing
with an uncertainty penalty. Through simulation experiments,
qualitative interviews, and quantitative user evaluations, we identified the
benefits of moderately suppressing attribution uncertainty, and concerns
regarding showing attribution uncertainty. This work adds to the understanding
of handling and communicating uncertainty for model interpretability.
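A minimal sketch of the second (suppression) approach, under assumptions the abstract does not spell out: the penalty form, the weight `lambda_u`, and the linear model with weight-times-input attributions are hypothetical stand-ins chosen only to illustrate how an uncertainty penalty can shift attribution away from noisily measured features.
```python
# Minimal sketch (not the paper's exact formulation): suppress attribution to
# uncertain features by adding a per-feature penalty, scaled by measurement
# uncertainty, to a linear model's training objective. The penalty weight
# `lambda_u` and the ridge-style closed form are illustrative assumptions.
import numpy as np

def fit_uncertainty_penalized_linear(X, y, feature_uncertainty, lambda_u=1.0):
    """Ridge-like regression: features with higher measurement uncertainty
    receive a larger penalty, shrinking their coefficients toward zero."""
    penalty = lambda_u * np.diag(feature_uncertainty)
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

def feature_attributions(w, x):
    """Attribution for a linear model: each feature's contribution (weight * value)."""
    return w * x

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.5, 0.5]) + 0.1 * rng.normal(size=200)

# Suppose feature 0 is measured very noisily, the other two precisely.
uncertainty = np.array([100.0, 0.1, 0.1])

w_plain = fit_uncertainty_penalized_linear(X, y, np.zeros(3))
w_suppressed = fit_uncertainty_penalized_linear(X, y, uncertainty)

x_new = np.ones(3)
print("attributions, no penalty:  ", feature_attributions(w_plain, x_new))
print("attributions, with penalty:", feature_attributions(w_suppressed, x_new))
```
With the penalty applied, the attribution of the noisily measured feature shrinks and the remaining attribution shifts toward the better-measured features, which is the behavior the abstract describes as suppressing and shifting attribution.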
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Ensured: Explanations for Decreasing the Epistemic Uncertainty in Predictions [1.2289361708127877]
Epistemic uncertainty adds a crucial dimension to explanation quality.
We introduce new types of explanations that specifically target this uncertainty.
We introduce a new metric, ensured ranking, designed to help users identify the most reliable explanations.
arXiv Detail & Related papers (2024-10-07T20:21:51Z)
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that inheres transparency by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z)
- Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning [76.98542249776257]
Large-scale language models often face the challenge of "hallucination".
We introduce an uncertainty-aware in-context learning framework to empower the model to enhance or reject its output in response to uncertainty.
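The summary does not describe the framework's mechanics; the sketch below only illustrates the general enhance-or-reject pattern, using sample disagreement as a stand-in uncertainty estimate. The `generate` callable and the 0.5 threshold are assumptions for illustration.
```python
# Illustrative pattern only (not the paper's framework): sample several
# generations, treat their disagreement as an uncertainty proxy, and abstain
# when uncertainty is too high instead of risking a hallucinated answer.
from collections import Counter

def answer_or_reject(generate, prompt, n_samples=5, max_uncertainty=0.5):
    samples = [generate(prompt) for _ in range(n_samples)]
    answer, count = Counter(samples).most_common(1)[0]
    uncertainty = 1.0 - count / n_samples   # disagreement rate
    if uncertainty > max_uncertainty:
        return None, uncertainty            # reject / defer to a human
    return answer, uncertainty              # return the consensus answer
```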
arXiv Detail & Related papers (2023-10-07T12:06:53Z)
- Flexible Visual Recognition by Evidential Modeling of Confusion and Ignorance [25.675733490127964]
In real-world scenarios, typical visual recognition systems can fail for two major reasons: misclassification between known classes and excusable misbehavior on unknown-class images.
To address these deficiencies, a flexible visual recognition system should dynamically predict multiple classes when it is unconfident between choices and reject making a prediction when the input lies entirely outside the training distribution.
In this paper, we propose to model these two sources of uncertainty explicitly with the theory of Subjective Logic.
arXiv Detail & Related papers (2023-09-14T03:16:05Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
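The summary names a dissimilarity function but does not define it; the sketch below uses pairwise label disagreement across a set of predictions (for example, from perturbed inputs or model variants) as an assumed stand-in for that function.
```python
# Illustrative only: a per-pixel uncertainty map from disagreeing predictions.
import numpy as np
from itertools import combinations

def disagreement_uncertainty(predictions):
    """predictions: list of (H, W) class-label maps. Returns per-pixel
    uncertainty in [0, 1] as the fraction of prediction pairs that disagree."""
    pairwise = [(a != b).astype(float) for a, b in combinations(predictions, 2)]
    return np.mean(pairwise, axis=0)

preds = [np.random.default_rng(s).integers(0, 3, size=(4, 4)) for s in range(4)]
print(disagreement_uncertainty(preds))
```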
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Dive into Ambiguity: Latent Distribution Mining and Pairwise Uncertainty Estimation for Facial Expression Recognition [59.52434325897716]
We propose a solution, named DMUE, to address the problem of annotation ambiguity from two perspectives.
For the former, an auxiliary multi-branch learning framework is introduced to better mine and describe the latent distribution in the label space.
For the latter, the pairwise relationships of semantic features between instances are fully exploited to estimate the extent of ambiguity in the instance space.
arXiv Detail & Related papers (2021-04-01T03:21:57Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Double-Uncertainty Weighted Method for Semi-supervised Learning [32.484750353853954]
We propose a double-uncertainty weighted method for semi-supervised segmentation based on the teacher-student model.
We train the teacher model using Bayesian deep learning to obtain double-uncertainty, i.e. segmentation uncertainty and feature uncertainty.
Our method outperforms the state-of-the-art uncertainty-based semi-supervised methods on two public medical datasets.
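The summary does not give the weighting scheme; the sketch below shows one common way such uncertainty can enter a teacher-student objective, with the exp(-u) weighting and plain numpy inputs as assumptions rather than the paper's formulation.
```python
# Illustrative only: down-weight the teacher-student consistency loss where the
# teacher's (Bayesian) uncertainty is high, so confident regions dominate training.
import numpy as np

def weighted_consistency_loss(student_probs, teacher_probs, teacher_uncertainty):
    """student_probs, teacher_probs: (H, W, C) class probabilities;
    teacher_uncertainty: (H, W) per-pixel uncertainty estimates."""
    weights = np.exp(-teacher_uncertainty)               # low uncertainty -> weight near 1
    per_pixel = np.sum((student_probs - teacher_probs) ** 2, axis=-1)
    return np.sum(weights * per_pixel) / np.sum(weights)
```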
arXiv Detail & Related papers (2020-10-19T08:20:18Z)
- Getting a CLUE: A Method for Explaining Uncertainty Estimates [30.367995696223726]
We propose a novel method for interpreting uncertainty estimates from differentiable probabilistic models.
Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold.
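CLUE's actual objective relies on a trained latent-variable generative model and a probabilistic predictor; the sketch below only shows the general pattern of searching an autoencoder's latent space for a small change that lowers predictive uncertainty. `encode`, `decode`, and `predictive_uncertainty` are assumed callables, and finite differences stand in for autodiff.
```python
# Sketch of the counterfactual-in-latent-space pattern (not the authors' code):
# find a latent perturbation whose decoding has lower predictive uncertainty
# while staying close to the original input, i.e. near the data manifold.
import numpy as np

def latent_uncertainty_counterfactual(x, encode, decode, predictive_uncertainty,
                                      steps=100, lr=0.1, dist_weight=0.1, eps=1e-3):
    def objective(z):
        x_cf = decode(z)
        # trade off low predictive uncertainty against staying close to x
        return predictive_uncertainty(x_cf) + dist_weight * np.sum((x_cf - x) ** 2)

    z = encode(x).astype(float)
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):                  # finite-difference gradient
            dz = np.zeros_like(z)
            dz.flat[i] = eps
            grad.flat[i] = (objective(z + dz) - objective(z - dz)) / (2 * eps)
        z = z - lr * grad
    return decode(z)                             # the counterfactual explanation
```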
arXiv Detail & Related papers (2020-06-11T21:53:15Z)