Calibrated Explanations: with Uncertainty Information and
Counterfactuals
- URL: http://arxiv.org/abs/2305.02305v3
- Date: Sat, 4 Nov 2023 18:53:16 GMT
- Title: Calibrated Explanations: with Uncertainty Information and
Counterfactuals
- Authors: Helena Löfström, Tuwe Löfström, Ulf Johansson, Cecilia Sönströd
- Abstract summary: Calibrated Explanations (CE) is built on the foundation of Venn-Abers.
It provides uncertainty quantification for both feature weights and the model's probability estimates.
Results from an evaluation with 25 benchmark datasets underscore the efficacy of CE.
- Score: 0.1843404256219181
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While local explanations for AI models can offer insights into individual
predictions, such as feature importance, they are plagued by issues like
instability. The unreliability of feature weights, often skewed due to poorly
calibrated ML models, deepens these challenges. Moreover, the critical aspect
of feature importance uncertainty remains mostly unaddressed in Explainable AI
(XAI). The novel feature importance explanation method presented in this paper,
called Calibrated Explanations (CE), is designed to tackle these issues
head-on. Built on the foundation of Venn-Abers, CE not only calibrates the
underlying model but also delivers reliable feature importance explanations
with an exact definition of the feature weights. CE goes beyond conventional
solutions by addressing output uncertainty. It accomplishes this by providing
uncertainty quantification for both feature weights and the model's probability
estimates. Additionally, CE is model-agnostic, featuring easily comprehensible
conditional rules and the ability to generate counterfactual explanations with
embedded uncertainty quantification. Results from an evaluation with 25
benchmark datasets underscore the efficacy of CE, making it stand as a fast,
reliable, stable, and robust solution.
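As a reading of the abstract, the sketch below illustrates the two ingredients it names: Venn-Abers calibration of the underlying model, and feature weights that come with uncertainty intervals. It is not the authors' calibrated-explanations implementation; the `venn_abers` helper follows the standard Venn-Abers recipe (isotonic regression fitted twice, once with the test point labelled 0 and once labelled 1), and `feature_weight_interval` is a simplified, perturbation-based stand-in for CE's exact rule-based definition of feature weights.

```python
# Illustrative sketch only: a from-scratch Venn-Abers calibration step and a
# simplified, perturbation-based notion of feature weight with an uncertainty
# interval. NOT the authors' calibrated-explanations package; the helper names
# `venn_abers` and `feature_weight_interval` are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split


def venn_abers(cal_scores, cal_labels, test_score):
    """Return (p0, p1): lower/upper Venn-Abers probability estimates for one test score."""
    p = []
    for assumed_label in (0, 1):
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        # Fit isotonic regression on the calibration scores plus the test point,
        # once assuming label 0 and once assuming label 1.
        iso.fit(np.append(cal_scores, test_score), np.append(cal_labels, assumed_label))
        p.append(iso.predict([test_score])[0])
    return min(p), max(p)


def feature_weight_interval(model, cal_scores, cal_labels, x, feature, baseline):
    """Simplified feature weight: change in calibrated probability when `feature`
    is replaced by a baseline value, reported as an interval (low, high)."""
    s_orig = model.predict_proba(x.reshape(1, -1))[0, 1]
    x_pert = x.copy()
    x_pert[feature] = baseline
    s_pert = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
    lo_o, hi_o = venn_abers(cal_scores, cal_labels, s_orig)
    lo_p, hi_p = venn_abers(cal_scores, cal_labels, s_pert)
    return lo_o - hi_p, hi_o - lo_p  # interval for the change in calibrated probability


X, y = make_classification(n_samples=600, n_features=5, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
cal_scores = model.predict_proba(X_cal)[:, 1]

x = X_test[0]
p0, p1 = venn_abers(cal_scores, y_cal, model.predict_proba(x.reshape(1, -1))[0, 1])
print(f"calibrated probability interval: [{p0:.3f}, {p1:.3f}]")
for f in range(X.shape[1]):
    w_lo, w_hi = feature_weight_interval(model, cal_scores, y_cal, x, f, X_cal[:, f].mean())
    print(f"feature {f}: weight interval [{w_lo:+.3f}, {w_hi:+.3f}]")
```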
Related papers
- Fast Calibrated Explanations: Efficient and Uncertainty-Aware Explanations for Machine Learning Models [41.82622187379551]
This paper introduces Fast Calibrated Explanations, a method for generating rapid, uncertainty-aware explanations for machine learning models.
By incorporating perturbation techniques from ConformaSight into the core elements of Calibrated Explanations, we achieve significant speedups.
While the new method sacrifices a small degree of detail, it excels in computational efficiency, making it ideal for high-stakes, real-time applications.
arXiv Detail & Related papers (2024-10-28T15:29:35Z)
- Ensured: Explanations for Decreasing the Epistemic Uncertainty in Predictions [1.2289361708127877]
Epistemic uncertainty adds a crucial dimension to explanation quality.
We introduce new types of explanations that specifically target this uncertainty.
We introduce a new metric, ensured ranking, designed to help users identify the most reliable explanations.
arXiv Detail & Related papers (2024-10-07T20:21:51Z)
- Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
arXiv Detail & Related papers (2024-06-05T16:35:30Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
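A minimal sketch of how such an ensemble could be formed and its uncertainty decomposed, assuming a toy stand-in for the LLM call (`answer_distribution` is hypothetical) and an entropy-based split into aleatoric and epistemic parts; this illustrates the summary above, not the paper's exact formulation:

```python
# Illustrative sketch only (not the paper's code): ensembling an LLM's answer
# distributions over several clarified rewrites of an ambiguous input, then
# decomposing total uncertainty into aleatoric and epistemic parts via entropy.
# `answer_distribution` is a hypothetical stand-in for querying a real LLM.
import numpy as np

ANSWERS = ["yes", "no"]

def answer_distribution(prompt: str) -> np.ndarray:
    """Hypothetical stand-in for an LLM call that returns P(answer | prompt)."""
    toy = {
        "Is the bank open? (bank = financial institution)": [0.9, 0.1],
        "Is the bank open? (bank = river bank)": [0.2, 0.8],
    }
    return np.array(toy.get(prompt, [0.5, 0.5]))

def entropy(p: np.ndarray) -> float:
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Clarifications of one ambiguous input (in the paper these are generated by an LLM).
clarifications = [
    "Is the bank open? (bank = financial institution)",
    "Is the bank open? (bank = river bank)",
]

member_dists = np.stack([answer_distribution(c) for c in clarifications])
ensemble_dist = member_dists.mean(axis=0)          # ensembled prediction

total = entropy(ensemble_dist)                     # total uncertainty
aleatoric = float(np.mean([entropy(p) for p in member_dists]))
epistemic = total - aleatoric                      # uncertainty due to input ambiguity

print("ensemble P(answer):", dict(zip(ANSWERS, ensemble_dist.round(2))))
print(f"total={total:.3f}  aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}")
```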
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations [1.0370398945228227]
We introduce LaPLACE-explainer, designed to provide probabilistic cause-and-effect explanations for machine learning models.
The LaPLACE-Explainer component leverages the concept of a Markov blanket to establish statistical boundaries between relevant and non-relevant features.
Our approach offers causal explanations and outperforms LIME and SHAP in terms of local accuracy and consistency of explained features.
arXiv Detail & Related papers (2023-10-01T04:09:59Z)
- Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [50.920911532133154]
The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By simply introducing additional training regularization terms, our model, with a surprisingly simple formulation and without requiring extra modules or multiple inferences, can provide uncertainty estimates with state-of-the-art reliability.
arXiv Detail & Related papers (2023-07-19T12:11:15Z)
- Evaluating AI systems under uncertain ground truth: a case study in dermatology [44.80772162289557]
We propose a metric for measuring annotation uncertainty and provide uncertainty-adjusted metrics for performance evaluation.
We present a case study applying our framework to skin condition classification from images where annotations are provided in the form of differential diagnoses.
arXiv Detail & Related papers (2023-07-05T10:33:45Z)
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
The proposed SCORE (self-consistent robust error) facilitates, by definition, the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
- Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
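One way to read the Mahalanobis-based test, sketched below under the assumption that the model outputs a per-sample error covariance: if those covariances are realistic, the squared Mahalanobis distances of the errors should follow a chi-square distribution with d degrees of freedom, which a goodness-of-fit test can check. This is an illustrative reconstruction, not the paper's exact procedure:

```python
# Illustrative sketch only (an assumption about what a Mahalanobis-based
# uncertainty-realism test looks like, not the paper's exact procedure):
# if a model's predicted covariances are realistic, the squared Mahalanobis
# distances of its errors should follow a chi-square distribution with
# d degrees of freedom, checkable with a goodness-of-fit test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n = 2, 2000

# Synthetic ground truth and predictions with per-sample predicted covariance.
y_true = rng.normal(size=(n, d))
pred_cov = np.stack([np.diag(rng.uniform(0.5, 2.0, size=d)) for _ in range(n)])
noise = np.stack([rng.multivariate_normal(np.zeros(d), c) for c in pred_cov])
y_pred = y_true + noise                      # errors actually match predicted covariances

# Squared Mahalanobis distance of each error under its predicted covariance.
err = y_pred - y_true
m2 = np.einsum("ni,nij,nj->n", err, np.linalg.inv(pred_cov), err)

# Under realistic uncertainties, m2 ~ chi2(d); test with Kolmogorov-Smirnov.
stat, p_value = stats.kstest(m2, cdf=stats.chi2(df=d).cdf)
print(f"KS statistic={stat:.3f}, p-value={p_value:.3f} (large p: uncertainties look realistic)")
```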
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
- Reliable Post hoc Explanations: Modeling Uncertainty in Explainability [44.9824285459365]
Black box explanations are increasingly being employed to establish model credibility in high-stakes settings.
Prior work demonstrates that explanations generated by state-of-the-art techniques are inconsistent, unstable, and provide very little insight into their correctness and reliability.
We develop a novel Bayesian framework for generating local explanations along with their associated uncertainty.
arXiv Detail & Related papers (2020-08-11T22:52:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.