Calibrated Explanations for Regression
- URL: http://arxiv.org/abs/2308.16245v3
- Date: Sat, 25 May 2024 17:29:44 GMT
- Title: Calibrated Explanations for Regression
- Authors: Tuwe Löfström, Helena Löfström, Ulf Johansson, Cecilia Sönströd, Rudy Matela
- Abstract summary: Calibrated Explanations for standard regression provides fast, reliable, stable, and robust explanations.
Calibrated Explanations for probabilistic regression provides an entirely new way of creating probabilistic explanations.
An implementation in Python is freely available on GitHub and can be installed with both pip and conda.
- Score: 1.2058600649065616
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial Intelligence (AI) is often an integral part of modern decision support systems. The best-performing predictive models used in AI-based decision support systems lack transparency. Explainable Artificial Intelligence (XAI) aims to create AI systems that can explain their rationale to human users. Local explanations in XAI can provide information about the causes of individual predictions in terms of feature importance. However, a critical drawback of existing local explanation methods is their inability to quantify the uncertainty associated with a feature's importance. This paper introduces an extension of a feature importance explanation method, Calibrated Explanations, previously only supporting classification, with support for standard regression and probabilistic regression, i.e., the probability that the target is above an arbitrary threshold. The extension for regression keeps all the benefits of Calibrated Explanations, such as calibration of the prediction from the underlying model with confidence intervals, uncertainty quantification of feature importance, and support for both factual and counterfactual explanations. Calibrated Explanations for standard regression provides fast, reliable, stable, and robust explanations. Calibrated Explanations for probabilistic regression provides an entirely new way of creating probabilistic explanations from any ordinary regression model, allowing dynamic selection of thresholds. The method is model-agnostic with easily understood conditional rules. An implementation in Python is freely available on GitHub and can be installed using both pip and conda, making the results in this paper easily replicable.
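As a quick orientation, a minimal usage sketch is shown below. It assumes the package's `CalibratedExplainer` entry point together with `mode` and `threshold` arguments as described in the project README; exact names and signatures may differ between releases, so consult the GitHub repository.

```python
# pip install calibrated-explanations   (also available via conda)
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

from calibrated_explanations import CalibratedExplainer  # assumed entry point

X, y = fetch_california_housing(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Standard regression: factual explanations with confidence intervals for
# both the calibrated prediction and each feature's importance.
explainer = CalibratedExplainer(model, X_cal, y_cal, mode='regression')
factual = explainer.explain_factual(X_test[:5])

# Probabilistic regression: explain P(target > threshold) for a dynamically
# chosen threshold (here 2.0, in units of the target).
probabilistic = explainer.explain_factual(X_test[:5], threshold=2.0)
```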
Related papers
- CAGE: Causality-Aware Shapley Value for Global Explanations [4.017708359820078]
One way to explain AI models is to elucidate the predictive importance of their input features.
Inspired by cooperative game theory, Shapley values offer a convenient way for quantifying the feature importance as explanations.
In particular, we introduce a novel sampling procedure for out-coalition features that respects the causal relations of the input features.
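For orientation, the sketch below shows a generic permutation-based Monte Carlo Shapley estimator. It fills out-of-coalition features from random background rows (marginal sampling), which is exactly the step CAGE replaces with causality-aware sampling; the causal sampler itself is not implemented here.

```python
import numpy as np

def shapley_mc(predict, x, X_background, n_samples=200, seed=0):
    """Permutation-sampling Shapley values for a single instance x.

    Out-of-coalition features are filled in from random background rows
    (marginal sampling). CAGE's contribution is to replace this sampling
    with one that respects the causal relations among the features.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)
        z = X_background[rng.integers(len(X_background))].copy()
        prev = predict(z[None, :])[0]
        for j in order:
            z[j] = x[j]                      # feature j joins the coalition
            cur = predict(z[None, :])[0]
            phi[j] += cur - prev             # marginal contribution of j
            prev = cur
    return phi / n_samples
```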
arXiv Detail & Related papers (2024-04-17T09:43:54Z)
- LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations [1.0370398945228227]
We introduce LaPLACE-explainer, designed to provide probabilistic cause-and-effect explanations for machine learning models.
The LaPLACE-Explainer component leverages the concept of a Markov blanket to establish statistical boundaries between relevant and non-relevant features.
Our approach offers causal explanations and outperforms LIME and SHAP in terms of local accuracy and consistency of explained features.
arXiv Detail & Related papers (2023-10-01T04:09:59Z)
- Evaluating Explainability in Machine Learning Predictions through Explainer-Agnostic Metrics [0.0]
We develop six distinct model-agnostic metrics designed to quantify the extent to which model predictions can be explained.
These metrics measure different aspects of model explainability, ranging from local importance, global importance, and surrogate predictions.
We demonstrate the practical utility of these metrics on classification and regression tasks, and integrate these metrics into an existing Python package for public use.
arXiv Detail & Related papers (2023-02-23T15:28:36Z)
- Calibrating AI Models for Wireless Communications via Conformal Prediction [55.47458839587949]
Conformal prediction is applied for the first time to the design of AI for communication systems.
This paper investigates the application of conformal prediction as a general framework to obtain AI models that produce decisions with formal calibration guarantees.
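For reference, the split-conformal recipe underlying such calibration guarantees is short. This is a minimal sketch for regression, assuming a fitted point-prediction model and exchangeable calibration and test data.

```python
import numpy as np

def split_conformal(model, X_cal, y_cal, X_test, alpha=0.1):
    """Prediction intervals with finite-sample coverage >= 1 - alpha."""
    scores = np.abs(y_cal - model.predict(X_cal))   # nonconformity scores
    n = len(scores)
    # Finite-sample-corrected empirical quantile of the scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    preds = model.predict(X_test)
    return preds - q, preds + q
```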
arXiv Detail & Related papers (2022-12-15T12:52:23Z)
- UKP-SQuARE v2 Explainability and Adversarial Attacks for Trustworthy QA [47.8796570442486]
Question Answering systems are increasingly deployed in applications where they support real-world decisions.
Inherently interpretable models or post hoc explainability methods can help users to comprehend how a model arrives at its prediction.
We introduce SQuARE v2, the new version of SQuARE, to provide an explainability infrastructure for comparing models.
arXiv Detail & Related papers (2022-08-19T13:01:01Z)
- VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives [84.48039784446166]
We show that feature importance (FI) supervision can meaningfully improve VQA model accuracy as well as performance on several Right-for-the-Right-Reason metrics.
Our best performing method, Visual Feature Importance Supervision (VisFIS), outperforms strong baselines on benchmark VQA datasets.
Predictions are more accurate when explanations are plausible and faithful, and not when they are plausible but not faithful.
arXiv Detail & Related papers (2022-06-22T17:02:01Z)
- Automated Learning of Interpretable Models with Quantified Uncertainty [0.0]
We introduce a new framework for genetic-programming-based symbolic regression (GPSR).
GPSR uses model evidence to formulate replacement probability during the selection phase of evolution.
It is shown to increase interpretability, improve robustness to noise, and reduce overfitting when compared to a conventional GPSR implementation.
arXiv Detail & Related papers (2022-04-12T19:56:42Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
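The sketch below is an illustrative reading of the domain-adjustment idea, not the paper's exact objective: decorrelate each training domain by its own covariance, fit one linear predictor on the pooled adjusted features, and adjust the unlabeled test domain the same way.

```python
import numpy as np

def domain_whiten(X, eps=1e-6):
    """Decorrelate X by its own empirical covariance (inverse square root)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False) + eps * np.eye(X.shape[1])
    vals, vecs = np.linalg.eigh(cov)
    return Xc @ (vecs @ np.diag(vals ** -0.5) @ vecs.T)

def dare_like_fit(domains):
    """domains: list of (X_e, y_e) pairs, one per training environment."""
    Z = np.vstack([domain_whiten(X_e) for X_e, _ in domains])
    y = np.concatenate([y_e for _, y_e in domains])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

def dare_like_predict(beta, X_test):
    # The test domain is adjusted by its own (label-free) covariance.
    return domain_whiten(X_test) @ beta
```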
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- Consistent Sufficient Explanations and Minimal Local Rules for explaining regression and classification models [0.0]
We extend the notion of probabilistic Sufficient Explanations (P-SE).
The crux of P-SE is to compute the conditional probability of maintaining the same prediction.
We deal with non-binary features, without learning the distribution of $X$ or requiring the model for making predictions.
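Schematically, that conditional probability can be estimated for a candidate feature subset `S` as below. The sketch resamples the remaining features from observed rows (a marginal approximation), whereas the paper's contribution is estimating the true conditional without learning the distribution of $X$.

```python
import numpy as np

def same_decision_probability(predict, x, S, X_data, n_samples=1000, seed=0):
    """Estimate P(f(X) = f(x) | X_S = x_S) for classifier f = predict."""
    rng = np.random.default_rng(seed)
    target = predict(x[None, :])[0]
    Z = X_data[rng.integers(len(X_data), size=n_samples)].copy()
    Z[:, S] = x[S]                 # pin the explained subset to x's values
    return float(np.mean(predict(Z) == target))
```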
arXiv Detail & Related papers (2021-11-08T17:27:52Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
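A standard instance of modifying the output space to a probabilistic distribution is a heteroscedastic Gaussian head trained with the negative log-likelihood. The sketch below is that generic construction, not the paper's probabilistic ordinal-embedding method.

```python
import torch
import torch.nn as nn

class GaussianRegressor(nn.Module):
    """Predicts a full N(mean, var) distribution instead of a point estimate."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.log_var_head = nn.Linear(hidden, 1)  # log-variance for stability

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.log_var_head(h)

def gaussian_nll(mean, log_var, y):
    # Negative log-likelihood of y under N(mean, exp(log_var)), up to a constant.
    return 0.5 * (log_var + (y - mean) ** 2 / log_var.exp()).mean()
```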
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic outputs.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
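Schematically, the correction blends predicted class probabilities with the label prior on inputs flagged as unjustifiably overconfident, raising their entropy toward that of the prior; how those regions are identified is the paper's contribution and is not sketched here.

```python
import numpy as np

def raise_entropy_toward_prior(probs, prior, overconfident_mask, lam=0.5):
    """Blend rows of probs (n, k) with the label prior (k,) where flagged.

    lam = 0 leaves predictions unchanged; lam = 1 replaces them with the
    prior, the maximum-entropy calibrated fallback.
    """
    out = probs.copy()
    out[overconfident_mask] = (1 - lam) * out[overconfident_mask] + lam * prior
    return out
```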
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.