Explanations of Black-Box Model Predictions by Contextual Importance and
Utility
- URL: http://arxiv.org/abs/2006.00199v1
- Date: Sat, 30 May 2020 06:49:50 GMT
- Title: Explanations of Black-Box Model Predictions by Contextual Importance and
Utility
- Authors: Sule Anjomshoae, Kary Främling and Amro Najjar
- Abstract summary: We present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations easily understandable by experts as well as novice users.
This method explains the prediction results without transforming the model into an interpretable one.
We show the utility of explanations in a car selection example and in Iris flower classification by presenting complete (i.e., the causes of an individual prediction) and contrastive explanations.
- Score: 1.7188280334580195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The significant advances in autonomous systems together with an immensely
wider application domain have increased the need for trustable intelligent
systems. Explainable artificial intelligence is gaining considerable attention
among researchers and developers to address this requirement. Although there is
an increasing number of works on interpretable and transparent machine learning
algorithms, they are mostly intended for technical users. Explanations for
the end-user have been neglected in many usable and practical applications. In
this work, we present the Contextual Importance (CI) and Contextual Utility
(CU) concepts to extract explanations that are easily understandable by experts
as well as novice users. This method explains the prediction results without
transforming the model into an interpretable one. We present an example of
providing explanations for linear and non-linear models to demonstrate the
generalizability of the method. CI and CU are numerical values that can be
represented to the user in visuals and natural language form to justify actions
and explain reasoning for individual instances, situations, and contexts. We
show the utility of explanations in a car selection example and in Iris flower
classification by presenting complete (i.e., the causes of an individual
prediction) and contrastive explanations (i.e., contrasting an instance against
the instance of interest). The experimental results show the feasibility and
validity of the provided explanation methods.
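The abstract does not reproduce the CI and CU formulas, but following their standard definitions in Främling's earlier work, both can be estimated for any black-box model by varying one feature over its value range while holding the remaining features at the values of the instance being explained (the "context"). The sketch below is a minimal, illustrative Python version under those assumptions, applied to the Iris classification example mentioned above; the helper name `estimate_ciu`, the grid-sampling strategy, the random-forest model, and the linguistic thresholds are my own illustration, not the authors' implementation.

```python
# Minimal sketch of Contextual Importance (CI) and Contextual Utility (CU),
# estimated by varying one feature over its value range while keeping the
# other features fixed at the instance of interest (the "context").
# Assumed definitions (not quoted from the paper's abstract):
#   CI = (cmax - cmin) / (out_max - out_min)   -- how much the output can vary
#   CU = (y - cmin) / (cmax - cmin)            -- how favourable the current value is
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def estimate_ciu(predict_proba, x, feature_idx, feature_range, class_idx,
                 out_min=0.0, out_max=1.0, n_samples=200):
    """Estimate CI and CU for one feature of one instance."""
    lo, hi = feature_range
    samples = np.tile(x, (n_samples, 1))
    samples[:, feature_idx] = np.linspace(lo, hi, n_samples)
    outputs = predict_proba(samples)[:, class_idx]        # vary only this feature
    y = predict_proba(x.reshape(1, -1))[0, class_idx]      # output for the instance itself
    cmin, cmax = outputs.min(), outputs.max()
    ci = (cmax - cmin) / (out_max - out_min)
    cu = 0.5 if np.isclose(cmax, cmin) else (y - cmin) / (cmax - cmin)
    return ci, cu

# Iris example (the classification task mentioned in the abstract).
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)
x = data.data[100]                                  # an instance of interest
predicted = int(model.predict(x.reshape(1, -1))[0])

for j, name in enumerate(data.feature_names):
    f_range = (data.data[:, j].min(), data.data[:, j].max())
    ci, cu = estimate_ciu(model.predict_proba, x, j, f_range, predicted)
    # Map the numbers to coarse linguistic labels for a novice-friendly
    # explanation; the thresholds are illustrative, not the authors'.
    importance = "highly important" if ci > 0.66 else "important" if ci > 0.33 else "not important"
    utility = "very favourable" if cu > 0.66 else "average" if cu > 0.33 else "unfavourable"
    print(f"{name}: CI={ci:.2f} ({importance}), CU={cu:.2f} ({utility})")
```

A contrastive explanation in the sense of the abstract could then be obtained by computing the same CI and CU values for a contrast class and reporting the features for which they differ most from those of the predicted class.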
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Textual Explanations and Critiques in Recommendation Systems [8.406549970145846]
This dissertation focuses on two fundamental challenges in addressing this need.
The first involves explanation generation in a scalable and data-driven manner.
The second challenge consists in making explanations actionable, and we refer to it as critiquing.
arXiv Detail & Related papers (2022-05-15T11:59:23Z)
- Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Efficient computation of contrastive explanations [8.132423340684568]
We study the relation of contrastive and counterfactual explanations.
We propose a 2-phase algorithm for efficiently computing (plausible) pertinent positives of many standard machine learning models.
arXiv Detail & Related papers (2020-10-06T11:50:28Z)
- Interpretable Representations in Explainable AI: From Theory to Practice [7.031336702345381]
Interpretable representations are the backbone of many explainers that target black-box predictive systems.
We study properties of interpretable representations that encode presence and absence of human-comprehensible concepts.
arXiv Detail & Related papers (2020-08-16T21:44:03Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences.