One Explanation Does Not Fit All: The Promise of Interactive
Explanations for Machine Learning Transparency
- URL: http://arxiv.org/abs/2001.09734v1
- Date: Mon, 27 Jan 2020 13:10:12 GMT
- Title: One Explanation Does Not Fit All: The Promise of Interactive
Explanations for Machine Learning Transparency
- Authors: Kacper Sokol and Peter Flach
- Abstract summary: We discuss the promises of Interactive Machine Learning for improved transparency of black-box systems.
We show how to personalise counterfactual explanations by interactively adjusting their conditional statements.
We argue that adjusting the explanation itself and its content is more important than customising the medium of interaction.
- Score: 21.58324172085553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The need for transparency of predictive systems based on Machine Learning
algorithms arises as a consequence of their ever-increasing proliferation in
the industry. Whenever black-box algorithmic predictions influence human
affairs, the inner workings of these algorithms should be scrutinised and their
decisions explained to the relevant stakeholders, including the system
engineers, the system's operators and the individuals whose case is being
decided. While a variety of interpretability and explainability methods is
available, none of them is a panacea that can satisfy all diverse expectations
and competing objectives that might be required by the parties involved. We
address this challenge in this paper by discussing the promises of Interactive
Machine Learning for improved transparency of black-box systems using the
example of contrastive explanations -- a state-of-the-art approach to
Interpretable Machine Learning.
Specifically, we show how to personalise counterfactual explanations by
interactively adjusting their conditional statements and extract additional
explanations by asking follow-up "What if?" questions. Our experience in
building, deploying and presenting this type of system allowed us to list
desired properties as well as potential limitations, which can be used to guide
the development of interactive explainers. While customising the medium of
interaction, i.e., the user interface comprising various communication
channels, may give an impression of personalisation, we argue that adjusting
the explanation itself and its content is more important. To this end,
properties such as breadth, scope, context, purpose and target of the
explanation have to be considered, in addition to explicitly informing the
explainee about its limitations and caveats...
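The sketch below illustrates the interaction pattern described above: the explainee restricts which features (conditional statements) of a counterfactual may vary and then asks a follow-up "What if?" question. It is a minimal, hedged example using a scikit-learn decision tree on the Iris data as a stand-in black box; the brute-force search and all function names are illustrative assumptions, not the authors' actual system.

```python
# Minimal sketch (NOT the authors' system): interactively personalised
# counterfactuals, where the explainee chooses which features may change.
import itertools
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(random_state=0).fit(X, y)  # stand-in black box

def counterfactual(instance, model, mutable, grid_steps=10):
    """Smallest change to the user-selected *mutable* features that flips
    the prediction (brute-force grid search, for clarity only)."""
    original = model.predict([instance])[0]
    lo, hi = X.min(axis=0), X.max(axis=0)
    grids = [np.linspace(lo[i], hi[i], grid_steps) if i in mutable
             else [instance[i]] for i in range(len(instance))]
    best, best_dist = None, np.inf
    for candidate in itertools.product(*grids):
        candidate = np.asarray(candidate)
        if model.predict([candidate])[0] != original:
            dist = np.abs(candidate - instance).sum()
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best

def what_if(instance, model, feature, value):
    """Follow-up "What if?" question: alter one feature and re-predict."""
    modified = np.array(instance, dtype=float)
    modified[feature] = value
    return model.predict([modified])[0]

x = X[0]
# The explainee decides that only petal length/width (features 2 and 3) may vary.
print("Personalised counterfactual:", counterfactual(x, model, mutable={2, 3}))
print("What if petal width were 2.0?", what_if(x, model, feature=3, value=2.0))
```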
Related papers
- What and How of Machine Learning Transparency: Building Bespoke
Explainability Tools with Interoperable Algorithmic Components [77.87794937143511]
This paper introduces a collection of hands-on training materials for explaining data-driven predictive models.
These resources cover the three core building blocks of this technique: interpretable representation composition, data sampling and explanation generation.
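As a rough illustration of how these three building blocks fit together, the sketch below assembles a LIME-style surrogate explainer; the binary "closeness" representation, Gaussian sampling and ridge surrogate are assumptions chosen for brevity, not the components used in the training materials.

```python
# Hedged sketch of a surrogate explainer assembled from the three building
# blocks named above; all concrete choices here are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)
instance = X[0]

# (1) Interpretable representation composition: binary indicators meaning
#     "this feature is close to its value in the explained instance".
def to_interpretable(samples, instance, tol):
    return (np.abs(samples - instance) <= tol).astype(int)

# (2) Data sampling: draw points in the neighbourhood of the instance.
rng = np.random.default_rng(0)
samples = instance + rng.normal(scale=X.std(axis=0), size=(500, X.shape[1]))
Z = to_interpretable(samples, instance, tol=X.std(axis=0))

# (3) Explanation generation: fit a simple linear surrogate to the black
#     box's predictions, expressed in the interpretable space.
predicted_class = black_box.predict([instance])[0]
target = black_box.predict_proba(samples)[:, predicted_class]
surrogate = Ridge(alpha=1.0).fit(Z, target)
print("Feature influences (surrogate coefficients):", surrogate.coef_)
```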
arXiv Detail & Related papers (2022-09-08T13:33:25Z)
- Explainable Predictive Process Monitoring: A User Evaluation [62.41400549499849]
Explainability is motivated by the lack of transparency of black-box Machine Learning approaches.
We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
arXiv Detail & Related papers (2022-02-15T22:24:21Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Expressive Explanations of DNNs by Combining Concept Analysis with ILP [0.3867363075280543]
We use inherent features learned by the network to build a global, expressive, verbal explanation of the rationale of a feed-forward convolutional deep neural network (DNN).
We show that our explanation is faithful to the original black-box model.
arXiv Detail & Related papers (2021-05-16T07:00:27Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Counterfactual Explanations for Machine Learning: A Review [5.908471365011942]
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to established legal doctrine in many countries.
arXiv Detail & Related papers (2020-10-20T20:08:42Z)
- Interpretable Representations in Explainable AI: From Theory to Practice [7.031336702345381]
Interpretable representations are the backbone of many explainers that target black-box predictive systems.
We study properties of interpretable representations that encode presence and absence of human-comprehensible concepts.
arXiv Detail & Related papers (2020-08-16T21:44:03Z)
- Machine Learning Explainability for External Stakeholders [27.677158604772238]
There have been growing calls to open the black box and to make machine learning algorithms more explainable.
We conducted a day-long workshop with academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability.
We provide a short summary of various case studies of explainable machine learning, lessons from those studies, and discuss open challenges.
arXiv Detail & Related papers (2020-07-10T14:27:06Z)
- Explanations of Black-Box Model Predictions by Contextual Importance and Utility [1.7188280334580195]
We present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations easily understandable by experts as well as novice users.
This method explains the prediction results without transforming the model into an interpretable one.
We show the utility of explanations in a car selection example and Iris flower classification by presenting complete (i.e., the causes of an individual prediction) and contrastive explanations.
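The sketch below approximates contextual-importance/utility-style quantities by sweeping one feature over its observed range while keeping the rest of the instance (the context) fixed; it is only an illustrative approximation and does not reproduce the exact CI and CU definitions or normalisation from the paper.

```python
# Illustrative approximation of CI/CU-style quantities; see the paper for
# the exact definitions. All concrete choices here are assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def ci_cu(instance, feature, model, X, steps=50):
    cls = model.predict([instance])[0]
    current = model.predict_proba([instance])[0, cls]
    # Sweep one feature over its observed range, context held fixed.
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), steps)
    variants = np.tile(instance, (steps, 1))
    variants[:, feature] = grid
    scores = model.predict_proba(variants)[:, cls]
    # Importance: share of the output range this feature spans in context
    # (predicted probabilities already lie in [0, 1]).
    ci = scores.max() - scores.min()
    # Utility: where the current output sits within that range.
    cu = (current - scores.min()) / (scores.max() - scores.min() + 1e-12)
    return ci, cu

print("CI/CU-style scores for petal length:", ci_cu(X[0], 2, model, X))
```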
arXiv Detail & Related papers (2020-05-30T06:49:50Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.