Explanatory Pluralism in Explainable AI
- URL: http://arxiv.org/abs/2106.13976v1
- Date: Sat, 26 Jun 2021 09:02:06 GMT
- Title: Explanatory Pluralism in Explainable AI
- Authors: Yiheng Yao
- Abstract summary: I chart a taxonomy of types of explanation and the associated XAI methods that can address them.
When we look to expose the inner mechanisms of AI models, we produce Diagnostic-explanations.
When we wish to form stable generalizations of our models, we produce Expectation-explanations.
Finally, when we want to justify the usage of a model, we produce Role-explanations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The increasingly widespread application of AI models motivates increased
demand for explanations from a variety of stakeholders. However, this demand is
ambiguous because there are many types of 'explanation' with different
evaluative criteria. In the spirit of pluralism, I chart a taxonomy of types of
explanation and the associated XAI methods that can address them. When we look
to expose the inner mechanisms of AI models, we develop
Diagnostic-explanations. When we seek to render model output understandable, we
produce Explication-explanations. When we wish to form stable generalizations
of our models, we produce Expectation-explanations. Finally, when we want to
justify the usage of a model, we produce Role-explanations that situate models
within their social context. The motivation for such a pluralistic view stems
from a consideration of causes as manipulable relationships and the different
types of explanations as identifying the relevant points in AI systems we can
intervene upon to effect our desired changes. This paper reduces the ambiguity
in the use of the word 'explanation' in the field of XAI, giving practitioners
and stakeholders a useful template for avoiding equivocation and for evaluating
XAI methods and putative explanations.
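To make that template concrete, the sketch below encodes the four explanation types and their purposes as a small checklist for disambiguating a stakeholder's request; the code structure and the example request are illustrative assumptions, and only the four categories and their purposes come from the abstract.

```python
# Illustrative sketch only: the four explanation types named in the abstract,
# used as a checklist for disambiguating a request for "an explanation".
# The dictionary layout and the example request are assumptions, not the paper's.
EXPLANATION_TYPES = {
    "Diagnostic":  "exposes the inner mechanisms of the AI model",
    "Explication": "renders a particular model output understandable",
    "Expectation": "supports stable generalizations about the model's behaviour",
    "Role":        "justifies the usage of the model within its social context",
}

def disambiguate(request: str) -> None:
    """List the candidate readings of an ambiguous request for an explanation."""
    print(f"Request: {request!r} could be asking for:")
    for name, purpose in EXPLANATION_TYPES.items():
        print(f"  - a {name}-explanation, one that {purpose}")

disambiguate("Please explain the loan-approval model.")
```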
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential to be misunderstood.
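As a point of reference for the kind of artifact being evaluated, here is a minimal sketch of a plain gradient saliency map, assuming PyTorch and an untrained ResNet-18 on a random input; this is not the models, data, or saliency methods used in the paper.

```python
# Minimal gradient-saliency sketch. Assumptions: PyTorch/torchvision and an
# untrained ResNet-18 on a random input; not the paper's experimental setup.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in image

logits = model(image)
target = logits.argmax(dim=1).item()   # class whose evidence we visualize
logits[0, target].backward()           # gradient of that logit w.r.t. the pixels

# Per-pixel importance: maximum absolute gradient across colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
print(saliency.shape)
```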
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- What's meant by explainable model: A Scoping Review [0.38252451346419336]
This paper investigates whether the term 'explainable model' is adopted by authors under the assumption that incorporating a post-hoc XAI method suffices to characterize a model as explainable.
We found that 81% of the application papers that describe their approach as an explainable model do not conduct any form of evaluation of the XAI method they used.
arXiv Detail & Related papers (2023-07-18T22:55:04Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Explaining Causal Models with Argumentation: the Case of Bi-variate Reinforcement [15.947501347927687]
We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models.
The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds.
We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.
arXiv Detail & Related papers (2022-05-23T19:39:51Z)
- CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to the current methods in XAI that generate explanations as a single shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)
- Do not explain without context: addressing the blind spot of model explanations [2.280298858971133]
This paper highlights a blind spot which is often overlooked when monitoring and auditing machine learning models.
We point out that many model explanations depend directly or indirectly on the choice of the reference data distribution.
We showcase examples where small changes in that distribution lead to drastic changes in the explanations, such as a change in trend or, alarmingly, in the conclusion drawn.
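As a toy illustration of this dependence (an assumed linear model and two assumed baselines, not the paper's examples), a simple gradient x (input - baseline) attribution flips sign when the reference point moves:

```python
# Toy sketch of how an attribution depends on the chosen reference point.
# The linear model and both baselines are assumptions, not the paper's examples.
import numpy as np

weights = np.array([2.0, -1.0])   # simple linear model f(x) = w . x
x = np.array([0.5, 0.5])          # the instance being explained

def attribution(x, baseline):
    """Gradient x (input - baseline); exact for a linear model."""
    return weights * (x - baseline)

baseline_zeros = np.zeros(2)            # reference: the all-zero input
baseline_mean = np.array([1.0, 1.0])    # reference: the mean of some dataset

print(attribution(x, baseline_zeros))   # [ 1.0, -0.5]
print(attribution(x, baseline_mean))    # [-1.0,  0.5]  -> both signs flip
```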
arXiv Detail & Related papers (2021-05-28T12:48:40Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
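For orientation, the following is a generic sketch of gradient-based counterfactual search in a latent space; the `encoder`, `decoder`, and `classifier` modules are assumed pretrained components, and this illustrates the general family of methods rather than the paper's disentangled, diversity-constrained approach.

```python
# Generic latent-space counterfactual search. Assumptions: `encoder`, `decoder`,
# and `classifier` are pretrained PyTorch modules; this is not the paper's
# disentangled or diversity-enforcing method.
import torch
import torch.nn.functional as F

def latent_counterfactual(x, target_class, encoder, decoder, classifier,
                          steps=200, lr=0.05, sparsity=0.1):
    """x: input batch; target_class: LongTensor of desired class indices."""
    z = encoder(x).detach()                           # latent code of the input
    delta = torch.zeros_like(z, requires_grad=True)   # learnable latent perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_cf = decoder(z + delta)                     # decoded counterfactual
        logits = classifier(x_cf)
        # Push the prediction toward the target class; keep the perturbation small.
        loss = F.cross_entropy(logits, target_class) + sparsity * delta.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z + delta).detach()
```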
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- The Grammar of Interactive Explanatory Model Analysis [7.812073412066698]
We show how different Explanatory Model Analysis (EMA) methods complement each other.
We formalize the grammar of Interactive Explanatory Model Analysis (IEMA) to describe potential human-model dialogues.
IEMA is implemented in a widely used human-centered open-source software framework.
arXiv Detail & Related papers (2020-05-01T17:12:22Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system so that the interaction with the user can provide information on demand and be closer in nature to human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- The Pragmatic Turn in Explainable Artificial Intelligence (XAI) [0.0]
I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI.
I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post-hoc interpretability.
arXiv Detail & Related papers (2020-02-22T01:40:01Z)