Axe the X in XAI: A Plea for Understandable AI
- URL: http://arxiv.org/abs/2403.00315v1
- Date: Fri, 1 Mar 2024 06:28:53 GMT
- Title: Axe the X in XAI: A Plea for Understandable AI
- Authors: Andrés Páez
- Abstract summary: I argue that the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation.
It would be more fruitful to use the label "understandable AI" to avoid the confusion that surrounds the goal and purposes of XAI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity
of the term "explanation" in explainable AI (XAI) can be solved by adopting any
of four different extant accounts of explanation in the philosophy of science:
the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New
Mechanist models. In this chapter, I show that the authors' claim that these
accounts can be applied to deep neural networks as they would to any natural
phenomenon is mistaken. I also provide a more general argument as to why the
notion of explainability as it is currently used in the XAI literature bears
little resemblance to the traditional concept of scientific explanation. It
would be more fruitful to use the label "understandable AI" to avoid the
confusion that surrounds the goal and purposes of XAI. In the second half of
the chapter, I argue for a pragmatic conception of understanding that is better
suited to play the central role attributed to explanation in XAI. Following
Kuorikoski & Ylikoski (2015), the conditions of satisfaction for understanding
an ML system are fleshed out in terms of an agent's success in using the
system, that is, in drawing correct inferences from it.
Related papers
- Understanding XAI Through the Philosopher's Lens: A Historical Perspective [5.839350214184222]
We show that a gradual progression has independently occurred in both domains from logical-deductive to statistical models of explanation.
Similar concepts have independently emerged in both domains, such as the relation between explanation and understanding and the importance of pragmatic factors.
arXiv Detail & Related papers (2024-07-26T14:44:49Z) - Forms of Understanding of XAI-Explanations [2.887772793510463]
This article aims to present a model of forms of understanding in the context of Explainable Artificial Intelligence (XAI).
Two types of understanding are considered as possible outcomes of explanations, namely enabledness and comprehension.
Special challenges of understanding in XAI are discussed.
arXiv Detail & Related papers (2023-11-15T08:06:51Z) - A psychological theory of explainability [5.715103211247915]
We propose a theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation.
Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparison to the explanations they themselves would give.
arXiv Detail & Related papers (2022-05-17T15:52:24Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - Making Things Explainable vs Explaining: Requirements and Challenges
under the GDPR [2.578242050187029]
ExplanatorY AI (YAI) builds on XAI with the goal of collecting and organizing explainable information.
We represent the problem of generating explanations for Automated Decision-Making systems (ADMs) as the identification of an appropriate path over an explanatory space.
arXiv Detail & Related papers (2021-10-02T08:48:47Z) - CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing
Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to current XAI methods that generate explanations as a single-shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups (people with and without an AI background) perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Explainable AI without Interpretable Model [0.0]
It has become more important than ever that AI systems be able to explain the reasoning behind their results to end-users.
Most Explainable AI (XAI) methods are based on extracting an interpretable model that can be used for producing explanations.
The notions of Contextual Importance and Utility (CIU) presented in this paper make it possible to produce human-like explanations of black-box outcomes directly (a minimal sketch of the CIU quantities appears after this list).
arXiv Detail & Related papers (2020-09-29T13:29:44Z) - Machine Reasoning Explainability [100.78417922186048]
Machine Reasoning (MR) uses largely symbolic means to formalize and emulate abstract reasoning.
Studies in early MR have notably started inquiries into Explainable AI (XAI).
This document reports our work in-progress on MR explainability.
arXiv Detail & Related papers (2020-09-01T13:45:05Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We use the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
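For readers unfamiliar with CIU, mentioned in the "Explainable AI without Interpretable Model" entry above, the following is a minimal sketch of the two quantities as they are commonly defined in the CIU literature: vary one feature over its admissible range while holding the rest of the instance fixed, then read importance and utility off the resulting output range. The model, function name, and parameters here are illustrative assumptions, not the paper's own code or API.

```python
import numpy as np

def contextual_importance_utility(f, x, j, feature_range,
                                  out_range=(0.0, 1.0), n_samples=200):
    """Sketch of Contextual Importance (CI) and Contextual Utility (CU) for one feature.

    f             -- black-box model: 1-D numpy array -> scalar output (assumed interface)
    x             -- the instance being explained (1-D array-like)
    j             -- index of the feature to vary
    feature_range -- (low, high) admissible range of feature j
    out_range     -- (min, max) of the model's possible outputs
    """
    low, high = feature_range
    x = np.asarray(x, dtype=float)

    # Vary feature j over its range while keeping every other feature fixed at x.
    outputs = []
    for v in np.linspace(low, high, n_samples):
        x_v = x.copy()
        x_v[j] = v
        outputs.append(f(x_v))
    cmin, cmax = min(outputs), max(outputs)

    # CI: how much of the total output range feature j can account for in this context.
    ci = (cmax - cmin) / (out_range[1] - out_range[0])
    # CU: how favourable the instance's actual value of feature j is within that span.
    cu = (f(x) - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

# Toy usage: importance/utility of feature 0 of a two-feature additive model.
model = lambda z: float(z.sum())
ci, cu = contextual_importance_utility(model, [0.2, 0.7], j=0,
                                       feature_range=(0.0, 1.0), out_range=(0.0, 2.0))
```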