Understanding XAI Through the Philosopher's Lens: A Historical Perspective
- URL: http://arxiv.org/abs/2407.18782v1
- Date: Fri, 26 Jul 2024 14:44:49 GMT
- Title: Understanding XAI Through the Philosopher's Lens: A Historical Perspective
- Authors: Martina Mattioli, Antonio Emanuele Cinà, Marcello Pelillo
- Abstract summary: We show that a gradual progression has independently occurred in both domains from logical-deductive to statistical models of explanation.
Similar concepts have independently emerged in both domains, such as the relation between explanation and understanding and the importance of pragmatic factors.
- Score: 5.839350214184222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although explainable AI (XAI) has recently become a hot topic and several different approaches have been developed, there is still a widespread belief that it lacks a convincing unifying foundation. On the other hand, over the past centuries, the very concept of explanation has been the subject of extensive philosophical analysis in an attempt to address the fundamental question of "why" in the context of scientific law. However, this discussion has rarely been connected with XAI. This paper tries to fill this gap and aims to explore the concept of explanation in AI through an epistemological lens. By comparing the historical development of both the philosophy of science and AI, an intriguing picture emerges. Specifically, we show that a gradual progression has independently occurred in both domains from logical-deductive to statistical models of explanation, thereby experiencing in both cases a paradigm shift from deterministic to nondeterministic and probabilistic causality. Interestingly, we also notice that similar concepts have independently emerged in both realms, such as the relation between explanation and understanding and the importance of pragmatic factors. Our study aims to be the first step towards understanding the philosophical underpinnings of the notion of explanation in AI, and we hope that our findings will shed some fresh light on the elusive nature of XAI.
Related papers
- Interpretability Needs a New Paradigm [49.134097841837715]
Interpretability is the study of explaining models in understandable terms to humans.
At the core of the debate between paradigms is how each ensures its explanations are faithful, i.e., true to the model's behavior.
This paper's position is that we should think about new paradigms while staying vigilant regarding faithfulness.
arXiv Detail & Related papers (2024-05-08T19:31:06Z)
- Axe the X in XAI: A Plea for Understandable AI [0.0]
I argue that the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation.
It would be more fruitful to use the label "understandable AI" to avoid the confusion that surrounds the goal and purposes of XAI.
arXiv Detail & Related papers (2024-03-01T06:28:53Z)
- Forms of Understanding of XAI-Explanations [2.887772793510463]
This article aims to present a model of forms of understanding in the context of Explainable Artificial Intelligence (XAI).
Two types of understanding are considered as possible outcomes of explanations, namely enabledness and comprehension.
Special challenges of understanding in XAI are discussed.
arXiv Detail & Related papers (2023-11-15T08:06:51Z)
- The role of causality in explainable artificial intelligence [1.049712834719005]
Causality and eXplainable Artificial Intelligence (XAI) have developed as separate fields in computer science.
We investigate the literature to try to understand how and to what extent causality and XAI are intertwined.
arXiv Detail & Related papers (2023-09-18T16:05:07Z)
- Adding Why to What? Analyses of an Everyday Explanation [0.0]
We investigated 20 game explanations using the theory as an analytical framework.
We found that explainers focused first on the physical aspects of the game (Architecture) and only later on aspects of Relevance.
Shifting between addressing the two sides was justified by explanation goals, emerging misunderstandings, and the knowledge needs of the explainee.
arXiv Detail & Related papers (2023-08-08T11:17:22Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation in pursuit of better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Argumentative XAI: A Survey [15.294433619347082]
We overview XAI approaches built using methods from the field of computational argumentation.
We focus on different types of explanation (intrinsic and post-hoc), different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use.
arXiv Detail & Related papers (2021-05-24T13:32:59Z)
- Machine Reasoning Explainability [100.78417922186048]
Machine Reasoning (MR) uses largely symbolic means to formalize and emulate abstract reasoning.
Studies in early MR have notably started inquiries into Explainable AI (XAI).
This document reports our work in progress on MR explainability.
arXiv Detail & Related papers (2020-09-01T13:45:05Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.