Causality-Inspired Taxonomy for Explainable Artificial Intelligence
- URL: http://arxiv.org/abs/2208.09500v2
- Date: Mon, 4 Mar 2024 16:39:15 GMT
- Title: Causality-Inspired Taxonomy for Explainable Artificial Intelligence
- Authors: Pedro C. Neto, Tiago Gonçalves, João Ribeiro Pinto, Wilson
Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso
- Abstract summary: We propose a novel causality-inspired framework for xAI that creates an environment for the development of xAI approaches.
We have analysed 81 research papers on a myriad of biometric modalities and different tasks.
- Score: 10.241230325171143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As two sides of the same coin, causality and explainable artificial
intelligence (xAI) were initially proposed and developed with different goals.
However, the latter can only be complete when seen through the lens of the
causality framework. As such, we propose a novel causality-inspired framework
for xAI that creates an environment for the development of xAI approaches. To
show its applicability, biometrics was used as a case study. For this, we have
analysed 81 research papers on a myriad of biometric modalities and different
tasks. We have categorised each of these methods according to our novel xAI
Ladder and discussed the future directions of the field.
Related papers
- The role of causality in explainable artificial intelligence [1.049712834719005]
Causality and eXplainable Artificial Intelligence (XAI) have developed as separate fields in computer science.
We investigate the literature to try to understand how and to what extent causality and XAI are intertwined.
arXiv Detail & Related papers (2023-09-18T16:05:07Z)
- A Theoretical Framework for AI Models Explainability with Application in
Biomedicine [3.5742391373143474]
We propose a novel definition of explanation that is a synthesis of what can be found in the literature.
We fit explanations into the properties of faithfulness (i.e., the explanation being a true description of the model's inner workings and decision-making process) and plausibility (i.e., how convincing the explanation looks to the user); a toy deletion-style check of faithfulness is sketched after this list.
arXiv Detail & Related papers (2022-12-29T20:05:26Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted towards more pragmatic explanation approaches aimed at better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- INTERACTION: A Generative XAI Framework for Natural Language Inference
Explanations [58.062003028768636]
Current XAI approaches focus only on delivering a single explanation.
This paper proposes a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder)
Our novel framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation.
arXiv Detail & Related papers (2022-09-02T13:52:39Z)
- Understanding Narratives through Dimensions of Analogy [17.68704739786042]
Analogical reasoning is a powerful tool that enables humans to connect two situations, and to generalize their knowledge from familiar to novel situations.
Modern scalable AI techniques with the potential to reason by analogy have so far been applied only to the special case of proportional analogy.
In this paper, we aim to bridge the gap by: 1) formalizing six dimensions of analogy based on mature insights from Cognitive Science research, 2) annotating a corpus of fables with each of these dimensions, and 3) defining four tasks with increasing complexity that enable scalable evaluation of AI techniques.
arXiv Detail & Related papers (2022-06-14T20:56:26Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome (an illustrative counterfactual-search sketch follows this list).
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Argumentative XAI: A Survey [15.294433619347082]
We overview XAI approaches built using methods from the field of computational argumentation.
We focus on different types of explanation (intrinsic and post-hoc), different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use.
arXiv Detail & Related papers (2021-05-24T13:32:59Z)
- Opportunities and Challenges in Explainable Artificial Intelligence
(XAI): A Survey [2.7086321720578623]
The black-box nature of deep neural networks challenges their use in mission-critical applications.
XAI promotes a set of tools, techniques, and algorithms that can generate high-quality interpretable, intuitive, human-understandable explanations of AI decisions.
arXiv Detail & Related papers (2020-06-16T02:58:10Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
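As a rough illustration of the faithfulness property described in the entry on A Theoretical Framework for AI Models Explainability, the sketch below ablates the features an explanation ranks as most important and measures how much a toy model's score changes; a faithful attribution should produce a large change. The linear model, its weights, and both example attributions are invented purely for illustration and do not come from any of the papers above.

```python
import numpy as np

# Toy linear scorer standing in for "the model"; the weights are invented.
W = np.array([2.0, -1.0, 0.5, 0.1])

def model(x):
    return float(W @ x)

def deletion_faithfulness(x, importance, k=2, baseline=0.0):
    """Ablate the k features the explanation ranks as most important and
    return the resulting drop in the model's score (larger = more faithful)."""
    top = np.argsort(-np.abs(importance))[:k]
    x_ablated = x.copy()
    x_ablated[top] = baseline
    return model(x) - model(x_ablated)

x = np.array([1.0, 0.5, -1.0, 2.0])
aligned_expl = W * x                             # attribution consistent with the model
unrelated_expl = np.array([0.0, 0.0, 1.0, 1.0])  # attribution ignoring the model
print(deletion_faithfulness(x, aligned_expl))    # large drop
print(deletion_faithfulness(x, unrelated_expl))  # small (here, negative) drop
```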
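The two counterfactual entries above (Towards Human Cognition Level-based Experiment Design and CEILS) both revolve around finding a small change to an input that achieves a desired model outcome. The sketch below shows the generic, input-space version of that idea as a proximity-regularised gradient search (in the spirit of Wachter et al.) on a toy logistic model; the weights, hyperparameters, and function names are assumptions made for illustration, and CEILS itself differs by performing the intervention in a latent space that encodes the data's causal relations.

```python
import numpy as np

# Toy differentiable classifier: logistic regression with invented weights.
W = np.array([1.5, -2.0, 0.8])
b = -0.1

def predict_proba(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def counterfactual(x0, target=0.9, lam=0.01, lr=0.05, steps=500):
    """Gradient search for a nearby input whose predicted probability
    approaches `target`, minimising (f(x) - target)^2 + lam * ||x - x0||^2."""
    x = x0.copy()
    for _ in range(steps):
        p = predict_proba(x)
        grad_pred = 2.0 * (p - target) * p * (1.0 - p) * W  # chain rule: dp/dx = p(1-p)W
        grad_prox = 2.0 * lam * (x - x0)                    # stay close to the original input
        x -= lr * (grad_pred + grad_prox)
    return x

x0 = np.array([0.2, 0.9, -0.3])   # original instance, predicted negative
x_cf = counterfactual(x0)
print(predict_proba(x0), predict_proba(x_cf))  # low probability vs. near-target probability
print(x_cf - x0)                               # the "features that need to be changed"
```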
This list is automatically generated from the titles and abstracts of the papers on this site.