The role of causality in explainable artificial intelligence
- URL: http://arxiv.org/abs/2309.09901v1
- Date: Mon, 18 Sep 2023 16:05:07 GMT
- Title: The role of causality in explainable artificial intelligence
- Authors: Gianluca Carloni, Andrea Berti, Sara Colantonio
- Abstract summary: Causality and eXplainable Artificial Intelligence (XAI) have developed as separate fields in computer science.
We investigate the literature to try to understand how and to what extent causality and XAI are intertwined.
- Score: 1.049712834719005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causality and eXplainable Artificial Intelligence (XAI) have developed as
separate fields in computer science, even though the underlying concepts of
causation and explanation share common ancient roots. This is further reinforced
by the lack of review works jointly covering these two fields. In this paper,
we investigate the literature to try to understand how and to what extent
causality and XAI are intertwined. More precisely, we seek to uncover what
kinds of relationships exist between the two concepts and how one can benefit
from them, for instance, in building trust in AI systems. As a result, three
main perspectives are identified. In the first one, the lack of causality is
seen as one of the major limitations of current AI and XAI approaches, and the
"optimal" form of explanations is investigated. The second is a pragmatic
perspective and considers XAI as a tool to foster scientific exploration for
causal inquiry, via the identification of pursue-worthy experimental
manipulations. Finally, the third perspective supports the idea that causality
is propaedeutic to XAI in three possible manners: exploiting concepts borrowed
from causality to support or improve XAI, utilizing counterfactuals for
explainability, and regarding access to a causal model as an explanation in itself.
To complement our analysis, we also provide relevant software solutions used to
automate causal tasks. We believe our work provides a unified view of the two
fields of causality and XAI by highlighting potential domain bridges and
uncovering possible limitations.
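The abstract mentions software solutions for automating causal tasks, such as estimating the effect of an intervention from observational data. As a minimal sketch of one such task, the following stdlib-only snippet (simulated data and effect sizes are hypothetical, not from the paper) contrasts a naive treatment-effect estimate with a backdoor-adjusted one that stratifies on a confounder; dedicated toolkits automate this and far more general settings.

```python
import random

def simulate(n=20000, seed=0):
    """Simulate confounded data: Z confounds treatment T and outcome Y."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        z = rng.random() < 0.5                  # binary confounder
        t = rng.random() < (0.8 if z else 0.2)  # Z raises treatment probability
        # True causal effect of T on Y is +0.2; Z adds +0.4 on its own.
        p_y = 0.1 + 0.2 * t + 0.4 * z
        y = rng.random() < p_y
        rows.append((z, t, y))
    return rows

def naive_effect(rows):
    """Unadjusted difference in outcome rates between treated and untreated."""
    def rate(t_val):
        ys = [y for z, t, y in rows if t == t_val]
        return sum(ys) / len(ys)
    return rate(True) - rate(False)

def adjusted_effect(rows):
    """Backdoor adjustment: stratify on Z, weight strata effects by P(Z)."""
    effect = 0.0
    for z_val in (False, True):
        stratum = [(t, y) for z, t, y in rows if z == z_val]
        def rate(t_val):
            ys = [y for t, y in stratum if t == t_val]
            return sum(ys) / len(ys)
        p_z = len(stratum) / len(rows)
        effect += p_z * (rate(True) - rate(False))
    return effect

rows = simulate()
print(round(naive_effect(rows), 2))     # inflated by the confounder (~0.44)
print(round(adjusted_effect(rows), 2))  # close to the true effect of 0.2
```

The naive estimate absorbs the confounder's influence, while stratifying on Z recovers the effect that was built into the simulation.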
Related papers
- Dataset | Mindset = Explainable AI | Interpretable AI [36.001670039529586]
Practitioners use "explainable AI (XAI)" and "interpretable AI (IAI)" interchangeably when applying various XAI tools to a given dataset to explain the reasons that underpin machine learning (ML) outputs.
We argue that XAI is a subset of IAI: the concept of IAI extends beyond the sphere of a dataset to include the domain of a mindset.
We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.
arXiv Detail & Related papers (2024-08-22T14:12:53Z) - Understanding XAI Through the Philosopher's Lens: A Historical Perspective [5.839350214184222]
We show that a gradual progression has independently occurred in both domains from logical-deductive to statistical models of explanation.
Similar concepts have emerged independently in both fields, such as the relation between explanation and understanding and the importance of pragmatic factors.
arXiv Detail & Related papers (2024-07-26T14:44:49Z) - Selective Explanations: Leveraging Human Input to Align Explainable AI [40.33998268146951]
We propose a general framework for generating selective explanations by leveraging human input on a small sample.
As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task.
Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI.
arXiv Detail & Related papers (2023-01-23T19:00:02Z) - Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Causality-Inspired Taxonomy for Explainable Artificial Intelligence [10.241230325171143]
We propose a novel causality-inspired framework for xAI that creates an environment for the development of xAI approaches.
We have analysed 81 research papers on a myriad of biometric modalities and different tasks.
arXiv Detail & Related papers (2022-08-19T18:26:35Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
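The counterfactual idea described above can be sketched minimally as a search for the smallest feature change that flips a model's decision. The snippet below uses a hypothetical linear scoring model and a greedy single-feature search; it is a toy illustration of the general concept, not the CEILS method, which works with interventions in a latent causal space.

```python
def score(x, weights, bias):
    """Hypothetical linear decision model: approve when score >= 0."""
    return sum(w * v for w, v in zip(weights, x)) + bias

def counterfactual(x, weights, bias, step=1.0, max_iters=100):
    """Greedily nudge the most influential feature until the decision flips.

    A toy sketch: real counterfactual methods also weigh feasibility and
    proximity of the proposed change, as the surveyed paper emphasizes.
    """
    x = list(x)
    for _ in range(max_iters):
        if score(x, weights, bias) >= 0:
            return x  # desired outcome reached
        # Pick the feature with the largest weight magnitude and move it
        # in the direction that increases the score.
        i = max(range(len(weights)), key=lambda j: abs(weights[j]))
        x[i] += step if weights[i] > 0 else -step
    return None

weights, bias = [0.5, -0.3], -2.0
x = [1.0, 2.0]                       # initially rejected input
cf = counterfactual(x, weights, bias)
print(cf)  # [6.0, 2.0]: raising feature 0 from 1.0 to 6.0 flips the decision
```

The returned point is the explanation: "had feature 0 been 6.0 instead of 1.0, the outcome would have been positive."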
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.