"Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI
- URL: http://arxiv.org/abs/2207.00007v1
- Date: Mon, 27 Jun 2022 21:42:53 GMT
- Title: "Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI
- Authors: Leilani H. Gilpin, Andrew R. Paley, Mohammed A. Alam, Sarah Spurlock,
Kristian J. Hammond
- Abstract summary: We explore the features of explanations and how to use those features in evaluating their utility.
We focus on the requirements for explanations defined by their functional role, the knowledge states of users who are trying to understand them, and the availability of the information needed to generate them.
- Score: 2.5899040911480173
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: There is broad agreement that Artificial Intelligence (AI) systems,
particularly those using Machine Learning (ML), should be able to "explain"
their behavior. Unfortunately, there is little agreement as to what constitutes
an "explanation." This has caused a disconnect between the explanations that
systems produce in service of explainable Artificial Intelligence (XAI) and
those explanations that users and other audiences actually need, which should
be defined by the full spectrum of functional roles, audiences, and
capabilities for explanation. In this paper, we explore the features of
explanations and how to use those features in evaluating their utility. We
focus on the requirements for explanations defined by their functional role,
the knowledge states of users who are trying to understand them, and the
availability of the information needed to generate them. Further, we discuss
the risk of XAI enabling trust in systems without establishing their
trustworthiness and define a critical next step for the field of XAI to
establish metrics to guide and ground the utility of system-generated
explanations.
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
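As a loose, hypothetical illustration of that dataset-level idea (not the authors' actual SOXAI pipeline), one could score each training sample by how much of its explanation attribution falls on concepts flagged as irrelevant, and drop the worst offenders before retraining; `attribution_per_concept` and the threshold below are assumptions.
```python
# Hypothetical sketch of dataset-level filtering in the spirit of SOXAI:
# drop training samples whose attribution mass concentrates on concepts a
# reviewer flagged as irrelevant, then retrain. All names are illustrative.
from typing import Callable, Dict, List, Set, Tuple

Sample = Tuple[List[float], int]  # (features, label)

def filter_training_set(
    samples: List[Sample],
    attribution_per_concept: Callable[[Sample], Dict[str, float]],
    irrelevant_concepts: Set[str],
    max_irrelevant_mass: float = 0.5,
) -> List[Sample]:
    """Keep samples whose attribution mass on irrelevant concepts stays low."""
    kept = []
    for sample in samples:
        scores = attribution_per_concept(sample)  # assumed attribution method
        total = sum(abs(v) for v in scores.values()) or 1.0
        bad = sum(abs(v) for c, v in scores.items() if c in irrelevant_concepts)
        if bad / total <= max_irrelevant_mass:
            kept.append(sample)
    return kept  # retrain the model on this cleaned subset
```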
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Making Things Explainable vs Explaining: Requirements and Challenges under the GDPR [2.578242050187029]
ExplanatorY AI (YAI) builds on XAI with the goal of collecting and organizing explainable information.
We recast the problem of generating explanations for Automated Decision-Making systems (ADMs) as the identification of an appropriate path over an explanatory space.
arXiv Detail & Related papers (2021-10-02T08:48:47Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
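As a minimal sketch of the basic notion (a naive single-feature search in feature space, not the CEILS method itself, which intervenes in a causally grounded latent space so that the suggested changes remain feasible), the classifier `predict` and the per-feature step sizes below are assumptions:
```python
# Naive single-feature counterfactual search (illustrative only; CEILS instead
# works in a latent space that respects causal relations between features).
from typing import Callable, Dict

def naive_counterfactual(
    x: Dict[str, float],
    predict: Callable[[Dict[str, float]], int],  # assumed trained classifier
    desired: int,
    steps: Dict[str, float],  # assumed per-feature perturbation sizes
    max_scale: int = 10,
) -> Dict[str, float]:
    """Return one feature change that flips the prediction to `desired`."""
    for scale in range(1, max_scale + 1):
        for name, step in steps.items():
            trial = dict(x)
            trial[name] = x[name] + scale * step
            if predict(trial) == desired:
                return {name: trial[name]}  # tell the user: change this feature
    raise ValueError("no single-feature counterfactual found in search budget")
```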
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Explanation Ontology: A Model of Explanations for User-Centered AI [3.1783442097247345]
Explanations have often been added to AI systems in a non-principled, post-hoc manner.
With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration.
We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types.
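For a rough sense of what such a structured representation could look like (illustrative Python dataclasses, not the authors' actual ontology; the attribute and capability names are assumptions):
```python
# Illustrative data model in the spirit of an explanation ontology: explanation
# types are paired with the system capabilities they require, so a designer can
# filter literature-derived types by what a given system can actually supply.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UserAttributes:
    expertise: str   # e.g. "clinician", "lay user"
    question: str    # the question the explanation should answer

@dataclass
class SystemAttributes:
    capabilities: Dict[str, bool]  # e.g. {"feature_importances": True}

@dataclass
class ExplanationType:
    name: str                                        # e.g. "counterfactual"
    requires: List[str] = field(default_factory=list)  # needed capabilities

def applicable_types(
    types: List[ExplanationType], system: SystemAttributes
) -> List[ExplanationType]:
    """Keep only explanation types whose requirements the system can meet."""
    return [
        t for t in types
        if all(system.capabilities.get(r, False) for r in t.requires)
    ]
```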
arXiv Detail & Related papers (2020-10-04T03:53:35Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Directions for Explainable Knowledge-Enabled Systems [3.7250420821969827]
We leverage our survey of explanation literature in Artificial Intelligence and closely related fields to generate a set of explanation types.
We define each type and provide an example question that would motivate the need for this style of explanation.
We believe this set of explanation types will help future system designers in their generation and prioritization of requirements.
arXiv Detail & Related papers (2020-03-17T04:34:29Z)
- Foundations of Explainable Knowledge-Enabled Systems [3.7250420821969827]
We present a historical overview of explainable artificial intelligence systems.
We focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains.
We propose new definitions for explanations and explainable knowledge-enabled systems.
arXiv Detail & Related papers (2020-03-17T04:18:48Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.