Explainable AI without Interpretable Model
- URL: http://arxiv.org/abs/2009.13996v1
- Date: Tue, 29 Sep 2020 13:29:44 GMT
- Title: Explainable AI without Interpretable Model
- Authors: Kary Främling
- Abstract summary: It has become more important than ever that AI systems be able to explain the reasoning behind their results to end-users.
Most Explainable AI (XAI) methods are based on extracting an interpretable model that can be used for producing explanations.
The notions of Contextual Importance and Utility (CIU) presented in this paper make it possible to produce human-like explanations of black-box outcomes directly.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability has been a challenge in AI for as long as AI has existed. With
the recently increased use of AI in society, it has become more important than
ever that AI systems be able to explain the reasoning behind their results to
end-users as well, for instance when someone is eliminated from a recruitment
process or has a bank loan application refused by an AI system.
Especially when an AI system has been trained using Machine Learning, it tends
to contain too many parameters to be analysed and understood, which is why such
systems are called `black-box' systems. Most Explainable AI (XAI)
methods are based on extracting an interpretable model that can be used for
producing explanations. However, the interpretable model does not necessarily
map accurately to the original black-box model. Furthermore, the
understandability of interpretable models for an end-user remains questionable.
The notions of Contextual Importance and Utility (CIU) presented in this paper
make it possible to produce human-like explanations of black-box outcomes
directly, without creating an interpretable model. Therefore, CIU explanations
map accurately to the black-box model itself. CIU is completely model-agnostic
and can be used with any black-box system. In addition to feature importance,
the utility concept that is well-known in Decision Theory provides a new
dimension to explanations compared to most existing XAI methods. Finally, CIU
can produce explanations at any level of abstraction and using different
vocabularies and other means of interaction, which makes it possible to adjust
explanations and interaction according to the context and to the target users.
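To make the idea concrete, below is a minimal Python sketch (not the authors' implementation) of how Contextual Importance (CI) and Contextual Utility (CU) could be estimated for a single feature of a black-box model: the feature is varied over its value range while the other inputs stay at their context values, CI is taken as the share of the model's overall output range that this variation covers, and CU indicates where the current output lies within that contextual range. The function signature, sampling grid, and toy scoring function are illustrative assumptions.

```python
import numpy as np

def ciu(predict, instance, feature_idx, feature_range,
        absmin, absmax, n_samples=100):
    """Sampling-based estimate of Contextual Importance (CI) and
    Contextual Utility (CU) for one feature (illustrative sketch)."""
    # Vary the studied feature over its range while keeping all other
    # features fixed at their values in the explained instance (the context).
    samples = np.tile(instance, (n_samples, 1))
    samples[:, feature_idx] = np.linspace(feature_range[0],
                                          feature_range[1], n_samples)
    outputs = predict(samples)
    cmin, cmax = outputs.min(), outputs.max()
    y = predict(instance.reshape(1, -1))[0]

    # CI: fraction of the model's overall output range [absmin, absmax]
    # that this feature can span in the current context.
    ci = (cmax - cmin) / (absmax - absmin)
    # CU: how favourable the current feature value is within that range.
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

# Toy usage with an assumed black-box scoring function with outputs in [0, 1].
predict = lambda X: 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1])))
ci, cu = ciu(predict, np.array([0.3, 0.8]), feature_idx=0,
             feature_range=(0.0, 1.0), absmin=0.0, absmax=1.0)
print(f"CI = {ci:.2f}, CU = {cu:.2f}")
```

A high CI with a low CU would then be reported, roughly, as "this feature matters a lot in this situation, and its current value is unfavourable", which is the kind of human-like, directly black-box-grounded explanation the abstract describes.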
Related papers
- Relevant Irrelevance: Generating Alterfactual Explanations for Image Classifiers [11.200613814162185]
In this paper, we demonstrate the feasibility of alterfactual explanations for black box image classifiers.
We show for the first time that it is possible to apply this idea to black box models based on neural networks.
arXiv Detail & Related papers (2024-05-08T11:03:22Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems [0.9542023122304099]
We argue that in order to fully understand a decision, not only is knowledge about relevant features needed; awareness of irrelevant information also contributes strongly to the creation of a user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations are suited to convey an understanding of different aspects of the AI's reasoning than established counterfactual explanation methods.
arXiv Detail & Related papers (2022-07-19T16:20:37Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Towards Relatable Explainable AI with the Perceptual Process [5.581885362337179]
We argue that explanations must be more relatable to other concepts, hypotheticals, and associations.
Inspired by cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI.
arXiv Detail & Related papers (2021-12-28T05:48:53Z)
- Conceptual Modeling and Artificial Intelligence: Mutual Benefits from Complementary Worlds [0.0]
We are interested in tackling the intersection of the two thus far mostly isolated disciplines of CM and AI.
The workshop embraces the assumption that manifold mutual benefits can be realized by i) investigating what Conceptual Modeling (CM) can contribute to AI, and ii) vice versa.
arXiv Detail & Related papers (2021-10-16T18:42:09Z)
- CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to the current methods in XAI that generate explanations as a single shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.