Towards Relatable Explainable AI with the Perceptual Process
- URL: http://arxiv.org/abs/2112.14005v1
- Date: Tue, 28 Dec 2021 05:48:53 GMT
- Title: Towards Relatable Explainable AI with the Perceptual Process
- Authors: Wencan Zhang, Brian Y. Lim
- Abstract summary: We argue that explanations must be more relatable to other concepts, hypotheticals, and associations.
Inspired by cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI.
- Score: 5.581885362337179
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models need to provide contrastive explanations, since
people often seek to understand why a puzzling prediction occurred instead of
some expected outcome. Current contrastive explanations are rudimentary
comparisons between examples or raw features, which remain difficult to
interpret, since they lack semantic meaning. We argue that explanations must be
more relatable to other concepts, hypotheticals, and associations. Inspired by
the perceptual process from cognitive psychology, we propose the XAI Perceptual
Processing Framework and RexNet model for relatable explainable AI with
Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues
explanations. We investigated the application of vocal emotion recognition, and
implemented a modular multi-task deep neural network to predict and explain
emotions from speech. From think-aloud and controlled studies, we found that
counterfactual explanations were useful, and were further enhanced by semantic
cues, whereas saliency explanations were not. This work provides insights into providing
and evaluating relatable contrastive explainable AI for perception
applications.
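The abstract describes a modular multi-task network that both predicts a vocal emotion and explains it contrastively ("why P, not Q"). As a rough illustration only, the sketch below pairs a small spectrogram classifier with a gradient-based contrastive saliency; the module sizes, the EmotionRecognizer and contrastive_saliency names, and the use of a logit-difference gradient are assumptions for this sketch, not the authors' RexNet implementation, which additionally produces Counterfactual Synthetic and Contrastive Cues explanations.

```python
import torch
import torch.nn as nn

class EmotionRecognizer(nn.Module):
    """Hypothetical modular model: a shared encoder over log-mel
    spectrograms feeding an emotion classification head."""
    def __init__(self, n_emotions: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared perception module
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(16 * 8 * 8, n_emotions)

    def forward(self, spec):                          # spec: (batch, 1, mels, frames)
        return self.classifier(self.encoder(spec))

def contrastive_saliency(model, spec, predicted, contrast):
    """Gradient of (predicted logit - contrast logit) w.r.t. the input:
    highlights regions that favour the prediction over the contrast class."""
    spec = spec.detach().clone().requires_grad_(True)
    logits = model(spec)
    (logits[:, predicted] - logits[:, contrast]).sum().backward()
    return spec.grad.abs()

model = EmotionRecognizer()
x = torch.randn(1, 1, 64, 128)                        # dummy log-mel spectrogram
pred = model(x).argmax(dim=1).item()
saliency = contrastive_saliency(model, x, pred, (pred + 1) % 4)
print(saliency.shape)                                 # torch.Size([1, 1, 64, 128])
```

In a fuller multi-task setup, a counterfactual-synthesis module (e.g., perturbing the input until the contrast emotion is predicted) and a cue head would sit alongside the classifier; they are omitted here to keep the sketch minimal.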
Related papers
- May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability [17.052366688978935]
We investigate if free-form conversations can enhance users' comprehension of static explanations.
We measure the effect of the conversation on participants' ability to choose from three machine learning models.
Our findings highlight the importance of customized model explanations in the format of free-form conversations.
arXiv Detail & Related papers (2023-09-25T09:00:38Z)
- Exploring Effectiveness of Explanations for Appropriate Trust: Lessons from Cognitive Psychology [3.1945067016153423]
This work draws inspiration from findings in cognitive psychology to understand how effective explanations can be designed.
We identify four components to which explanation designers can pay special attention: perception, semantics, intent, and user & context.
We propose that a significant challenge for effective AI explanations is the additional step required between explanation generation (when the underlying algorithms do not produce interpretable explanations) and explanation communication.
arXiv Detail & Related papers (2022-10-05T13:40:01Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often mis-interpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to the current methods in XAI that generate explanations as a single shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Expressive Explanations of DNNs by Combining Concept Analysis with ILP [0.3867363075280543]
We use inherent features learned by the network to build a global, expressive, verbal explanation of the rationale of a feed-forward convolutional deep neural network (DNN).
We show that our explanation is faithful to the original black-box model.
arXiv Detail & Related papers (2021-05-16T07:00:27Z)
- Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks [15.102346715690759]
Recent papers in explainable AI have made a compelling case for counterfactual modes of explanation.
While counterfactual explanations appear to be extremely effective in some instances, they are formally equivalent to adversarial examples.
This presents an apparent paradox for explainability researchers: if these two procedures are formally equivalent, what accounts for the explanatory divide apparent between counterfactual explanations and adversarial examples?
We resolve this paradox by placing emphasis back on the semantics of counterfactual expressions. (A code sketch of the shared optimization appears after this list.)
arXiv Detail & Related papers (2020-12-18T07:04:04Z)
- Explainable AI without Interpretable Model [0.0]
It has become more important than ever that AI systems be able to explain the reasoning behind their results to end-users.
Most Explainable AI (XAI) methods are based on extracting an interpretable model that can be used for producing explanations.
The notions of Contextual Importance and Utility (CIU) presented in this paper make it possible to produce human-like explanations of black-box outcomes directly. (A worked CIU sketch appears after this list.)
arXiv Detail & Related papers (2020-09-29T13:29:44Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing. (A toy version of the matching procedure is sketched after this list.)
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
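As flagged in the "Semantics and explanation" entry above, counterfactual explanations and adversarial examples can arise from the same optimization. The hypothetical sketch below makes that concrete: gradient steps on the input that raise the contrast class's logit can be read either as a counterfactual ("the smallest change that yields Q") or as an adversarial perturbation. The toy model, step size, and iteration count are assumptions for illustration, not the paper's setup.

```python
import torch
import torch.nn as nn

# Toy classifier; in practice this would be the trained model being explained.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 2))

def perturb_towards(model, x, target_class, step=0.05, n_steps=50):
    """Gradient descent on the input to increase the target-class logit.
    Read it as a counterfactual ("what minimal change yields the target?")
    or as an adversarial example -- the optimization is identical."""
    x = x.detach().clone().requires_grad_(True)
    for _ in range(n_steps):
        loss = -model(x)[0, target_class]        # maximize the target logit
        loss.backward()
        with torch.no_grad():
            x -= step * x.grad
            x.grad.zero_()
    return x.detach()

x0 = torch.randn(1, 5)
original = model(x0).argmax(dim=1).item()
x_cf = perturb_towards(model, x0, target_class=1 - original)
print(original, model(x_cf).argmax(dim=1).item(), (x_cf - x0).norm().item())
```

Whether the perturbed input counts as an explanation or an attack then hinges on the semantics of the changed features, which is the distinction the paper emphasizes.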
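For the "Explainable AI without Interpretable Model" entry, Contextual Importance and Utility can be illustrated numerically: contextual importance estimates how much the output can vary when one input is swept through its range in the current context, and contextual utility locates the current output within that range. The black_box function, feature ranges, and grid sweep below are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def black_box(x):
    """Stand-in for any opaque model: here, a fixed nonlinear score."""
    return 1.0 / (1.0 + np.exp(-(2.0 * x[0] - 1.5 * x[1] + 0.5)))

def ciu(f, x, feature, lo=0.0, hi=1.0, n=101, out_min=0.0, out_max=1.0):
    """Contextual Importance (CI) and Contextual Utility (CU) for one feature,
    estimated by sweeping that feature over [lo, hi] with the others fixed."""
    samples = []
    for v in np.linspace(lo, hi, n):
        x_mod = np.array(x, dtype=float)
        x_mod[feature] = v
        samples.append(f(x_mod))
    cmin, cmax = min(samples), max(samples)
    y = f(np.array(x, dtype=float))
    ci = (cmax - cmin) / (out_max - out_min)                   # how much could the output move?
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5    # how favourable is the current value?
    return ci, cu

x = [0.8, 0.3]
for j in range(2):
    ci, cu = ciu(black_box, x, feature=j)
    print(f"feature {j}: CI={ci:.2f}, CU={cu:.2f}")
```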
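For the "Compositional Explanations of Neurons" entry, the procedure can be pictured as scoring logical combinations of concept masks against a neuron's binarized activation map by intersection-over-union. The random toy masks and the exhaustive two-concept search below are assumptions for illustration; the paper works with real network activations and a larger formula search.

```python
import numpy as np
from itertools import combinations

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

# Toy data: a neuron's binarized activation map and a few concept masks.
rng = np.random.default_rng(0)
neuron = rng.random((32, 32)) > 0.7
concepts = {name: rng.random((32, 32)) > 0.6 for name in ["water", "blue", "boat", "sky"]}

# Score single concepts and two-concept compositions (AND, OR, AND NOT)
# and keep the formula whose mask best matches the neuron's activations.
candidates = dict(concepts)
for (n1, m1), (n2, m2) in combinations(concepts.items(), 2):
    candidates[f"{n1} AND {n2}"] = m1 & m2
    candidates[f"{n1} OR {n2}"] = m1 | m2
    candidates[f"{n1} AND NOT {n2}"] = m1 & ~m2

best_name, best_mask = max(candidates.items(), key=lambda kv: iou(neuron, kv[1]))
print(f"best explanation: {best_name} (IoU={iou(neuron, best_mask):.2f})")
```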
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.