Exploring Effectiveness of Explanations for Appropriate Trust: Lessons
from Cognitive Psychology
- URL: http://arxiv.org/abs/2210.03737v1
- Date: Wed, 5 Oct 2022 13:40:01 GMT
- Title: Exploring Effectiveness of Explanations for Appropriate Trust: Lessons
from Cognitive Psychology
- Authors: Ruben S. Verhagen, Siddharth Mehrotra, Mark A. Neerincx, Catholijn M.
Jonker and Myrthe L. Tielman
- Abstract summary: This work draws inspiration from findings in cognitive psychology to understand how effective explanations can be designed.
We identify four components to which explanation designers can pay special attention: perception, semantics, intent, and user & context.
We propose that the key challenge for effective AI explanations is an additional step between explanation generation (when the underlying algorithms do not produce interpretable explanations) and explanation communication.
- Score: 3.1945067016153423
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid development of Artificial Intelligence (AI) requires developers and
designers of AI systems to focus on the collaboration between humans and
machines. AI explanations of system behavior and reasoning are vital for
effective collaboration by fostering appropriate trust, ensuring understanding,
and addressing issues of fairness and bias. However, various contextual and
subjective factors can influence an AI system explanation's effectiveness. This
work draws inspiration from findings in cognitive psychology to understand how
effective explanations can be designed. We identify four components to which
explanation designers can pay special attention: perception, semantics, intent,
and user & context. We illustrate the use of these four explanation components
with an example of estimating food calories by combining text with visuals,
probabilities with exemplars, and intent communication with both user and
context in mind. We propose that the key challenge for effective AI
explanations is an additional step between explanation generation (when the
underlying algorithms do not produce interpretable explanations) and
explanation communication. We believe this extra step will benefit from carefully
considering the four explanation components outlined in our work, which can
positively affect the explanation's effectiveness.
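As a rough illustration of how the four components could be carried through this extra step between generation and communication, the Python sketch below structures the paper's food-calorie example. All class, field, and function names are hypothetical assumptions introduced here for illustration, not the authors' implementation.
```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: one object per explanation, with one field per
# component named in the abstract (perception, semantics, intent,
# user & context). Field contents follow the food-calorie example.

@dataclass
class Explanation:
    perception: dict    # how the explanation is presented (text paired with visuals)
    semantics: dict     # what it conveys (probabilities grounded in exemplars)
    intent: str         # why the system communicates this
    user_context: dict  # who receives it and in what situation


def build_calorie_explanation(estimate_kcal: float, confidence: float,
                              exemplar_meals: List[str], user_goal: str) -> Explanation:
    """Assemble a calorie-estimate explanation that touches all four components."""
    return Explanation(
        perception={
            "text": f"Estimated {estimate_kcal:.0f} kcal for this meal.",
            "visual": "highlighted_food_regions.png",  # text combined with a visual cue
        },
        semantics={
            "probability": confidence,    # numeric uncertainty ...
            "exemplars": exemplar_meals,  # ... grounded in familiar examples
        },
        intent="Support the user's own calorie judgement rather than replace it.",
        user_context={"goal": user_goal, "setting": "everyday meal logging"},
    )


if __name__ == "__main__":
    explanation = build_calorie_explanation(
        estimate_kcal=620, confidence=0.8,
        exemplar_meals=["pasta with pesto (similar portion)", "chicken burrito"],
        user_goal="weight maintenance",
    )
    print(explanation)
```
In this reading, the extra communication step the authors propose would decide how each field is rendered for a given user and context, rather than emitting raw model output directly.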
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability [17.052366688978935]
We investigate if free-form conversations can enhance users' comprehension of static explanations.
We measure the effect of the conversation on participants' ability to choose among three machine learning models.
Our findings highlight the importance of customized model explanations in the format of free-form conversations.
arXiv Detail & Related papers (2023-09-25T09:00:38Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often mis-interpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Towards Relatable Explainable AI with the Perceptual Process [5.581885362337179]
We argue that explanations must be more relatable to other concepts, hypotheticals, and associations.
Inspired by cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI.
arXiv Detail & Related papers (2021-12-28T05:48:53Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Cognitive Perspectives on Context-based Decisions and Explanations [0.0]
We show that the Contextual Importance and Utility method for XAI overlaps with the current wave of action-oriented predictive representational structures.
This has implications for explainable AI, where the goal is to provide explanations of computer decision-making for a human audience.
arXiv Detail & Related papers (2021-01-25T15:49:52Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach significantly outperforms two state-of-the-art systems regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences.