The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
- URL: http://arxiv.org/abs/2107.13509v2
- Date: Tue, 5 Mar 2024 20:33:44 GMT
- Title: The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
- Authors: Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee,
Michael Muller, Mark O. Riedl
- Abstract summary: We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
- Score: 61.49776160925216
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explainability of AI systems is critical for users to take informed actions.
Understanding "who" opens the black-box of AI is just as important as opening
it. We conduct a mixed-methods study of how two different groups--people with
and without AI background--perceive different types of AI explanations.
Quantitatively, we share user perceptions along five dimensions. Qualitatively,
we describe how AI background can influence interpretations, elucidating the
differences through lenses of appropriation and cognitive heuristics. We find
that (1) both groups showed unwarranted faith in numbers for different reasons
and (2) each group found value in different explanations beyond their intended
design. Carrying critical implications for the field of XAI, our findings
showcase how AI generated explanations can have negative consequences despite
best intentions and how that could lead to harmful manipulation of trust. We
propose design interventions to mitigate them.
Related papers
- Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting [43.110187812734864]
We evaluate three types of explanations: visual explanations (saliency maps), natural language explanations, and a combination of both modalities.
We find that text-based explanations lead to significant over-reliance, which is alleviated by combining them with saliency maps.
We also observe that the quality of explanations, that is, how much factually correct information they entail and how well this aligns with AI correctness, significantly impacts the usefulness of the different explanation types.
arXiv Detail & Related papers (2024-10-16T06:43:02Z)
- Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills [24.04643864795939]
People's decision-making abilities often fail to improve when they rely on AI for decision-support.
Most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' thinking.
We introduce a framework for generating human-centered contrastive explanations that explain the difference between AI's choice and a predicted, likely human choice.
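A minimal sketch of this contrastive idea (an illustrative assumption, not the paper's actual framework): given additive per-feature scores for each option, report the features that most separate the AI's recommendation from the option a human is predicted to pick. The `contrastive_explanation` helper, the loan-review options, and the feature names below are all hypothetical.
```python
from typing import Dict

def contrastive_explanation(scores: Dict[str, Dict[str, float]],
                            ai_choice: str,
                            predicted_human_choice: str) -> str:
    """scores maps each option to additive per-feature score contributions (assumed)."""
    ai, human = scores[ai_choice], scores[predicted_human_choice]
    # Rank features by how much more they favor the AI's option than the human's likely option.
    deltas = {f: ai[f] - human[f] for f in ai}
    top = sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)[:2]
    reasons = ", ".join(f"{f} (+{d:.1f})" for f, d in top)
    return (f"The AI recommends '{ai_choice}' rather than '{predicted_human_choice}' "
            f"mainly because of: {reasons}.")

# Hypothetical loan-review example.
scores = {
    "approve": {"income": 2.0, "credit_history": 1.5, "debt_ratio": -0.5},
    "deny":    {"income": 0.5, "credit_history": 0.2, "debt_ratio": 1.0},
}
print(contrastive_explanation(scores, "approve", "deny"))
```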
arXiv Detail & Related papers (2024-10-05T18:21:04Z)
- Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration [11.824688232910193]
We run a study on AI-assisted decision-making in which humans were supported by XAI.
Our findings reveal a misinformation effect when incorrect explanations accompany correct AI advice.
This effect causes humans to infer flawed reasoning strategies, hindering task execution and demonstrating impaired procedural knowledge.
arXiv Detail & Related papers (2024-09-19T14:34:20Z)
- Selective Explanations: Leveraging Human Input to Align Explainable AI [40.33998268146951]
We propose a general framework for generating selective explanations by leveraging human input on a small sample.
As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task.
Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI.
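A minimal sketch of the selective idea under a simplifying assumption: human input is collected as per-feature relevance votes from a few annotators, and the full attribution is filtered down to the features enough of them marked relevant. The `selective_explanation` helper, the vote threshold, and the feature names are hypothetical, not the paper's method.
```python
from collections import Counter
from typing import Dict, List

def selective_explanation(attributions: Dict[str, float],
                          relevance_votes: List[List[str]],
                          min_votes: int = 2) -> Dict[str, float]:
    """Keep only attributions for features that enough annotators marked as relevant."""
    votes = Counter(f for annotator in relevance_votes for f in annotator)
    return {f: a for f, a in attributions.items() if votes[f] >= min_votes}

# Hypothetical decision-support example: full attribution vs. the selective view.
attributions = {"age": 0.4, "prior_history": 0.9, "zip_code": 0.3, "employment": -0.2}
votes = [["prior_history", "employment"],
         ["prior_history", "age"],
         ["prior_history", "employment"]]
print(selective_explanation(attributions, votes))   # {'prior_history': 0.9, 'employment': -0.2}
```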
arXiv Detail & Related papers (2023-01-23T19:00:02Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems [0.9542023122304099]
We argue that fully understanding a decision requires not only knowledge of the relevant features but also awareness of irrelevant information, which contributes substantially to a user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations are suited to convey an understanding of different aspects of the AI's reasoning than established counterfactual explanation methods.
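A minimal sketch of the alterfactual idea, assuming a toy linear scorer in which one feature carries zero weight and is therefore provably irrelevant; the model, feature names, and `predict` helper are hypothetical, not the authors' implementation. A counterfactual would alter a relevant feature to flip the outcome; the alterfactual instead changes an irrelevant feature and shows the output stays put.
```python
import numpy as np

# Hypothetical linear scorer: the zero weight makes "zip_code" provably irrelevant.
feature_names = ["income", "zip_code", "debt_ratio"]
weights = np.array([0.8, 0.0, 0.5])

def predict(x: np.ndarray) -> float:
    return float(weights @ x)

x = np.array([3.0, 7.0, 1.0])                    # original input

# Alterfactual: drastically change the irrelevant feature, keep everything else.
irrelevant = int(np.argmin(np.abs(weights)))     # index of the least important feature
x_alter = x.copy()
x_alter[irrelevant] += 10.0

print(f"Changing '{feature_names[irrelevant]}' from {x[irrelevant]:.0f} to "
      f"{x_alter[irrelevant]:.0f} leaves the score at {predict(x_alter):.2f} "
      f"(originally {predict(x):.2f}): it did not drive the decision.")
```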
arXiv Detail & Related papers (2022-07-19T16:20:37Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.