Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI
Research
- URL: http://arxiv.org/abs/2312.12119v1
- Date: Tue, 19 Dec 2023 12:49:32 GMT
- Title: Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI
Research
- Authors: Susanne Hindennach, Lei Shi, Filip Miletić and Andreas Bulling
- Abstract summary: We analyse 3,533 explainable AI (XAI) research articles from the Semantic Scholar Open Research Corpus (S2ORC).
We identify three dominant types of mind attribution: (1) metaphorical (e.g. "to learn" or "to predict"), (2) awareness (e.g. "to consider"), and (3) agency (e.g. "to make decisions").
We find that participants who were given a mind-attributing explanation were more likely to rate the AI system as aware of the harm it caused.
Considering the AI experts' involvement led to reduced ratings of AI responsibility for participants who were given a non-mind-attributing or no explanation.
- Score: 10.705827568946606
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: When users perceive AI systems as mindful, independent agents, they hold them
responsible instead of the AI experts who created and designed these systems.
So far, it has not been studied whether explanations support this shift in
responsibility through the use of mind-attributing verbs like "to think". To
better understand the prevalence of mind-attributing explanations, we analyse AI
explanations in 3,533 explainable AI (XAI) research articles from the Semantic
Scholar Open Research Corpus (S2ORC). Using methods from semantic shift
detection, we identify three dominant types of mind attribution: (1)
metaphorical (e.g. "to learn" or "to predict"), (2) awareness (e.g. "to
consider"), and (3) agency (e.g. "to make decisions"). We then analyse the
impact of mind-attributing explanations on awareness and responsibility in a
vignette-based experiment with 199 participants. We find that participants who
were given a mind-attributing explanation were more likely to rate the AI
system as aware of the harm it caused. Moreover, the mind-attributing
explanation had a responsibility-concealing effect: Considering the AI experts'
involvement led to reduced ratings of AI responsibility for participants who
were given a non-mind-attributing or no explanation. In contrast, participants
who read the mind-attributing explanation still held the AI system responsible
despite considering the AI experts' involvement. Taken together, our work
underlines the need to carefully phrase explanations about AI systems in
scientific writing to reduce mind attribution and clearly communicate human
responsibility.
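The corpus analysis relies on methods from semantic shift detection to surface mind-attributing verbs; the paper's pipeline is not reproduced here. As a rough, lexicon-based sketch of the simpler end of such an analysis (not the authors' method), the following Python snippet flags sentences in which an AI-system subject is followed by a verb from one of the three attribution types. The verb lists, subject patterns, and function names are illustrative assumptions.
```python
import re
from collections import Counter

# Illustrative verb lists for the three mind-attribution types reported in the
# paper. The authors' actual lexicon and their semantic-shift-based detection
# are not reproduced here.
MIND_VERBS = {
    "metaphorical": ["learn", "learns", "learned", "predict", "predicts", "predicted"],
    "awareness": ["consider", "considers", "considered", "realise", "realises"],
    "agency": ["decide", "decides", "decided", "choose", "chooses", "chose"],
}

# Hypothetical noun phrases taken to refer to the AI system.
AI_SUBJECT = r"(?:the\s+)?(?:model|network|system|agent|classifier)"

def count_mind_attribution(sentences):
    """Count sentences in which an AI-system subject is immediately followed
    by a mind-attributing verb, per attribution type."""
    counts = Counter()
    for sentence in sentences:
        for label, verbs in MIND_VERBS.items():
            pattern = rf"\b{AI_SUBJECT}\s+(?:{'|'.join(verbs)})\b"
            if re.search(pattern, sentence, flags=re.IGNORECASE):
                counts[label] += 1
    return counts

if __name__ == "__main__":
    demo = [
        "The model learns a representation of the input.",
        "The system considers the user's previous answers.",
        "The agent decides which action to take next.",
        "We evaluate the classifier on a held-out test set.",
    ]
    print(count_mind_attribution(demo))
```
On the demo sentences this reports one metaphorical, one awareness, and one agency hit; a faithful reproduction of the paper's analysis would additionally require dependency parsing and the diachronic embedding comparison that semantic shift detection implies.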
Related papers
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes and increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems [0.9542023122304099]
We argue that in order to fully understand a decision, not only knowledge about relevant features is needed, but that the awareness of irrelevant information also highly contributes to the creation of a user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations are suited to convey an understanding of different aspects of the AI's reasoning than established counterfactual explanation methods.
arXiv Detail & Related papers (2022-07-19T16:20:37Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Playing the Blame Game with Robots [0.0]
We find that people are willing to ascribe moral blame to AI systems in contexts of recklessness.
The higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.
arXiv Detail & Related papers (2021-02-08T20:53:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.