Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration
- URL: http://arxiv.org/abs/2409.12809v1
- Date: Thu, 19 Sep 2024 14:34:20 GMT
- Title: Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration
- Authors: Philipp Spitzer, Joshua Holstein, Katelyn Morrison, Kenneth Holstein, Gerhard Satzger, Niklas Kühl
- Abstract summary: We ran a study on AI-assisted decision-making in which humans were supported by XAI.
Our findings reveal a misinformation effect when incorrect explanations accompany correct AI advice.
This effect causes humans to infer flawed reasoning strategies, hindering task execution and demonstrating impaired procedural knowledge.
- Score: 11.824688232910193
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Across various applications, humans increasingly use black-box artificial intelligence (AI) systems without insight into these systems' reasoning. To counter this opacity, explainable AI (XAI) methods promise enhanced transparency and interpretability. While recent studies have explored how XAI affects human-AI collaboration, few have examined the potential pitfalls caused by incorrect explanations. The implications for humans can be far-reaching but have not been explored extensively. To investigate this, we ran a study (n=160) on AI-assisted decision-making in which humans were supported by XAI. Our findings reveal a misinformation effect when incorrect explanations accompany correct AI advice with implications post-collaboration. This effect causes humans to infer flawed reasoning strategies, hindering task execution and demonstrating impaired procedural knowledge. Additionally, incorrect explanations compromise human-AI team-performance during collaboration. With our work, we contribute to HCI by providing empirical evidence for the negative consequences of incorrect explanations on humans post-collaboration and outlining guidelines for designers of AI.
Related papers
- Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z)
- Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models [8.863857300695667]
We analyzed 156 human-generated text and saliency-based explanations in a question-answering task.
Our findings show that participants found human saliency maps to be more helpful in explaining AI answers than machine saliency maps.
This finding hints at the dilemma of AI errors in explanation, where helpful explanations can lead to lower task performance when they support wrong AI predictions.
arXiv Detail & Related papers (2024-04-11T13:16:51Z)
- The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z)
- Improving Human-AI Collaboration With Descriptions of AI Behavior [14.904401331154062]
People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted.
To help people appropriately rely on AI aids, we propose showing them behavior descriptions.
arXiv Detail & Related papers (2023-01-06T00:33:08Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance [44.730580857733]
Prior studies observed improvements from explanations only when the AI, alone, outperformed both the human and the best team.
We conduct mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task.
We find explanations increase the chance that humans will accept the AI's recommendation, regardless of its correctness.
arXiv Detail & Related papers (2020-06-26T03:34:04Z)