The Illusion of Competence: Evaluating the Effect of Explanations on Users' Mental Models of Visual Question Answering Systems
- URL: http://arxiv.org/abs/2406.19170v2
- Date: Mon, 21 Oct 2024 07:17:20 GMT
- Title: The Illusion of Competence: Evaluating the Effect of Explanations on Users' Mental Models of Visual Question Answering Systems
- Authors: Judith Sieker, Simeon Junker, Ronja Utescher, Nazia Attari, Heiko Wersing, Hendrik Buschmeier, Sina Zarrieß
- Abstract summary: We examine how users perceive the limitations of an AI system when it encounters a task that it cannot perform perfectly.
We employ a visual question answering and explanation task where we control the AI system's limitations by manipulating the visual inputs.
Our goal is to determine whether participants can perceive the limitations of the system.
- Score: 6.307898834231964
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We examine how users perceive the limitations of an AI system when it encounters a task that it cannot perform perfectly, and whether providing explanations alongside its answers helps users construct an appropriate mental model of the system's capabilities and limitations. We employ a visual question answering and explanation task where we control the AI system's limitations by manipulating the visual inputs: during inference, the system processes either full-color or grayscale images. Our goal is to determine whether participants can perceive the limitations of the system. We hypothesize that explanations will make limited AI capabilities more transparent to users. However, our results show that explanations do not have this effect. Instead of allowing users to more accurately assess the limitations of the AI system, explanations generally increase users' perceptions of the system's competence, regardless of its actual performance.
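The input manipulation described above is straightforward to reproduce. Below is a minimal sketch in Python of running the same visual question under the full-color and grayscale conditions; the paper does not name its VQA system, so the ViLT checkpoint, image path, and question here are placeholder assumptions, not the authors' setup.

```python
# Minimal sketch of the color-vs-grayscale input manipulation described in
# the abstract. ASSUMPTIONS: the paper does not specify its VQA model; the
# ViLT checkpoint, image path, and question below are placeholders.
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

image = Image.open("example.jpg").convert("RGB")  # full-color condition
gray = image.convert("L").convert("RGB")          # grayscale condition, re-expanded to 3 channels
question = "What color is the car?"

for condition, img in (("color", image), ("grayscale", gray)):
    inputs = processor(img, question, return_tensors="pt")
    logits = model(**inputs).logits
    print(condition, "->", model.config.id2label[logits.argmax(-1).item()])
```

On a color-dependent question like this one, the grayscale condition is where the system's limitation should surface; the study then asks whether explanations accompanying the answers make that limitation visible to users.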
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Learning User-Interpretable Descriptions of Black-Box AI System Capabilities [9.608555640607731]
This paper presents an approach for learning user-interpretable symbolic descriptions of the limits and capabilities of a black-box AI system.
It uses a hierarchical active querying paradigm to generate questions and to learn a user-interpretable model of the AI system based on its responses.
arXiv Detail & Related papers (2021-07-28T23:33:31Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems [0.8701566919381223]
We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation.
We created a user-interface for our XAI framework and evaluated its efficacy via a human-user experiment.
arXiv Detail & Related papers (2021-06-07T16:38:43Z)
- QA2Explanation: Generating and Evaluating Explanations for Question Answering Systems over Knowledge Graph [4.651476054353298]
We develop an automatic approach for generating explanations during various stages of a pipeline-based QA system.
Our approach is supervised and automatic, using three classes (success, no answer, and wrong answer) to annotate the output of the involved QA components.
arXiv Detail & Related papers (2020-10-16T11:32:12Z)
- The Impact of Explanations on AI Competency Prediction in VQA [3.149760860038061]
We evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA).
We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model.
arXiv Detail & Related papers (2020-07-02T06:11:28Z)
- Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition [24.10997778856368]
This paper explores how explanation veracity affects user performance and agreement in intelligent systems.
We compare variations in explanation veracity for a video review and querying task.
Results suggest that low veracity explanations significantly decrease user performance and agreement.
arXiv Detail & Related papers (2020-05-05T17:06:46Z)