The Impact of Explanations on AI Competency Prediction in VQA
- URL: http://arxiv.org/abs/2007.00900v1
- Date: Thu, 2 Jul 2020 06:11:28 GMT
- Title: The Impact of Explanations on AI Competency Prediction in VQA
- Authors: Kamran Alipour, Arijit Ray, Xiao Lin, Jurgen P. Schulze, Yi Yao,
Giedrius T. Burachas
- Abstract summary: We evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA).
We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model.
- Score: 3.149760860038061
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability is one of the key elements for building trust in AI systems.
Among numerous attempts to make AI explainable, quantifying the effect of
explanations remains a challenge in conducting human-AI collaborative tasks.
Aside from the ability to predict the overall behavior of AI, in many
applications, users need to understand an AI agent's competency in different
aspects of the task domain. In this paper, we evaluate the impact of
explanations on the user's mental model of AI agent competency within the task
of visual question answering (VQA). We quantify users' understanding of
competency, based on the correlation between the actual system performance and
user rankings. We introduce an explainable VQA system that uses spatial and
object features and is powered by the BERT language model. Each group of users
sees only one kind of explanation to rank the competencies of the VQA model.
The proposed model is evaluated through between-subject experiments to probe
explanations' impact on the user's perception of competency. The comparison
between two VQA models shows that BERT-based explanations and the use of object
features improve the user's prediction of the model's competencies.
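As a rough illustration of the evaluation idea above (quantifying a user's mental model as the agreement between actual system performance and the user's competency rankings), the sketch below computes a Spearman rank correlation over a handful of task categories. The category names, accuracy values, rankings, and the choice of Spearman correlation are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): measure how well a user's
# competency ranking of task categories tracks the VQA model's actual
# per-category accuracy. All numbers and category names are placeholders.
from scipy.stats import spearmanr

# Actual per-category accuracy of the VQA model (hypothetical values).
model_accuracy = {
    "counting": 0.42,
    "color": 0.78,
    "spatial_relations": 0.55,
    "object_recognition": 0.85,
}

# One user's competency ranking per category (1 = judged most competent).
user_ranking = {
    "object_recognition": 1,
    "color": 2,
    "counting": 3,
    "spatial_relations": 4,
}

categories = sorted(model_accuracy)
accuracies = [model_accuracy[c] for c in categories]
# Negate ranks so larger values mean "judged more competent", matching accuracy.
inverted_ranks = [-user_ranking[c] for c in categories]

rho, p_value = spearmanr(accuracies, inverted_ranks)
print(f"Rank correlation between user ranking and accuracy: {rho:.2f} (p={p_value:.2f})")
```

A correlation near 1 would indicate that the user's ranking closely tracks the model's true per-category competency; values near 0 would indicate the explanations did little to calibrate the user's mental model.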
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models or model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work [0.0]
The present structured literature analysis examines the requirements for the explainability and acceptance of AI.
Results indicate that developers, one of the two main user groups, require information about the internal operations of the model.
The acceptance of AI systems depends on information about the system's functions and performance, as well as privacy and ethical considerations.
arXiv Detail & Related papers (2023-06-27T11:36:07Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Transcending XAI Algorithm Boundaries through End-User-Inspired Design [27.864338632191608]
A lack of explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains.
Our work shows that grounding the technical problem in end users' use of XAI can inspire new research questions.
Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
arXiv Detail & Related papers (2022-08-18T09:44:51Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- QA2Explanation: Generating and Evaluating Explanations for Question Answering Systems over Knowledge Graph [4.651476054353298]
We develop an automatic approach for generating explanations during various stages of a pipeline-based QA system.
Our approach is supervised and automatic, considering three classes (i.e., success, no answer, and wrong answer) when annotating the output of the involved QA components.
arXiv Detail & Related papers (2020-10-16T11:32:12Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- A Study on Multimodal and Interactive Explanations for Visual Question Answering [3.086885687016963]
We evaluate multimodal explanations in the setting of a Visual Question Answering (VQA) task.
Results indicate that the explanations help improve human prediction accuracy, especially in trials when the VQA system's answer is inaccurate.
We introduce active attention, a novel method for evaluating causal attentional effects through intervention by editing attention maps.
arXiv Detail & Related papers (2020-03-01T07:54:01Z)
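The "active attention" idea summarized in the last entry (evaluating causal attentional effects by intervening on attention maps) can be sketched roughly as follows. The vqa_model object and its predict interface are hypothetical stand-ins for illustration, not that paper's actual API.

```python
# Hypothetical sketch: suppress attention over a region, renormalize, and
# measure how much the model's top answer probability drops.
import numpy as np

def attention_intervention_effect(vqa_model, image_feats, question, region_mask):
    """Return the change in the top answer's probability after zeroing
    attention inside `region_mask` (a boolean array over spatial locations)."""
    # Baseline forward pass using the model's own attention.
    probs, attn = vqa_model.predict(image_feats, question)  # assumed interface
    top_answer = int(np.argmax(probs))

    # Edit the attention map: suppress the masked region, then renormalize.
    edited_attn = attn.copy()
    edited_attn[region_mask] = 0.0
    edited_attn /= edited_attn.sum() + 1e-8

    # Second forward pass, forcing the edited attention (assumed keyword).
    probs_edited, _ = vqa_model.predict(image_feats, question, attention=edited_attn)

    # A large drop suggests the masked region was causally important.
    return probs[top_answer] - probs_edited[top_answer]
```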
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.