A Study on Multimodal and Interactive Explanations for Visual Question
Answering
- URL: http://arxiv.org/abs/2003.00431v1
- Date: Sun, 1 Mar 2020 07:54:01 GMT
- Title: A Study on Multimodal and Interactive Explanations for Visual Question
Answering
- Authors: Kamran Alipour, Jurgen P. Schulze, Yi Yao, Avi Ziskind, Giedrius
Burachas
- Abstract summary: We evaluate multimodal explanations in the setting of a Visual Question Answering (VQA) task.
Results indicate that the explanations help improve human prediction accuracy, especially in trials when the VQA system's answer is inaccurate.
We introduce active attention, a novel method for evaluating causal attentional effects through intervention by editing attention maps.
- Score: 3.086885687016963
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability and interpretability of AI models are essential factors
affecting the safety of AI. While various explainable AI (XAI) approaches aim
at mitigating the lack of transparency in deep networks, evidence of the
effectiveness of these approaches in improving the usability, trust, and
understanding of AI systems is still missing. We evaluate multimodal
explanations in the setting of a Visual Question Answering (VQA) task, by
asking users to predict the response accuracy of a VQA agent with and without
explanations. We use between-subjects and within-subjects experiments to probe
explanation effectiveness in terms of improving user prediction accuracy,
confidence, and reliance, among other factors. The results indicate that the
explanations help improve human prediction accuracy, especially in trials when
the VQA system's answer is inaccurate. Furthermore, we introduce active
attention, a novel method for evaluating causal attentional effects through
intervention by editing attention maps. User explanation ratings are strongly
correlated with human prediction accuracy and suggest the efficacy of these
explanations in human-machine AI collaboration tasks.
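The abstract describes active attention as intervening on a VQA model's attention map by editing it and observing the effect on the model's answer. The snippet below is a hypothetical toy sketch of that intervention pattern, not the authors' system: the region features, answer head, and helper functions (answer_logits, edit_attention) are invented for illustration, assuming a simple attention-pooling VQA head.
```python
# Minimal sketch (not the paper's implementation) of the "active attention" idea:
# edit an attention map, renormalize it, and re-score the answer to probe the
# causal effect of attention on the prediction. All components here are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

NUM_REGIONS, FEAT_DIM, NUM_ANSWERS = 36, 128, 10

# Toy stand-ins for a trained VQA model's components (hypothetical).
region_feats = rng.normal(size=(NUM_REGIONS, FEAT_DIM))   # image region features
answer_head = rng.normal(size=(FEAT_DIM, NUM_ANSWERS))    # linear answer classifier


def answer_logits(attention: np.ndarray) -> np.ndarray:
    """Attention-weighted pooling of region features, followed by answer scoring."""
    pooled = attention @ region_feats                      # shape (FEAT_DIM,)
    return pooled @ answer_head                            # shape (NUM_ANSWERS,)


def edit_attention(attention: np.ndarray, regions_to_zero: list[int]) -> np.ndarray:
    """Intervention: zero out the selected regions and renormalize the map."""
    edited = attention.copy()
    edited[regions_to_zero] = 0.0
    return edited / edited.sum()


# Original attention map over regions (softmax of arbitrary scores).
scores = rng.normal(size=NUM_REGIONS)
original_attn = np.exp(scores) / np.exp(scores).sum()

# Intervene on the most-attended region and check whether the answer changes.
top_region = int(np.argmax(original_attn))
edited_attn = edit_attention(original_attn, [top_region])

before = int(np.argmax(answer_logits(original_attn)))
after = int(np.argmax(answer_logits(edited_attn)))
print(f"answer before intervention: {before}, after: {after}, changed: {before != after}")
```
In the paper's setting, such edits would presumably be applied to the attention of the actual VQA model rather than a toy pooling head, with the resulting change in answers used to assess causal attentional effects.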
Related papers
- To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems [11.690126756498223]
The vision of optimal human-AI collaboration requires 'appropriate reliance' of humans on AI systems.
In practice, the performance disparity of machine learning models on out-of-distribution data makes dataset-specific performance feedback unreliable.
arXiv Detail & Related papers (2024-09-22T09:43:27Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Understanding the Effect of Counterfactual Explanations on Trust and
Reliance on AI for Human-AI Collaborative Clinical Decision Making [5.381004207943597]
We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion.
We analyzed their performance, agreement level on the task, and reliance on AI without and with two types of AI explanations.
Our work discusses the potential of counterfactual explanations to better estimate the accuracy of an AI model and reduce over-reliance on 'wrong' AI outputs.
arXiv Detail & Related papers (2023-08-08T16:23:46Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z) - Understanding the Effect of Out-of-distribution Examples and Interactive
Explanations on Human-AI Decision Making [19.157591744997355]
We argue that the typical experimental setup limits the potential of human-AI teams.
We develop novel interfaces to support interactive explanations so that humans can actively engage with AI assistance.
arXiv Detail & Related papers (2021-01-13T19:01:32Z) - The Impact of Explanations on AI Competency Prediction in VQA [3.149760860038061]
We evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA).
We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model.
arXiv Detail & Related papers (2020-07-02T06:11:28Z) - Does Explainable Artificial Intelligence Improve Human Decision-Making? [17.18994675838646]
We compare and evaluate objective human decision accuracy without AI (control), with an AI prediction (no explanation) and AI prediction with explanation.
We find that any kind of AI prediction tends to improve user decision accuracy, but we find no conclusive evidence that explainable AI has a meaningful impact.
Our results indicate that, at least in some situations, the "why" information provided in explainable AI may not enhance user decision-making.
arXiv Detail & Related papers (2020-06-19T15:46:13Z) - Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals [53.484562601127195]
We point out the inability to infer behavioral conclusions from probing results.
We offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.
arXiv Detail & Related papers (2020-06-01T15:00:11Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)