Don't Explain without Verifying Veracity: An Evaluation of Explainable
AI with Video Activity Recognition
- URL: http://arxiv.org/abs/2005.02335v1
- Date: Tue, 5 May 2020 17:06:46 GMT
- Title: Don't Explain without Verifying Veracity: An Evaluation of Explainable
AI with Video Activity Recognition
- Authors: Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas
Ruozzi, Vibhav Gogate
- Abstract summary: This paper explores how explanation veracity affects user performance and agreement in intelligent systems.
We compare variations in explanation veracity for a video review and querying task.
Results suggest that low veracity explanations significantly decrease user performance and agreement.
- Score: 24.10997778856368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable machine learning and artificial intelligence models have been
used to justify a model's decision-making process. This added transparency aims
to help improve user performance and understanding of the underlying model.
However, in practice, explainable systems face many open questions and
challenges. Specifically, designers might reduce the complexity of deep
learning models in order to provide interpretability. The explanations
generated by these simplified models, however, might not accurately justify the model's decisions or remain faithful to it. This can further confuse users, as they may not find the explanations meaningful with respect to the model's predictions. Understanding how these explanations affect user behavior is an
ongoing challenge. In this paper, we explore how explanation veracity affects
user performance and agreement in intelligent systems. Through a controlled
user study with an explainable activity recognition system, we compare
variations in explanation veracity for a video review and querying task. The
results suggest that low veracity explanations significantly decrease user
performance and agreement compared to both accurate explanations and a system
without explanations. These findings demonstrate the importance of accurate and
understandable explanations and caution that poor explanations can sometimes be
worse than no explanations with respect to their effect on user performance and
reliance on an AI system.
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Causalainer: Causal Explainer for Automatic Video Summarization [77.36225634727221]
In many application scenarios, improper video summarization can have a large impact.
Modeling explainability is a key concern.
A Causal Explainer, dubbed Causalainer, is proposed to address this issue.
arXiv Detail & Related papers (2023-04-30T11:42:06Z)
- Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability [0.0]
We study the tradeoff between model performance and the explainability of machine learning algorithms.
We find that the tradeoff is much less gradual in the end user's perception.
Results of our second experiment show that while explainable artificial intelligence augmentations can be used to increase explainability, the type of explanation plays an essential role in end user perception.
arXiv Detail & Related papers (2022-06-20T08:32:38Z)
- Learning to Scaffold: Optimizing Model Explanations for Teaching [74.25464914078826]
We train models on three natural language processing and computer vision tasks.
We find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods.
arXiv Detail & Related papers (2022-04-22T16:43:39Z)
- Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI [10.151828072611428]
Counterfactual explanations are increasingly used to address interpretability, recourse, and bias in AI decisions.
We tested the effects of counterfactual and causal explanations on the objective accuracy of users' predictions.
We also found that users understand explanations referring to categorical features more readily than those referring to continuous features.
arXiv Detail & Related papers (2022-04-21T15:01:09Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Explanations of Black-Box Model Predictions by Contextual Importance and Utility [1.7188280334580195]
We present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations easily understandable by experts as well as novice users.
This method explains the prediction results without transforming the model into an interpretable one.
We show the utility of explanations in a car-selection example and in Iris flower classification by presenting complete (i.e., the causes of an individual prediction) and contrastive explanations.
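A rough sense of the idea (a hedged sketch of my own, not code or notation from the paper): for a single feature, contextual importance can be read as how much the model output can swing when that feature is varied over its range with the rest of the instance held fixed, relative to the output's overall range, while contextual utility measures how favourable the feature's current value is within that swing. The function name, signature, and sampling-based estimation below are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of Contextual Importance (CI) and Contextual Utility (CU)
# for one feature of one instance; not the authors' implementation.
def contextual_importance_utility(predict, x, feature_idx, feature_range,
                                  out_min, out_max, n_samples=100):
    lo, hi = feature_range
    values = np.linspace(lo, hi, n_samples)
    outputs = []
    for v in values:                       # vary one feature, keep the rest fixed
        x_mod = np.array(x, dtype=float)
        x_mod[feature_idx] = v
        outputs.append(float(predict(x_mod)))
    c_min, c_max = min(outputs), max(outputs)
    y = float(predict(np.array(x, dtype=float)))
    ci = (c_max - c_min) / (out_max - out_min)   # share of the global output range this feature can move
    cu = (y - c_min) / (c_max - c_min) if c_max > c_min else 0.0  # how favourable the current value is
    return ci, cu
```

A high CI with a low CU would then suggest the feature matters in this context but its current value argues against the predicted outcome.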
arXiv Detail & Related papers (2020-05-30T06:49:50Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Deceptive AI Explanations: Creation and Detection [3.197020142231916]
We investigate how AI models can be used to create and detect deceptive explanations.
As an empirical evaluation, we focus on text classification and alter the explanations generated by GradCAM.
We evaluate the effect of deceptive explanations on users in an experiment with 200 participants.
arXiv Detail & Related papers (2020-01-21T16:41:22Z)
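As a concrete illustration of the kind of manipulation studied here (a minimal sketch under my own assumptions, not the study's procedure): a token-level saliency map such as one produced by Grad-CAM can be made deceptive by redistributing its attribution mass onto decoy tokens the model did not actually rely on. The function below is hypothetical.

```python
import numpy as np

# Hypothetical sketch: turn a faithful token-level saliency map into a deceptive
# one by moving its attribution mass onto chosen decoy tokens.
# Not the alteration procedure used in the paper.
def make_deceptive_saliency(saliency, decoy_indices, seed=0):
    saliency = np.asarray(saliency, dtype=float)
    rng = np.random.default_rng(seed)
    deceptive = np.zeros_like(saliency)
    # Split the original total attribution randomly across the decoy tokens.
    weights = rng.dirichlet(np.ones(len(decoy_indices)))
    deceptive[np.asarray(decoy_indices)] = weights * saliency.sum()
    return deceptive
```

Such an altered map keeps the look of a genuine explanation while pointing users at evidence the model never used.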
This list is automatically generated from the titles and abstracts of the papers in this site.