Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction
- URL: http://arxiv.org/abs/2007.12248v1
- Date: Thu, 23 Jul 2020 20:39:40 GMT
- Title: Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction
- Authors: Eric Chu, Deb Roy, Jacob Andreas
- Abstract summary: We study explanations based on visual saliency in an image-based age prediction task.
We find that presenting model predictions improves human accuracy.
However, explanations of various kinds fail to significantly alter human accuracy or trust in the model.
- Score: 49.254162397086006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a randomized controlled trial for a model-in-the-loop regression
task, with the goal of measuring the extent to which (1) good explanations of
model predictions increase human accuracy, and (2) faulty explanations decrease
human trust in the model. We study explanations based on visual saliency in an
image-based age prediction task for which humans and learned models are
individually capable but not highly proficient and frequently disagree. Our
experimental design separates model quality from explanation quality, and makes
it possible to compare treatments involving a variety of explanations of
varying levels of quality. We find that presenting model predictions improves
human accuracy. However, visual explanations of various kinds fail to
significantly alter human accuracy or trust in the model - regardless of
whether explanations characterize an accurate model, an inaccurate one, or are
generated randomly and independently of the input image. These findings suggest
the need for greater evaluation of explanations in downstream decision making
tasks, better design-based tools for presenting explanations to users, and
better approaches for generating explanations.
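The saliency-based visual explanations studied here can be produced in many ways; below is a minimal, illustrative sketch of one common approach, a plain input-gradient saliency map for an image-based age regressor. The ResNet-18 backbone, the single-output regression head, and the gradient method are assumptions made for illustration, not the paper's exact models or explanation pipeline.

```python
# Illustrative sketch only: a plain input-gradient saliency map for an assumed
# ResNet-18 age regressor. The paper's actual models and saliency methods may differ.
import torch
import torch.nn as nn
from torchvision import models

# Assumed age regressor: ResNet-18 backbone with a single-output regression head.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

def saliency_map(image: torch.Tensor) -> torch.Tensor:
    """Return |d(predicted age)/d(pixel)|, reduced over color channels, as an (H, W) map."""
    x = image.clone().unsqueeze(0).requires_grad_(True)  # shape (1, 3, H, W)
    predicted_age = model(x).squeeze()                    # scalar regression output
    predicted_age.backward()                              # gradients w.r.t. the input pixels
    return x.grad.abs().squeeze(0).max(dim=0).values      # per-pixel saliency, shape (H, W)

# Example: saliency for a random 224x224 RGB stand-in image.
heatmap = saliency_map(torch.rand(3, 224, 224))
print(heatmap.shape)  # torch.Size([224, 224])
```

In a study like this one, such a heatmap would typically be overlaid on the input image and shown to participants alongside the model's predicted age.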
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that, for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Training Deep Models to be Explained with Fewer Examples [40.58343220792933]
We train prediction and explanation models simultaneously, with a sparse regularizer that reduces the number of examples used for explanation.
Experiments on several datasets demonstrate that the proposed method improves faithfulness while maintaining predictive performance.
arXiv Detail & Related papers (2021-12-07T05:39:21Z)
- On the Lack of Robust Interpretability of Neural Text Classifiers [14.685352584216757]
We assess the robustness of interpretations of neural text classifiers based on pretrained Transformer encoders.
Both of the tests we apply show surprising deviations from expected behavior, raising questions about the extent of the insights practitioners may draw from interpretations.
arXiv Detail & Related papers (2021-06-08T18:31:02Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- To what extent do human explanations of model behavior align with actual model behavior? [91.67905128825402]
We investigated the extent to which human-generated explanations of models' inference decisions align with how models actually make these decisions.
We defined two alignment metrics that quantify how well natural language human explanations align with model sensitivity to input words.
We find that a model's alignment with human explanations is not predicted by the model's accuracy on natural language inference (NLI).
arXiv Detail & Related papers (2020-12-24T17:40:06Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subset explainers, target fundamentally different types of ground-truth explanations (a toy contrast is sketched below, after this list).
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
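To make the contrast drawn in the last related paper concrete, here is a self-contained toy sketch (an illustration, not code from that paper): for a trivial OR-style model, exact Shapley values split credit between two redundant features, while each of those features on its own already forms a minimal sufficient subset.

```python
# Toy illustration (not from the cited paper): Shapley values vs. minimal sufficient
# subsets for a trivial model that predicts 1 whenever feature f1 OR f2 is present.
from itertools import combinations
from math import factorial

FEATURES = ["f1", "f2", "f3"]

def model(present):
    """Trivial OR-style model over a set of present features."""
    return 1.0 if ("f1" in present or "f2" in present) else 0.0

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all subsets (only practical for tiny feature sets)."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += weight * (value_fn(set(subset) | {f}) - value_fn(set(subset)))
    return phi

def minimal_sufficient_subsets(features, value_fn, threshold=1.0):
    """Smallest feature subsets that already yield the positive prediction on their own."""
    for k in range(1, len(features) + 1):
        hits = [set(s) for s in combinations(features, k) if value_fn(set(s)) >= threshold]
        if hits:
            return hits
    return []

print(shapley_values(FEATURES, model))              # {'f1': 0.5, 'f2': 0.5, 'f3': 0.0}
print(minimal_sufficient_subsets(FEATURES, model))  # [{'f1'}, {'f2'}]: either one alone suffices
```

The two explainers disagree by design here: Shapley values spread credit across the redundant features, while minimal sufficient subsets report either one alone, which is the kind of divergence that paper describes as different ground-truth explanation targets.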
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.