Explain, Edit, and Understand: Rethinking User Study Design for
Evaluating Model Explanations
- URL: http://arxiv.org/abs/2112.09669v1
- Date: Fri, 17 Dec 2021 18:29:56 GMT
- Title: Explain, Edit, and Understand: Rethinking User Study Design for
Evaluating Model Explanations
- Authors: Siddhant Arora, Danish Pruthi, Norman Sadeh, William W. Cohen, Zachary
C. Lipton, Graham Neubig
- Abstract summary: We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
- Score: 97.91630330328815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In attempts to "explain" predictions of machine learning models, researchers
have proposed hundreds of techniques for attributing predictions to features
that are deemed important. While these attributions are often claimed to hold
the potential to improve human "understanding" of the models, surprisingly
little work explicitly evaluates progress towards this aspiration. In this
paper, we conduct a crowdsourcing study, where participants interact with
deception detection models that have been trained to distinguish between
genuine and fake hotel reviews. They are challenged both to simulate the model
on fresh reviews, and to edit reviews with the goal of lowering the probability
of the originally predicted class. Successful manipulations would lead to
adversarial examples. During the training (but not the test) phase, input spans
are highlighted to communicate salience. Through our evaluation, we observe
that for a linear bag-of-words model, participants with access to the feature
coefficients during training are able to cause a larger reduction in model
confidence in the testing phase when compared to the no-explanation control.
For the BERT-based classifier, popular local explanations do not improve
participants' ability to reduce the model confidence over the no-explanation case.
Remarkably, when the explanation for the BERT model is given by the (global)
attributions of a linear model trained to imitate the BERT model, people can
effectively manipulate the model.
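As a concrete illustration of the linear setting described above, the sketch below trains a bag-of-words classifier and reads off its per-word coefficients, which is the kind of global explanation participants saw during the training phase. The toy reviews and labels are invented for illustration and are not the paper's hotel-review data.

```python
# Minimal sketch (not the paper's code): a linear bag-of-words classifier whose
# coefficients act as a global, per-word explanation. The toy reviews and labels
# below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "the room was clean and the staff were friendly",
    "great location, comfortable bed, would stay again",
    "my husband and i loved this amazing luxury experience",
    "this is the best hotel i have ever ever seen",
]
labels = [0, 0, 1, 1]  # 0 = genuine, 1 = fake (toy labels)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)
clf = LogisticRegression().fit(X, labels)

# Each coefficient is a word's global contribution toward the "fake" class;
# access to these coefficients is what helped participants manipulate the model.
vocab = vectorizer.get_feature_names_out()
ranked = sorted(zip(clf.coef_[0], vocab), reverse=True)
print("words pushing toward 'fake':   ", ranked[:5])
print("words pushing toward 'genuine':", ranked[-5:])
```

The abstract's final observation would correspond to fitting the same kind of linear model not on the gold labels but on the BERT classifier's predictions, and handing its coefficients to participants as a global surrogate explanation.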
Related papers
- Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models [6.394084132117747]
We propose a technique that leverages counterfactual generation to evaluate the faithfulness of attribution methods for autoregressive language models.
Our technique generates fluent, in-distribution counterfactuals, making the evaluation protocol more reliable.
arXiv Detail & Related papers (2024-08-21T00:17:59Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- Guide the Learner: Controlling Product of Experts Debiasing Method Based on Token Attribution Similarities [17.082695183953486]
A popular workaround is to train a robust model by re-weighting training examples based on a secondary biased model.
Here, the underlying assumption is that the biased model resorts to shortcut features.
We introduce a fine-tuning strategy that incorporates the similarity between the main and biased model attribution scores in a Product of Experts loss function.
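The summary above leaves open exactly how the attribution similarity enters the objective, so the following PyTorch sketch should be read as one plausible formulation rather than the paper's: a standard Product of Experts combination of main and biased logits, with the biased expert's contribution scaled by the cosine similarity of the two models' token attribution scores.

```python
# Hedged sketch of a Product of Experts (PoE) debiasing loss. The way the
# attribution similarity is folded in (scaling the biased expert) is an
# assumption for illustration, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def poe_loss(main_logits, bias_logits, main_attrib, bias_attrib, labels):
    # Per-example cosine similarity between the two models' token attributions.
    sim = F.cosine_similarity(main_attrib, bias_attrib, dim=-1).clamp(min=0.0)
    # PoE combines the experts in log space; here the biased expert is weighted
    # by how much the main model appears to rely on the same (shortcut) tokens.
    combined = (F.log_softmax(main_logits, dim=-1)
                + sim.unsqueeze(-1) * F.log_softmax(bias_logits, dim=-1))
    return F.cross_entropy(combined, labels)

# Toy usage: batch of 8 examples, 2 classes, 16 tokens of attribution scores.
main_logits, bias_logits = torch.randn(8, 2), torch.randn(8, 2)
main_attrib, bias_attrib = torch.rand(8, 16), torch.rand(8, 16)
labels = torch.randint(0, 2, (8,))
print(poe_loss(main_logits, bias_logits, main_attrib, bias_attrib, labels))
```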
arXiv Detail & Related papers (2023-02-06T15:21:41Z)
- VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods to make local explanations of machine learning decisions.
We present VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem.
arXiv Detail & Related papers (2022-12-21T08:45:32Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
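The summary does not spell out the loss, so the sketch below only mimics the recipe it describes: optimize several latent perturbations that flip a classifier's prediction while a diversity term keeps them from collapsing onto one another. The toy linear classifier, the latent dimensionality, and the loss weights are all assumptions; the real method works in the disentangled latent space of an image model.

```python
# Illustrative sketch only: K latent perturbations that change a toy classifier's
# prediction, regularized by diversity and sparsity terms. All components here
# are toy stand-ins for the paper's generative setup.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
latent_dim, k = 8, 4
classifier = torch.nn.Linear(latent_dim, 2)      # stand-in for the real model
z = torch.randn(latent_dim)                      # latent code of the input to explain
target = 1 - classifier(z).argmax()              # class the counterfactuals should reach

deltas = torch.zeros(k, latent_dim, requires_grad=True)
optimizer = torch.optim.Adam([deltas], lr=0.1)

for _ in range(200):
    logits = classifier(z + deltas)                              # shape (k, 2)
    flip_loss = F.cross_entropy(logits, target.expand(k))
    # Diversity: penalize pairwise cosine similarity between perturbations.
    normed = F.normalize(deltas, dim=-1)
    diversity_loss = (normed @ normed.t()).triu(diagonal=1).abs().mean()
    sparsity_loss = deltas.abs().mean()                          # keep edits small
    loss = flip_loss + 0.5 * diversity_loss + 0.1 * sparsity_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(classifier(z + deltas).argmax(dim=-1))  # ideally all equal to `target`
```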
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Better sampling in explanation methods can prevent dieselgate-like deception [0.0]
Interpretability of prediction models is necessary to determine their biases and causes of errors.
Popular techniques, such as IME, LIME, and SHAP, use perturbation of instance features to explain individual predictions.
We show that improved sampling of these perturbations increases the robustness of LIME and SHAP, while the previously untested IME method is already the most robust of all.
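For context on what the perturbation-based methods above actually do, here is a bare-bones LIME-style explainer with deliberately naive sampling: perturb the instance, query the black-box model, and fit a proximity-weighted linear surrogate. The paper's point is that how these samples are drawn matters; the toy model, noise scale, and kernel width below are assumptions for illustration.

```python
# Bare-bones LIME-style local explanation with naive Gaussian perturbation
# sampling. The black-box model, noise scale, and kernel width are toy choices.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for the classifier being explained.
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def lime_style_explanation(x, num_samples=500, kernel_width=1.0):
    # Naive sampling: perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=1.0, size=(num_samples, x.shape[0]))
    y = black_box(Z)
    # Weight samples by proximity to the instance being explained.
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # The coefficients of a local weighted linear surrogate are the explanation.
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_

x = np.array([0.2, -0.1, 0.7])
print(lime_style_explanation(x))  # feature 0 should dominate, feature 2 near zero
```

Naive samples like these tend to fall off the data distribution, which is what adversarially constructed models can detect and exploit; better sampling is meant to close that gap.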
arXiv Detail & Related papers (2021-01-26T13:41:37Z)
- Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
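A rough sketch of the retrieval step this describes: embed training and test examples with the model and look up a test prediction's nearest training neighbors in that representation space. The random vectors below stand in for real encoder outputs (e.g., a fine-tuned model's final-layer representations).

```python
# Sketch of kNN-based interpretation: retrieve the training examples whose
# representations are closest to a test example's. Random vectors stand in
# for real model representations.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_reprs = rng.normal(size=(1000, 768))    # training-set representations
train_labels = rng.integers(0, 2, size=1000)
test_repr = rng.normal(size=(1, 768))         # one test example's representation

index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(train_reprs)
distances, neighbor_ids = index.kneighbors(test_repr)

# The retrieved neighbors (and their labels) serve as the training evidence
# behind the prediction; odd or mislabeled neighbors can flag spurious
# associations the model has latched onto.
for d, i in zip(distances[0], neighbor_ids[0]):
    print(f"train example {i}: label={train_labels[i]}, cosine distance={d:.3f}")
```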
arXiv Detail & Related papers (2020-10-18T16:55:25Z)
- Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability [44.60486560836836]
Any prediction a model makes is shaped by a combination of its learning history and the test stimuli.
Existing methods to interpret a model's predictions capture only a single aspect: either the test stimuli or the learning history.
We propose an efficient and differentiable approach to make it feasible to interpret a model's prediction by jointly examining training history and test stimuli.
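The paper's own joint method is not reproduced here; the sketch below only illustrates, generically, the two ingredients it combines: an input-saliency view of the test stimulus and a training-history view that scores training examples by the similarity of their loss gradients to the test example's. The toy linear model, synthetic data, and the specific cosine-similarity score are all assumptions.

```python
# Generic illustration (not the paper's method): pair an input-saliency view of
# the test stimulus with a gradient-similarity view of the training history.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)                        # toy classifier
X_train, y_train = torch.randn(50, 10), torch.randint(0, 2, (50,))
x_test, y_test = torch.randn(1, 10, requires_grad=True), torch.tensor([1])

def param_grad(x, y):
    # Flattened gradient of the loss on (x, y) with respect to model parameters.
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.flatten() for g in grads])

# (a) Test stimulus: input-gradient saliency over the test example's features.
test_loss = F.cross_entropy(model(x_test), y_test)
saliency = torch.autograd.grad(test_loss, x_test)[0].abs().squeeze()

# (b) Training history: score each training example by how similar its loss
#     gradient is to the test example's loss gradient.
g_test = param_grad(x_test, y_test)
scores = torch.stack([
    F.cosine_similarity(g_test, param_grad(X_train[i:i + 1], y_train[i:i + 1]), dim=0)
    for i in range(len(X_train))
])

print("most salient input features:     ", saliency.topk(3).indices.tolist())
print("most influential training points:", scores.topk(3).indices.tolist())
```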
arXiv Detail & Related papers (2020-10-14T10:45:01Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set, which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)