ViCE: Visual Counterfactual Explanations for Machine Learning Models
- URL: http://arxiv.org/abs/2003.02428v1
- Date: Thu, 5 Mar 2020 04:43:02 GMT
- Title: ViCE: Visual Counterfactual Explanations for Machine Learning Models
- Authors: Oscar Gomez, Steffen Holter, Jun Yuan, Enrico Bertini
- Abstract summary: We present an interactive visual analytics tool, ViCE, that generates counterfactual explanations to contextualize and evaluate model decisions.
Results are effectively displayed in a visual interface where counterfactual explanations are highlighted and interactive methods are provided for users to explore the data and model.
- Score: 13.94542147252982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The continued improvements in the predictive accuracy of machine learning
models have allowed for their widespread practical application. Yet, many
decisions made with seemingly accurate models still require verification by
domain experts. In addition, end-users of a model also want to understand the
reasons behind specific decisions. Thus, the need for interpretability is
increasingly paramount. In this paper we present an interactive visual
analytics tool, ViCE, that generates counterfactual explanations to
contextualize and evaluate model decisions. Each sample is assessed to identify
the minimal set of changes needed to flip the model's output. These
explanations aim to provide end-users with personalized actionable insights
with which to understand, and possibly contest or improve, automated decisions.
The results are effectively displayed in a visual interface where
counterfactual explanations are highlighted and interactive methods are
provided for users to explore the data and model. The functionality of the tool
is demonstrated by its application to a home equity line of credit dataset.
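The abstract describes identifying the minimal set of feature changes that flips the model's output. The snippet below is a minimal illustrative sketch of that idea as a greedy single-feature search, not the published ViCE algorithm; the toy logistic model, step sizes, and stopping rule are assumptions made for the example.
```python
import numpy as np

def greedy_counterfactual(predict_proba, x, step_sizes, target=1, max_iters=100):
    """Greedily perturb one feature at a time until the model's prediction flips.

    predict_proba: callable mapping a 1-D feature vector to P(class 1).
    x:             instance to explain (1-D numpy array).
    step_sizes:    per-feature increment to try in each direction.
    Returns (counterfactual, per-feature deltas) or None if no flip is found.
    Illustrative sketch only, not the ViCE algorithm as published.
    """
    cf = x.astype(float).copy()
    signed = lambda p: p if target == 1 else 1.0 - p   # score in favour of the target class
    for _ in range(max_iters):
        if (predict_proba(cf) >= 0.5) == (target == 1):
            return cf, cf - x                           # prediction flipped: done
        best_move, best_score = None, signed(predict_proba(cf))
        for j, step in enumerate(step_sizes):
            for delta in (step, -step):
                candidate = cf.copy()
                candidate[j] += delta
                score = signed(predict_proba(candidate))
                if score > best_score:
                    best_move, best_score = (j, delta), score
        if best_move is None:                           # no single-feature move helps
            return None
        cf[best_move[0]] += best_move[1]
    return None

# Toy stand-in model: a fixed logistic regression over two features.
weights, bias = np.array([1.5, -2.0]), -0.25
model = lambda v: 1.0 / (1.0 + np.exp(-(v @ weights + bias)))

result = greedy_counterfactual(model, x=np.array([0.1, 0.4]),
                               step_sizes=np.array([0.05, 0.05]))
if result is not None:
    counterfactual, changes = result
    print("counterfactual:", counterfactual, "required changes:", changes)
```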
Related papers
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- InterVLS: Interactive Model Understanding and Improvement with Vision-Language Surrogates [18.793275018467163]
Deep learning models are widely used in critical applications, highlighting the need for pre-deployment model understanding and improvement.
Visual concept-based methods, while increasingly used for this purpose, face challenges: (1) most concepts lack interpretability, (2) existing methods require model knowledge that is often unavailable at run time, and (3) there is a lack of no-code methods for post-understanding model improvement.
We present InterVLS, which facilitates model understanding by discovering text-aligned concepts and measuring their influence with model-agnostic linear surrogates.
arXiv Detail & Related papers (2023-11-06T21:30:59Z)
- AcME -- Accelerated Model-agnostic Explanations: Fast Whitening of the Machine-Learning Black Box [1.7534486934148554]
Interpretability approaches should provide actionable insights without making users wait.
We propose Accelerated Model-agnostic Explanations (AcME), an interpretability approach that quickly provides feature importance scores both at the global and the local level.
AcME computes feature rankings, but it also provides a what-if analysis tool to assess how changes in feature values would affect model predictions (a minimal sketch of this idea follows this entry).
arXiv Detail & Related papers (2021-12-23T15:18:13Z)
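A minimal sketch of the what-if idea mentioned in the AcME entry above: hold all but one feature of an instance fixed and record how the predicted probability responds as that feature is swept over a grid. The toy model and the value grid are assumptions for illustration, not AcME's published procedure.
```python
import numpy as np

def what_if_sweep(predict_proba, x, feature_idx, values):
    """Score one instance while sweeping a single feature over candidate values.

    Holds every other feature of x fixed and returns the predicted probability
    for each candidate value. Illustrative only, not the AcME implementation.
    """
    rows = np.tile(x.astype(float), (len(values), 1))
    rows[:, feature_idx] = values
    return np.array([predict_proba(row) for row in rows])

# Toy stand-in model over three features.
w, b = np.array([0.8, -1.2, 0.5]), 0.1
toy_model = lambda v: 1.0 / (1.0 + np.exp(-(v @ w + b)))

x = np.array([0.2, 0.7, 1.0])
grid = np.linspace(0.0, 1.0, 5)   # candidate values for feature 1
print(what_if_sweep(toy_model, x, feature_idx=1, values=grid))
```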
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- AdViCE: Aggregated Visual Counterfactual Explanations for Machine Learning Model Validation [9.996986104171754]
We introduce AdViCE, a visual analytics tool that aims to guide users in debugging and validating black-box models.
The solution rests on two main visual user interface innovations: (1) an interactive visualization that enables the comparison of decisions on user-defined data subsets; (2) an algorithm and visual design to compute and visualize counterfactual explanations.
arXiv Detail & Related papers (2021-09-12T22:52:12Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space, constrained by a diversity-enforcing loss (a generic sketch of such a penalty follows this entry).
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
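As a rough illustration of the diversity-enforcing idea in the entry above, the function below penalizes pairwise cosine similarity among several latent perturbations so that the resulting counterfactuals differ from one another. The pairwise-cosine form is an assumption made for the sketch, not the loss used in the paper.
```python
import numpy as np

def diversity_penalty(perturbations):
    """Mean squared pairwise cosine similarity among K latent perturbations.

    perturbations: array of shape (K, latent_dim), one row per counterfactual.
    Lower values mean the perturbations point in more dissimilar directions.
    Illustrative stand-in for a diversity-enforcing loss, not the paper's formulation.
    """
    z = perturbations / np.linalg.norm(perturbations, axis=1, keepdims=True)
    sim = z @ z.T                      # (K, K) cosine similarities
    off_diag = sim - np.eye(len(z))    # ignore each vector's similarity with itself
    k = len(z)
    return (off_diag ** 2).sum() / (k * (k - 1))

# Example: three random 8-dimensional latent perturbations.
print(diversity_penalty(np.random.randn(3, 8)))
```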
- Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs [19.09848738521126]
Interpretability methods aim to help users build trust in and understand the capabilities of machine learning models.
We present two interface modules to facilitate a more intuitive assessment of model reliability.
arXiv Detail & Related papers (2021-02-17T02:41:32Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models [36.50754934147469]
We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets.
arXiv Detail & Related papers (2020-08-19T09:44:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.