DECE: Decision Explorer with Counterfactual Explanations for Machine
Learning Models
- URL: http://arxiv.org/abs/2008.08353v1
- Date: Wed, 19 Aug 2020 09:44:47 GMT
- Authors: Furui Cheng, Yao Ming, Huamin Qu
- Abstract summary: We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps users understand and explore a model's decisions on individual instances and data subsets.
- Score: 36.50754934147469
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning models are increasingly applied to
decision-making scenarios, growing effort has been devoted to making them more
transparent and explainable. Among various explanation techniques,
counterfactual explanations have the advantages of being human-friendly and
actionable -- a counterfactual explanation tells the user how to obtain the
desired prediction with minimal changes to the input. In addition,
counterfactual explanations can serve as efficient probes of a model's
decisions. In this work, we exploit the potential of counterfactual
explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps users
understand and explore a model's decisions on individual instances and data
subsets,
supporting users ranging from decision-subjects to model developers. DECE
supports exploratory analysis of model decisions by combining the strengths of
counterfactual explanations at instance- and subgroup-levels. We also introduce
a set of interactions that enable users to customize the generation of
counterfactual explanations to find more actionable ones that can suit their
needs. Through three use cases and an expert interview, we demonstrate the
effectiveness of DECE in supporting decision exploration tasks and instance
explanations.
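
The abstract's definition of a counterfactual explanation -- the minimal
change to an input that yields the desired prediction -- can be made concrete
with a small sketch. The Python below is an illustrative assumption, not
DECE's actual generation algorithm: for a linear classifier, the closest
counterfactual under the L2 norm has a closed form, a perpendicular step onto
the other side of the decision hyperplane. The function name and synthetic
data are hypothetical.

```python
# Minimal counterfactual sketch (assumption: not DECE's method). For a linear
# model w.x + b, the smallest L2 change that flips the predicted class is a
# step perpendicular to the decision hyperplane.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

def linear_counterfactual(x, clf, margin=1e-3):
    """Closest point (in L2) that lands just past the decision boundary."""
    w, b = clf.coef_[0], clf.intercept_[0]
    score = w @ x + b                          # signed score of the instance
    target = -np.sign(score) * margin          # small score on the other side
    return x + (target - score) / (w @ w) * w  # perpendicular step

x = X[0]
x_cf = linear_counterfactual(x, clf)
print("original prediction:      ", clf.predict([x])[0])
print("counterfactual prediction:", clf.predict([x_cf])[0])
print("L2 distance:", np.linalg.norm(x_cf - x))
```

In a system like DECE, generation is further steered by user interaction, for
example restricting which features may change; in this sketch that would
amount to constraining the step to the subspace of mutable features.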
Related papers
- DiffExplainer: Towards Cross-modal Global Explanations with Diffusion Models [51.21351775178525]
DiffExplainer is a novel framework that leverages language-vision models to enable multimodal global explainability.
It employs diffusion models conditioned on optimized text prompts, synthesizing images that maximize class outputs.
The analysis of generated visual descriptions allows for automatic identification of biases and spurious features.
arXiv Detail & Related papers (2024-04-03T10:11:22Z)
- I-CEE: Tailoring Explanations of Image Classification Models to User Expertise [13.293968260458962]
We present I-CEE, a framework that provides Image Classification Explanations tailored to User Expertise.
I-CEE models the informativeness of example images as dependent on user expertise, so different users receive different examples.
Experiments with simulated users show that I-CEE improves users' ability to accurately predict the model's decisions.
arXiv Detail & Related papers (2023-12-19T12:26:57Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings urge caution regarding the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis [6.606409729669314]
Post-hoc explainability methods aim to clarify predictions of black-box machine learning models.
We conduct a user study to evaluate comprehensibility and predictability in two widely used tools: LIME and SHAP.
We find that the comprehensibility of SHAP is significantly reduced when explanations are provided for samples near a model's decision boundary.
arXiv Detail & Related papers (2023-09-21T11:54:20Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Explanation as a process: user-centric construction of multi-level and multi-modal explanations [0.34410212782758043]
We present a process-based approach that combines multi-level and multi-modal explanations.
We use Inductive Logic Programming, an interpretable machine learning approach, to learn a comprehensible model.
arXiv Detail & Related papers (2021-10-07T19:26:21Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve the user experience and reveal system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- ViCE: Visual Counterfactual Explanations for Machine Learning Models [13.94542147252982]
We present an interactive visual analytics tool, ViCE, that generates counterfactual explanations to contextualize and evaluate model decisions.
Results are effectively displayed in a visual interface where counterfactual explanations are highlighted and interactive methods are provided for users to explore the data and model.
arXiv Detail & Related papers (2020-03-05T04:43:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.