Two-Stage Holistic and Contrastive Explanation of Image Classification
- URL: http://arxiv.org/abs/2306.06339v1
- Date: Sat, 10 Jun 2023 04:22:13 GMT
- Title: Two-Stage Holistic and Contrastive Explanation of Image Classification
- Authors: Weiyan Xie, Xiao-Hui Li, Zhi Lin, Leonard K. M. Poon, Caleb Chen Cao,
Nevin L. Zhang
- Abstract summary: A whole-output explanation can help a human user gain an overall understanding of model behaviour.
We propose a contrastive whole-output explanation (CWOX) method for image classification.
- Score: 16.303752364521454
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The need to explain the output of a deep neural network classifier is now
widely recognized. While previous methods typically explain a single class in
the output, we advocate explaining the whole output, which is a probability
distribution over multiple classes. A whole-output explanation can help a human
user gain an overall understanding of model behaviour instead of only one
aspect of it. It can also provide a natural framework where one can examine the
evidence used to discriminate between competing classes, and thereby obtain
contrastive explanations. In this paper, we propose a contrastive whole-output
explanation (CWOX) method for image classification, and evaluate it using
quantitative metrics and through human subject studies. The source code of CWOX
is available at https://github.com/vaynexie/CWOX.
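To make the abstract's two ideas concrete, the sketch below shows a whole-output view (inspecting the full class probability distribution) and a simple contrastive map (the gradient of the logit difference between the two top competing classes). This is a minimal illustration, not the two-stage CWOX algorithm itself; the ResNet-50 model and the gradient-based contrast are assumptions chosen for brevity (see the linked repository for the authors' implementation).

```python
# Minimal sketch of whole-output and contrastive explanation, assuming a
# PyTorch classifier. This is NOT the two-stage CWOX algorithm; see
# https://github.com/vaynexie/CWOX for the authors' implementation.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image

# Whole-output view: inspect the full probability distribution, not one class.
logits = model(x)
probs = torch.softmax(logits, dim=1)
top_p, top_c = probs[0].topk(2)          # two competing classes

# Contrastive evidence (illustrative): the gradient of the logit *difference*
# highlights input pixels that discriminate the two competitors.
contrast = logits[0, top_c[0]] - logits[0, top_c[1]]
contrast.backward()
saliency = x.grad.abs().max(dim=1)[0]    # (1, 224, 224) contrastive map
```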
Related papers
- Open-World Semi-Supervised Learning for Node Classification [53.07866559269709]
Open-world semi-supervised learning (Open-world SSL) for node classification is a practical but under-explored problem in the graph community.
We propose an IMbalance-Aware method named OpenIMA for Open-world semi-supervised node classification.
arXiv Detail & Related papers (2024-03-18T05:12:54Z)
- DXAI: Explaining Classification by Image Decomposition [4.013156524547072]
We propose a new way to visualize neural network classification through decomposition-based explainable AI (DXAI).
Instead of providing an explanation heatmap, our method yields a decomposition of the image into class-agnostic and class-distinct parts.
arXiv Detail & Related papers (2023-12-30T20:52:20Z)
- What can we learn about a generated image corrupting its latent representation? [57.1841740328509]
We investigate the hypothesis that we can predict the quality of an image based on its latent representation in the GAN's bottleneck.
We achieve this by corrupting the latent representation with noise and generating multiple outputs.
arXiv Detail & Related papers (2022-10-12T14:40:32Z)
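As a rough illustration of the corruption procedure described in the entry above, the sketch below perturbs a latent code with Gaussian noise several times and scores the spread of the generated outputs; the `generator` callable and the variance-based score are assumptions for illustration, not the authors' setup.

```python
# Hedged sketch: corrupt a GAN latent code with noise, generate several
# outputs, and score stability by their spread. `generator` is a hypothetical
# decoder mapping latents to images; sigma and n_samples are arbitrary.
import torch

def stability_score(generator, z, n_samples=8, sigma=0.1):
    with torch.no_grad():
        outputs = torch.stack([
            generator(z + sigma * torch.randn_like(z))
            for _ in range(n_samples)
        ])
    # Per-pixel variance across corrupted generations, averaged to a scalar;
    # lower spread is read here as a proxy for a more reliable image.
    return outputs.var(dim=0).mean().item()
```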
- Causality for Inherently Explainable Transformers: CAT-XPLAIN [16.85887568521622]
We utilize a recently proposed instance-wise post-hoc causal explanation method to make an existing transformer architecture inherently explainable.
Our model provides an explanation in the form of the top-$k$ input regions that contribute to its decision on a given instance.
arXiv Detail & Related papers (2022-06-29T18:11:01Z)
- Unsupervised Causal Binary Concepts Discovery with VAE for Black-box Model Explanation [28.990604269473657]
We aim to explain a black-box classifier with explanations of the form: "data X is classified as class Y because X has A, B and does not have C".
arXiv Detail & Related papers (2021-09-09T19:06:53Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting the model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
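One simple way to realize the latent-projection idea in the entry above is to keep only the component of a representation that lies along the direction separating the fact class from the foil class. The sketch below uses the final-layer weight difference for that direction, which is an assumption rather than the authors' exact construction.

```python
# Hedged sketch of a label-contrastive projection: keep only the part of the
# representation h that discriminates class `fact` from class `foil`.
# Using the classifier's final-layer weights for the direction is an
# assumption, not necessarily the paper's construction.
import numpy as np

def contrastive_projection(h, W, fact, foil):
    """h: (d,) representation; W: (num_classes, d) final-layer weight matrix."""
    u = W[fact] - W[foil]   # direction whose dot product sets the logit gap
    u = u / np.linalg.norm(u)
    return (h @ u) * u      # component of h relevant to the contrast
```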
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems aim to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach performs significantly better than two state-of-the-art systems in terms of mental models, explanation satisfaction, trust, emotions, and self-efficacy.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Dependency Decomposition and a Reject Option for Explainable Models [4.94950858749529]
Recent deep learning models perform extremely well in various inference tasks.
Recent advances offer methods to visualize features and describe the attribution of the input.
We present the first analysis of dependencies regarding the probability distribution over the desired image classification outputs.
arXiv Detail & Related papers (2020-12-11T17:39:33Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to existing work, PGExplainer has better generalization ability and can easily be applied in an inductive setting.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language? [86.60613602337246]
We introduce a leakage-adjusted simulatability (LAS) metric for evaluating natural language (NL) explanations.
LAS measures how well explanations help an observer predict a model's output, while controlling for how explanations can directly leak the output.
We frame explanation generation as a multi-agent game and optimize explanations for simulatability while penalizing label leakage.
arXiv Detail & Related papers (2020-10-08T16:59:07Z)
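The LAS entry above lends itself to a small worked example: score how much an explanation helps a simulator beyond its no-explanation baseline, averaging within leaking and non-leaking bins so that label leakage cannot inflate the score. The binning scheme and names below are a simplified reading of the metric, not the authors' code.

```python
# Hedged sketch of a leakage-adjusted simulatability (LAS)-style score.
import numpy as np

def las_score(correct_with_expl, correct_without_expl, leaked):
    """Each argument is a 0/1 array over examples:
    correct_with_expl    - simulator matches the model given input + explanation
    correct_without_expl - simulator matches the model given the input alone
    leaked               - the explanation alone reveals the model's output
    """
    w = np.asarray(correct_with_expl, float)
    b = np.asarray(correct_without_expl, float)
    leaked = np.asarray(leaked, bool)
    # Average the explanation effect inside each leakage bin, then average
    # the bins, so heavily leaking explanations cannot dominate the score.
    bins = [(w[m] - b[m]).mean() for m in (leaked, ~leaked) if m.any()]
    return float(np.mean(bins))
```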
- Explainable Image Classification with Evidence Counterfactual [0.0]
We introduce SEDC as a model-agnostic instance-level explanation method for image classification.
For a given image, SEDC searches for a small set of segments whose removal alters the classification.
We compare SEDC(-T) with popular feature importance methods such as LRP, LIME, and SHAP, and describe how the importance-ranking issues of those methods are addressed.
arXiv Detail & Related papers (2020-04-16T08:02:48Z)
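The segment-removal search described in the SEDC entry above can be sketched as a greedy loop: repeatedly remove the segment that most reduces the probability of the originally predicted class until the label flips. The `predict_proba` callable, the mean-colour replacement, and the search budget are assumptions for illustration, not the paper's code.

```python
# Hedged greedy sketch of SEDC-style counterfactual search; not the authors'
# implementation. `segments` is an integer mask (e.g. from SLIC) and
# `predict_proba` returns a 1-D class-probability array for an image.
import numpy as np

def sedc_search(image, segments, predict_proba, max_removed=10):
    target = int(np.argmax(predict_proba(image)))   # class to explain away
    removed, work = [], image.copy()
    for _ in range(max_removed):
        best_prob, best_seg, best_img = None, None, None
        for s in np.unique(segments):
            if s in removed:
                continue
            trial = work.copy()
            trial[segments == s] = image.mean()     # "remove" the segment
            p = predict_proba(trial)[target]
            if best_prob is None or p < best_prob:
                best_prob, best_seg, best_img = p, s, trial
        if best_seg is None:
            break
        removed.append(best_seg)
        work = best_img
        if int(np.argmax(predict_proba(work))) != target:
            return removed                          # removal flips the class
    return None                                     # budget exhausted
```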
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.