Evaluation of Saliency-based Explainability Method
- URL: http://arxiv.org/abs/2106.12773v1
- Date: Thu, 24 Jun 2021 05:40:50 GMT
- Title: Evaluation of Saliency-based Explainability Method
- Authors: Sam Zabdiel Sunder Samuel, Vidhya Kamakshi, Namrata Lodhi and
Narayanan C Krishnan
- Abstract summary: A class of Explainable AI (XAI) methods provides saliency maps that highlight the parts of an image a CNN model looks at when classifying it, as a way to explain how it works.
These methods provide an intuitive way for users to understand predictions made by CNNs.
Beyond quantitative computational tests, the vast majority of evidence that these methods are valuable is anecdotal.
- Score: 2.733700237741334
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A particular class of Explainable AI (XAI) methods provides saliency
maps to highlight the parts of an image a Convolutional Neural Network (CNN)
model looks at when classifying it, as a way to explain how it works. These
methods give users an intuitive way to understand the predictions made by CNNs.
Apart from quantitative computational tests, the vast majority of evidence that
these methods are valuable is anecdotal. Given that humans would be the
end-users of such methods, we devise three human subject experiments through
which we gauge the effectiveness of these saliency-based explainability
methods.
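As background for readers unfamiliar with this class of methods, the sketch below shows how a basic gradient-based saliency map can be computed in PyTorch. It is a minimal, generic illustration of highlighting influential pixels, not the specific saliency methods evaluated in the paper; the pretrained `resnet18` model and the input tensor `x` are placeholder assumptions.

```python
# Minimal sketch of a gradient-based saliency map ("vanilla gradients").
# Generic illustration only; not the specific methods evaluated in the paper.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def saliency_map(model, x):
    """Return an (H, W) heat map of pixel importance for the top predicted class."""
    x = x.detach().clone().requires_grad_(True)   # track gradients w.r.t. the input pixels
    logits = model(x)                             # shape (1, num_classes)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()               # d(top logit) / d(input pixels)
    # Collapse the colour channels by taking the maximum absolute gradient.
    return x.grad.abs().max(dim=1).values.squeeze(0)

# Usage, assuming `x` is a preprocessed image tensor of shape (1, 3, H, W):
# heatmap = saliency_map(model, x)
# Overlaying `heatmap` on the image highlights the pixels that most
# influenced the predicted class.
```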
Related papers
- Faithful and Plausible Natural Language Explanations for Image Classification: A Pipeline Approach [10.54430941755474]
This paper proposes a post-hoc natural language explanation method that can be applied to any CNN-based classification system.
By analysing influential neurons and the corresponding activation maps, the method generates a faithful description of the classifier's decision process.
Experimental results show that the NLEs constructed by our method are significantly more plausible and faithful.
arXiv Detail & Related papers (2024-07-30T15:17:15Z) - An AI Architecture with the Capability to Explain Recognition Results [0.0]
This research focuses on the importance of metrics to explainability and contributes two methods yielding performance gains.
The first method introduces a combination of explainable and unexplainable flows and proposes a metric to characterize the explainability of a decision.
The second method compares classic metrics for estimating the effectiveness of neural networks in the system and identifies a new metric as the leading performer.
arXiv Detail & Related papers (2024-06-13T02:00:13Z) - Classification Metrics for Image Explanations: Towards Building Reliable XAI-Evaluations [0.24578723416255752]
Saliency methods provide (super-)pixelwise feature attribution scores for input images.
New evaluation metrics for saliency methods are developed and common saliency methods are benchmarked on ImageNet.
A scheme for evaluating the reliability of such metrics, based on concepts from psychometric testing, is proposed.
arXiv Detail & Related papers (2024-06-07T16:37:50Z) - On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis [1.55858752644861]
The state of the art indicates that hidden node activations can, in some cases, be interpretable in a way that makes sense to humans.
We introduce a novel model-agnostic post-hoc Explainable AI method and demonstrate that it provides meaningful interpretations.
arXiv Detail & Related papers (2024-04-21T07:57:45Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - On The Coherence of Quantitative Evaluation of Visual Explanations [0.7212939068975619]
Evaluation methods have been proposed to assess the "goodness" of visual explanations.
We study a subset of the ImageNet-1k validation set on which we evaluate a number of commonly used explanation methods.
The results suggest a lack of coherency in the grading provided by some of the considered evaluation methods.
(A minimal sketch of one such quantitative check appears after this list.)
arXiv Detail & Related papers (2023-02-14T13:41:57Z) - Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z) - This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Visualization of Supervised and Self-Supervised Neural Networks via
Attribution Guided Factorization [87.96102461221415]
We develop an algorithm that provides per-class explainability.
In an extensive battery of experiments, we demonstrate the ability of our method to produce class-specific visualizations.
arXiv Detail & Related papers (2020-12-03T18:48:39Z) - Evaluating Explainable AI: Which Algorithmic Explanations Help Users
Predict Model Behavior? [97.77183117452235]
We carry out human subject tests to isolate the effect of algorithmic explanations on model interpretability.
Clear evidence of method effectiveness is found in very few cases.
Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability.
arXiv Detail & Related papers (2020-05-04T20:35:17Z)
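Several of the related papers above concern quantitative computational evaluation of saliency maps. As a reference point, the sketch below implements one widely used style of check, a deletion curve: pixels are removed in order of decreasing saliency and the drop in the model's confidence is tracked. It is a generic illustration under assumed inputs (`model`, `x`, `saliency`), not the exact metric proposed in any of the cited papers.

```python
# Minimal sketch of a deletion-style check for a saliency map. A faithful map
# should make the model's confidence in its original prediction drop quickly
# as the most salient pixels are removed. Illustration only; not the exact
# metric from any of the cited papers.
import torch

def deletion_curve(model, x, saliency, steps=20):
    """x: (1, 3, H, W) input tensor; saliency: (H, W) importance scores."""
    model.eval()
    with torch.no_grad():
        target = model(x).argmax(dim=1).item()           # class whose confidence is tracked
    _, w = saliency.shape
    order = saliency.flatten().argsort(descending=True)  # most salient pixels first
    x_masked = x.clone()
    chunk = max(order.numel() // steps, 1)
    confidences = []
    for i in range(steps):
        idx = order[i * chunk:(i + 1) * chunk]
        rows, cols = idx // w, idx % w
        x_masked[0, :, rows, cols] = 0.0                 # "delete" these pixels
        with torch.no_grad():
            prob = torch.softmax(model(x_masked), dim=1)[0, target]
        confidences.append(prob.item())
    return confidences  # the area under this curve is a common summary score
```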
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.