Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
- URL: http://arxiv.org/abs/2009.02899v4
- Date: Sat, 7 May 2022 09:24:49 GMT
- Title: Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
- Authors: Erico Tjoa, Cuntai Guan
- Abstract summary: This paper introduces a synthetic dataset that can be generated ad hoc, along with ground-truth heatmaps, for more objective quantitative assessment.
Each sample is an image of a cell with easily recognized features that are distinguished from the localization ground-truth mask.
- Score: 16.1448256306394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Post-hoc analysis is a popular category of eXplainable artificial
intelligence (XAI) research. In particular, methods that generate heatmaps
have been used to explain deep neural networks (DNNs), which are black-box
models. Heatmaps can be appealing because they are intuitive and visual to
interpret, but assessing their quality is not straightforward. Different ways
to assess heatmap quality have their own merits and shortcomings. This paper
introduces a synthetic dataset that can be generated ad hoc, along with
ground-truth heatmaps, to enable more objective quantitative assessment. Each
sample is an image of a cell with easily recognized features that are
distinguished from the localization ground-truth mask, facilitating a more
transparent assessment of different XAI methods. Comparisons and
recommendations are made, and shortcomings are clarified, along with
suggestions for future research directions to handle the finer details of
selected post-hoc analysis methods.
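To make the evaluation idea concrete, here is a minimal sketch of the kind of pipeline the abstract describes: generate a synthetic "cell" image together with its ground-truth localization mask, then score a saliency heatmap by its overlap with that mask. This is an illustration only, not the authors' generator; `make_cell_sample`, `heatmap_iou`, the disc-shaped cell, and the 0.5 binarization threshold are all assumptions.

```python
import numpy as np

def make_cell_sample(size=64, radius=10, noise=0.1, seed=0):
    """Hypothetical generator: a bright disc ("cell") at a random position,
    returned together with its ground-truth localization mask."""
    rng = np.random.default_rng(seed)
    cy, cx = rng.integers(radius, size - radius, size=2)
    yy, xx = np.mgrid[:size, :size]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    image = mask.astype(float) + noise * rng.standard_normal((size, size))
    return image, mask

def heatmap_iou(heatmap, mask, threshold=0.5):
    """Binarize a saliency heatmap and compute intersection-over-union with
    the ground-truth mask -- one simple objective quality score."""
    hm = (heatmap - heatmap.min()) / (np.ptp(heatmap) + 1e-8)  # scale to [0, 1]
    pred = hm >= threshold
    inter = np.logical_and(pred, mask).sum()
    union = np.logical_or(pred, mask).sum()
    return inter / max(union, 1)

image, mask = make_cell_sample(seed=42)
# Stand-in for a saliency method's output: the noisy image itself.
print(f"IoU of a trivial heatmap: {heatmap_iou(image, mask):.3f}")
```

IoU against the ground-truth mask is only one possible agreement score; in practice one would run an actual saliency method (e.g., Grad-CAM or LRP) on a trained model and compare its heatmap to the mask in the same way.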
Related papers
- Robustness of Explainable Artificial Intelligence in Industrial Process Modelling [43.388607981317016]
We evaluate current XAI methods by scoring them based on ground truth simulations and sensitivity analysis.
We show the differences between XAI methods in their ability to correctly predict the true sensitivity of the modeled industrial process.
arXiv Detail & Related papers (2024-07-12T09:46:26Z)
- Part-based Quantitative Analysis for Heatmaps [49.473051402754486]
Heatmaps have been instrumental in helping to understand deep network decisions, and are a common approach for Explainable AI (XAI).
Heatmap analysis is typically very subjective and limited to domain experts.
arXiv Detail & Related papers (2024-05-22T00:24:17Z) - EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust
and Non-Robust Models [0.3425341633647624]
This paper focuses on evaluating methods of attribution mapping to find whether robust neural networks are more explainable.
We propose a new explainability faithfulness metric (called EvalAttAI) that addresses the limitations of prior metrics.
arXiv Detail & Related papers (2023-03-15T18:33:22Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- Learning Topology from Synthetic Data for Unsupervised Depth Completion [66.26787962258346]
We present a method for inferring dense depth maps from images and sparse depth measurements.
We learn the association of sparse point clouds with dense natural shapes, and use the image as evidence to validate the predicted depth map.
arXiv Detail & Related papers (2021-06-06T00:21:12Z)
- Evaluating Explainable Artificial Intelligence Methods for Multi-label Deep Learning Classification Tasks in Remote Sensing [0.0]
We develop deep learning models with state-of-the-art performance on benchmark datasets.
Ten XAI methods were employed to understand and interpret the models' predictions.
Occlusion, Grad-CAM and Lime were the most interpretable and reliable XAI methods.
arXiv Detail & Related papers (2021-04-03T11:13:14Z)
- Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset [0.05156484100374058]
We provide a framework to generate attribution benchmark datasets for regression problems in the geosciences.
We train a fully-connected network to learn the underlying function that was used for simulation.
We compare estimated attribution heatmaps from different XAI methods to the ground truth in order to identify examples where specific XAI methods perform well or poorly.
arXiv Detail & Related papers (2021-03-18T03:39:17Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI [12.680653816836541]
We propose a ground truth based evaluation framework for XAI methods based on the CLEVR visual question answering task.
Our framework provides a (1) selective, (2) controlled and (3) realistic testbed for the evaluation of neural network explanations.
arXiv Detail & Related papers (2020-03-16T14:43:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.