ADVISE: ADaptive Feature Relevance and VISual Explanations for
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2203.01289v1
- Date: Wed, 2 Mar 2022 18:16:57 GMT
- Title: ADVISE: ADaptive Feature Relevance and VISual Explanations for
Convolutional Neural Networks
- Authors: Mohammad Mahdi Dehshibi, Mona Ashtari-Majlan, Gereziher Adhane, David
Masip
- Abstract summary: We introduce ADVISE, a new explainability method that quantifies and leverages the relevance of each unit of the feature map to provide better visual explanations.
We extensively evaluate our idea in the image classification task using AlexNet, VGG16, ResNet50, and Xception pretrained on ImageNet.
Our experiments further show that ADVISE fulfils the sensitivity and implementation independence axioms while passing the sanity checks.
- Score: 0.745554610293091
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To equip Convolutional Neural Networks (CNNs) with explainability, it is
essential to interpret how opaque models take specific decisions, understand
what causes the errors, improve the architecture design, and identify unethical
biases in the classifiers. This paper introduces ADVISE, a new explainability
method that quantifies and leverages the relevance of each unit of the feature
map to provide better visual explanations. To this end, we propose using
adaptive bandwidth kernel density estimation to assign a relevance score to
each unit of the feature map with respect to the predicted class. We also
propose an evaluation protocol to quantitatively assess the visual
explainability of CNN models. We extensively evaluate our idea in the image
classification task using AlexNet, VGG16, ResNet50, and Xception pretrained on
ImageNet. We compare ADVISE with the state-of-the-art visual explainable
methods and show that the proposed method outperforms competing approaches in
quantifying feature-relevance and visual explainability while maintaining
competitive time complexity. Our experiments further show that ADVISE fulfils
the sensitivity and implementation independence axioms while passing the sanity
checks. The implementation is available for reproducibility purposes at
https://github.com/dehshibi/ADVISE.
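As a rough illustration of the core idea (not the authors' implementation, which lives in the repository above), the sketch below assigns each unit of a network's last convolutional feature map a score derived from a kernel density estimate of its activations, using a data-driven bandwidth per unit, and then weights a CAM-style map with those scores. The choice of layer, the Silverman bandwidth rule standing in for the paper's adaptive-bandwidth estimator, and the toy scoring rule are all simplifying assumptions.

```python
# Rough sketch of KDE-based unit relevance weighting (not the official ADVISE code;
# see https://github.com/dehshibi/ADVISE for the authors' implementation).
import numpy as np
import torch
from scipy.stats import gaussian_kde
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
feats = {}
def hook(module, inputs, output):
    feats["a"] = output
model.features.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)               # stand-in for a preprocessed ImageNet image
with torch.no_grad():
    model(x)
act = feats["a"][0].numpy()                    # (512, 7, 7) output of the last conv block

scores = []
for unit in act:                               # one relevance score per feature-map unit
    vals = unit.ravel()
    if vals.std() < 1e-8:                      # KDE needs non-degenerate data
        scores.append(0.0)
        continue
    kde = gaussian_kde(vals, bw_method="silverman")          # data-driven bandwidth per unit
    hi = vals.mean() + vals.std()
    scores.append(float(kde.integrate_box_1d(hi, np.inf)))   # toy score: mass of strong activations

w = np.array(scores)
w = (w - w.min()) / (w.max() - w.min() + 1e-8)
saliency = np.maximum((w[:, None, None] * act).sum(0), 0.0)  # CAM-style weighted combination
print(saliency.shape)                          # coarse 7x7 map; upsample to overlay on the image
```

The relevance definition actually used by ADVISE, and the proposed evaluation protocol, are specified in the paper and repository.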
Related papers
- Visual-TCAV: Concept-based Attribution and Saliency Maps for Post-hoc Explainability in Image Classification [3.9626211140865464]
Convolutional Neural Networks (CNNs) have seen significant performance improvements in recent years.
However, due to their size and complexity, they function as black-boxes, leading to transparency concerns.
This paper introduces a novel post-hoc explainability framework, Visual-TCAV, which aims to bridge the gap between concept-based attribution methods and saliency-based explanations.
arXiv Detail & Related papers (2024-11-08T16:52:52Z)
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address the mismatch between CLIP's generic pretraining and the IQA task using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
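For context on the CLIP backbone mentioned above, a common way to obtain a zero-shot quality signal is the antonym-prompt trick used by CLIP-IQA-style methods. The sketch below is that generic baseline, not the multi-modal prompt-learning scheme proposed in the paper, and the prompt pair is an assumption.

```python
# Generic CLIP antonym-prompt quality scoring (illustrative; not the paper's method).
# Requires the OpenAI CLIP package: pip install git+https://github.com/openai/CLIP.git
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

prompts = clip.tokenize(["a good photo.", "a bad photo."]).to(device)      # assumed prompt pair
image = preprocess(Image.new("RGB", (224, 224))).unsqueeze(0).to(device)   # stand-in image

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(prompts)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    logits = 100.0 * img_feat @ txt_feat.T
    quality = logits.softmax(dim=-1)[0, 0].item()   # P("good photo") as a crude quality score

print(f"predicted quality: {quality:.3f}")
```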
arXiv Detail & Related papers (2024-04-23T11:45:32Z)
- Structure Your Data: Towards Semantic Graph Counterfactuals [1.8817715864806608]
Concept-based counterfactual explanations (CEs) consider alternative scenarios to understand which high-level semantic features contributed to model predictions.
In this work, we propose CEs based on the semantic graphs accompanying input data to achieve more descriptive, accurate, and human-aligned explanations.
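A toy way to picture graph-based counterfactual search is to rank candidate graphs from other classes by their graph edit distance to the input's graph and return the closest one. The scene graphs, class labels, and use of networkx below are illustrative assumptions rather than the paper's pipeline.

```python
# Toy graph-based counterfactual search (illustrative; not the paper's method).
import networkx as nx

def scene_graph(triples):
    """Build a small labelled graph from (subject, relation, object) triples."""
    g = nx.Graph()
    for s, r, o in triples:
        g.add_node(s, label=s)
        g.add_node(o, label=o)
        g.add_edge(s, o, label=r)
    return g

# Hypothetical query graph (predicted class: "tennis") and candidates from other classes.
query = scene_graph([("person", "holds", "racket"), ("person", "on", "court")])
candidates = {
    "baseball": scene_graph([("person", "holds", "bat"), ("person", "on", "field"),
                             ("person", "wears", "helmet")]),
    "badminton": scene_graph([("person", "holds", "racket"), ("person", "on", "court"),
                              ("racket", "hits", "shuttlecock")]),
}

node_eq = lambda a, b: a["label"] == b["label"]
edge_eq = lambda a, b: a["label"] == b["label"]

# Counterfactual class = candidate whose graph needs the fewest edits to match the query.
best = min(candidates, key=lambda c: nx.graph_edit_distance(
    query, candidates[c], node_match=node_eq, edge_match=edge_eq))
print("nearest counterfactual class:", best)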
arXiv Detail & Related papers (2024-03-11T08:40:37Z)
- Towards Better Visualizing the Decision Basis of Networks via Unfold and Conquer Attribution Guidance [29.016425469068587]
We propose a novel framework, Unfold and Conquer Guidance (UCAG), which enhances the explainability of the network decision.
UCAG sequentially follows the confidence of slices of the image, providing a rich and clear interpretation.
We conduct numerous evaluations to validate the performance across several metrics.
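As a loose stand-in for the idea of aggregating per-slice confidence into an explanation (UCAG's actual unfold-and-conquer procedure is defined in the paper), the sketch below slides a crop over the image, records the model's confidence for the predicted class, and averages it into a spatial map; the crop size, stride, and backbone are assumptions.

```python
# Loose illustration of aggregating per-slice confidence into a spatial map
# (occlusion/crop-style stand-in; not the UCAG algorithm itself).
import torch
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1").eval()
img = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed image
with torch.no_grad():
    target = model(img).argmax(1).item()     # class whose evidence we want to localize

heat = torch.zeros(224, 224)
counts = torch.zeros(224, 224)
win, stride = 112, 56                        # assumed slice size and stride
with torch.no_grad():
    for y in range(0, 224 - win + 1, stride):
        for x in range(0, 224 - win + 1, stride):
            crop = img[:, :, y:y + win, x:x + win]
            crop = torch.nn.functional.interpolate(crop, size=224, mode="bilinear")
            conf = model(crop).softmax(1)[0, target]
            heat[y:y + win, x:x + win] += conf
            counts[y:y + win, x:x + win] += 1

heat = heat / counts.clamp(min=1)            # average per-slice confidence per pixel
print(heat.min().item(), heat.max().item())
```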
arXiv Detail & Related papers (2023-12-21T03:43:19Z)
- SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training [10.716021768803433]
Saliency maps are a common form of explanation, illustrating feature attributions as a heatmap.
We propose a model-agnostic learning method called Saliency Constrained Adaptive Adversarial Training (SCAAT) to improve the quality of such DNN interpretability.
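A generic picture of saliency-constrained adversarial training is sketched below: an input-gradient saliency map constrains where the adversarial perturbation may act before the perturbed batch joins the training loss. The masking rule, budget, and schedule are assumptions for illustration; SCAAT's exact formulation is given in the paper.

```python
# Generic sketch of one saliency-constrained adversarial training step
# (illustrative only; not SCAAT's exact constraint or schedule).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)        # toy setup; any classifier works
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(4, 3, 224, 224)              # stand-in batch
y = torch.randint(0, 1000, (4,))
eps = 2.0 / 255                              # assumed perturbation budget

# 1) Input-gradient saliency for the current batch.
x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), y)
grad, = torch.autograd.grad(loss, x_adv)
saliency = grad.abs().amax(dim=1, keepdim=True)          # (N, 1, H, W)

# 2) Constrain the adversarial perturbation to low-saliency pixels (assumed constraint).
thr = saliency.flatten(1).median(dim=1).values.view(-1, 1, 1, 1)
mask = (saliency < thr).float()
x_pert = (x + eps * grad.sign() * mask).detach()

# 3) Train on clean + constrained-adversarial inputs.
opt.zero_grad()
total = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_pert), y)
total.backward()
opt.step()
print(float(total))
```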
arXiv Detail & Related papers (2023-11-09T04:48:38Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
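The jump from instance-level to dataset-level ("second-order") explanations can be pictured by aggregating per-instance attributions over a whole dataset and flagging features with consistently low relevance; the toy model, gradient-times-input attribution, and cutoff below are assumptions, not the SOXAI procedure itself.

```python
# Conceptual sketch of moving from instance-level to dataset-level attributions
# (illustrative; SOXAI's actual analysis works on concepts, not raw features).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
data = torch.randn(256, 1, 8, 8)             # toy dataset
labels = torch.randint(0, 10, (256,))

# Instance-level: gradient-times-input attribution per feature.
data.requires_grad_(True)
loss = F.cross_entropy(model(data), labels)
loss.backward()
inst_attr = (data.grad * data).abs().flatten(1)          # (N, 64)

# Dataset-level ("second-order") view: average relevance of each feature over all instances.
dataset_attr = inst_attr.mean(0)
irrelevant = torch.topk(dataset_attr, k=16, largest=False).indices
print("candidate irrelevant features:", irrelevant.tolist())
```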
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- From Canonical Correlation Analysis to Self-supervised Graph Neural Networks [99.44881722969046]
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.
We optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis.
Our method performs competitively on seven public graph datasets.
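The CCA-inspired feature-level objective can be sketched as an invariance term between two views' embeddings plus a decorrelation term pushing each view's feature covariance toward the identity. The plain tensors below stand in for GNN node embeddings of two augmented graph views, and the weighting is an assumption.

```python
# Sketch of a CCA-inspired feature-level SSL objective on two views' embeddings
# (the paper applies such an objective with a GNN encoder on graphs).
import torch

def cca_ssl_loss(z1, z2, lam=1e-3):
    n, d = z1.shape
    # Standardize each feature dimension, as in classical CCA preprocessing.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    # Invariance: match representations across the two views.
    inv = ((z1 - z2) ** 2).sum() / n
    # Decorrelation: push each view's feature covariance toward identity.
    eye = torch.eye(d)
    c1 = (z1.T @ z1) / n
    c2 = (z2.T @ z2) / n
    dec = ((c1 - eye) ** 2).sum() + ((c2 - eye) ** 2).sum()
    return inv + lam * dec

z1, z2 = torch.randn(128, 32), torch.randn(128, 32)   # stand-ins for two views' node embeddings
print(float(cca_ssl_loss(z1, z2)))
```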
arXiv Detail & Related papers (2021-06-23T15:55:47Z)
- Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis [54.94682858474711]
Class Activation Mapping (CAM) approaches provide an effective visualization by taking weighted averages of the activation maps.
We propose a novel set of metrics to quantify explanation maps, which show better effectiveness and simplify comparisons between approaches.
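For reference, the weighted-average construction is the original CAM recipe: combine the last convolutional activation maps using the fully connected weights of the predicted class. A minimal sketch with a torchvision ResNet50 follows (the random tensor stands in for a preprocessed image).

```python
# Minimal CAM sketch: weight the last conv activation maps by the fc weights
# of the predicted class (original CAM formulation for GAP-based networks).
import torch
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1").eval()
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.__setitem__("a", o))

x = torch.randn(1, 3, 224, 224)                  # stand-in for a preprocessed image
with torch.no_grad():
    cls = model(x).argmax(1).item()

act = feats["a"][0]                              # (2048, 7, 7) last conv activations
w = model.fc.weight[cls]                         # (2048,) class weights
cam = torch.relu((w[:, None, None] * act).sum(0))
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)                                 # upsample to 224x224 to overlay on the image
```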
arXiv Detail & Related papers (2021-04-20T21:34:24Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
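The core move, adversarial perturbation applied to intermediate feature embeddings rather than pixels, can be sketched as below. Splitting the network, the FGSM-style step, and the budget are illustrative assumptions; the paper's full method also includes a normalization component.

```python
# Generic sketch of adversarial augmentation on intermediate feature embeddings
# (illustrative; not the paper's exact scheme).
import torch
import torch.nn.functional as F
from torchvision import models

resnet = models.resnet18(weights=None)
backbone = torch.nn.Sequential(*list(resnet.children())[:-1], torch.nn.Flatten())  # -> (N, 512)
head = resnet.fc                                                                    # 512 -> 1000
opt = torch.optim.SGD(list(backbone.parameters()) + list(head.parameters()), lr=1e-3)

x = torch.randn(4, 3, 224, 224)              # stand-in batch
y = torch.randint(0, 1000, (4,))
eps = 0.1                                    # assumed feature-space budget

feat = backbone(x)
feat_adv = feat.detach().clone().requires_grad_(True)
loss_adv = F.cross_entropy(head(feat_adv), y)
grad, = torch.autograd.grad(loss_adv, feat_adv)
feat_aug = feat + eps * grad.sign()          # adversarial augmentation in embedding space

opt.zero_grad()
loss = F.cross_entropy(head(feat), y) + F.cross_entropy(head(feat_aug), y)
loss.backward()
opt.step()
print(float(loss))
```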
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
- Boundary Attributions Provide Normal (Vector) Explanations [27.20904776964045]
Boundary Attribution (BA) is a new explanation method that connects attributions to the model's local decision boundaries.
BA involves computing normal vectors of the local decision boundaries for the target input.
We prove two theorems for ReLU networks: the BA of randomized smoothed or robustly trained networks is much closer to non-boundary attribution methods than that of standard networks.
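A simplified version of the boundary-normal idea is sketched below: take a crude step toward the runner-up class, binary-search for a point near the decision boundary, and read off the margin gradient there as the normal-vector attribution. The attack, its strength, and the backbone are assumptions, and the paper's BA is defined more carefully.

```python
# Simplified boundary-normal explanation sketch (not the paper's exact BA).
import torch
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
x = torch.randn(1, 3, 224, 224)              # stand-in for a preprocessed image
with torch.no_grad():
    top2 = model(x).topk(2, dim=1).indices[0]   # predicted class and runner-up

# Crude FGSM-style step toward the runner-up class to reach the other side of the boundary.
x_req = x.clone().requires_grad_(True)
out = model(x_req)
margin = out[0, top2[0]] - out[0, top2[1]]
margin.backward()
x_adv = x - 3.0 * x_req.grad.sign() * (8.0 / 255)   # assumed attack strength

# Binary search between x and x_adv for a point close to the decision boundary.
lo, hi = 0.0, 1.0
for _ in range(20):
    mid = (lo + hi) / 2
    with torch.no_grad():
        pred = model(x + mid * (x_adv - x)).argmax(1).item()
    lo, hi = (mid, hi) if pred == top2[0].item() else (lo, mid)

xb = (x + hi * (x_adv - x)).requires_grad_(True)
out_b = model(xb)
m = out_b[0, top2[0]] - out_b[0, top2[1]]
m.backward()
normal = xb.grad[0]                          # boundary normal used as the attribution
print(normal.abs().amax(0).shape)            # (224, 224) saliency from the normal vector
```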
arXiv Detail & Related papers (2021-03-20T22:36:39Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our designed framework.
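The mechanics of attribute-conditioned counterfactual generation can be pictured with a conditional generator whose attribute code is edited before regeneration. The untrained toy networks, attribute semantics, and single attribute flip below are purely illustrative; the paper's AIP works with trained, attribute-conditioned generative models and optimizes the perturbation.

```python
# Toy mechanics of attribute-conditioned counterfactual generation (illustrative only).
import torch

torch.manual_seed(0)
latent_dim, attr_dim = 16, 4

generator = torch.nn.Sequential(                       # stand-in conditional generator G(z, a)
    torch.nn.Linear(latent_dim + attr_dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))
classifier = torch.nn.Linear(32, 2)                    # stand-in target classifier

z = torch.randn(1, latent_dim)                         # latent code of the instance
attrs = torch.tensor([[1.0, 0.0, 1.0, 0.0]])           # original attributes (assumed semantics)

x_orig = generator(torch.cat([z, attrs], dim=1))
y_orig = classifier(x_orig).argmax(1).item()

attrs_cf = attrs.clone()
attrs_cf[0, 1] = 1.0                                   # inform the perturbation via an attribute flip
x_cf = generator(torch.cat([z, attrs_cf], dim=1))
y_cf = classifier(x_cf).argmax(1).item()

print(f"original label: {y_orig}, counterfactual candidate label: {y_cf}")
```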
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.