White Box Methods for Explanations of Convolutional Neural Networks in
Image Classification Tasks
- URL: http://arxiv.org/abs/2104.02548v1
- Date: Tue, 6 Apr 2021 14:40:00 GMT
- Title: White Box Methods for Explanations of Convolutional Neural Networks in
Image Classification Tasks
- Authors: Meghna P Ayyar, Jenny Benois-Pineau, Akka Zemmari
- Abstract summary: Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance for the task of image classification.
Several approaches have been proposed to explain and understand the reasoning behind a prediction made by a network.
We focus primarily on white box methods that leverage the information of the internal architecture of a network to explain its decision.
- Score: 3.3959642559854357
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, deep learning has become prevalent for solving applications
from multiple domains. Convolutional Neural Networks (CNNs) in particular have
demonstrated state-of-the-art performance for the task of image classification.
However, the decisions made by these networks are not transparent and cannot be
directly interpreted by a human. Several approaches have been proposed to
explain and understand the reasoning behind a prediction made by a network. In
this paper, we propose a taxonomy for grouping these methods based on their
assumptions and implementations. We focus primarily on white box methods that
leverage the information of the internal architecture of a network to explain
its decision. Given the task of image classification and a trained CNN, this
work aims to provide a comprehensive and detailed overview of a set of methods
that can be used to create explanation maps for a particular image, which assign
an importance score to each pixel of the image based on its contribution to the
network's decision. We also propose a further classification of the white
box methods based on their implementations to enable better comparisons and
help researchers find methods best suited for different scenarios.
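To make the notion of an explanation map concrete, the following is a minimal sketch of one of the simplest white box methods, a vanilla gradient (saliency) map; it assumes a trained PyTorch classifier, and the torchvision model, random input, and preprocessing shown here are illustrative placeholders rather than anything prescribed by the paper.

# Minimal gradient-saliency sketch: pixel importance = magnitude of the gradient
# of the predicted class score with respect to the input. Model and input are placeholders.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stands in for a preprocessed image

logits = model(image)
target_class = logits.argmax(dim=1).item()

# Backpropagate the predicted-class score down to the input pixels.
logits[0, target_class].backward()

# Reduce over color channels and normalize to [0, 1] to obtain the explanation map.
saliency = image.grad.abs().max(dim=1)[0].squeeze(0)     # shape (224, 224)
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)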
Related papers
- InfoDisent: Explainability of Image Classification Models by Information Disentanglement [9.380255522558294]
We introduce InfoDisent, a hybrid model that combines the advantages of both approaches.
By utilizing an information bottleneck, InfoDisent disentangles the information in the final layer of a pre-trained deep network.
We validate the effectiveness of InfoDisent on benchmark datasets such as ImageNet, CUB-200-2011, Stanford Cars, and Stanford Dogs.
arXiv Detail & Related papers (2024-09-16T14:39:15Z)
- DP-Net: Learning Discriminative Parts for image recognition [4.480595534587716]
DP-Net is a deep architecture with strong interpretation capabilities.
It exploits a pretrained Convolutional Neural Network (CNN) combined with a part-based recognition module.
arXiv Detail & Related papers (2024-04-23T13:42:12Z)
- Understanding the Role of Pathways in a Deep Neural Network [4.456675543894722]
We analyze a convolutional neural network (CNN) trained in the classification task and present an algorithm to extract the diffusion pathways of individual pixels.
We find that the few largest pathways of an individual pixel from an image tend to cross the feature maps in each layer that is important for classification.
arXiv Detail & Related papers (2024-02-28T07:53:19Z)
- Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value [86.69600830581912]
We develop a novel visual explanation method called Shap-CAM based on class activation mapping.
We demonstrate that Shap-CAM achieves better visual performance and fairness for interpreting the decision making process.
arXiv Detail & Related papers (2022-08-07T00:59:23Z)
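For context on what a class activation map is, below is a generic gradient-weighted CAM (Grad-CAM style) sketch rather than Shap-CAM itself, which instead derives the channel weights from Shapley values; the PyTorch model, target layer, and random input are illustrative assumptions.

# Generic gradient-weighted class activation map (Grad-CAM style) sketch; Shap-CAM
# replaces these gradient-based channel weights with Shapley-value estimates.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    activations["value"] = output

def backward_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0]

target_layer = model.layer4[-1]                    # last convolutional block (assumed target)
target_layer.register_forward_hook(forward_hook)
target_layer.register_full_backward_hook(backward_hook)

image = torch.rand(1, 3, 224, 224)                 # stands in for a preprocessed image
logits = model(image)
logits[0, logits.argmax(dim=1).item()].backward()

# Channel weights = globally averaged gradients; CAM = ReLU of the weighted activation sum.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)              # (1, C, 1, 1)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))  # (1, 1, h, w)
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)                 # explanation map in [0, 1]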
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Explainability-aided Domain Generalization for Image Classification [0.0]
We show that applying methods and architectures from the explainability literature can achieve state-of-the-art performance for the challenging task of domain generalization.
We develop a set of novel algorithms including DivCAM, an approach where the network receives guidance during training via gradient based class activation maps to focus on a diverse set of discriminative features.
Since these methods offer competitive performance on top of explainability, we argue that the proposed methods can be used as a tool to improve the robustness of deep neural network architectures.
arXiv Detail & Related papers (2021-04-05T02:27:01Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we prove that dynamically adapting network architectures tailored for each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
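As a rough illustration of optimizing an image under a combined activation and distance objective, the sketch below pairs a generic activation-maximization term with a pixel-space distance term; the paper's actual loss definitions, layer choice, and weighting are not reproduced here, and the PyTorch model, learning rate, and step count are assumptions.

# Generic activation-maximization sketch with an added distance term; an
# interpretation of a dual activation/distance objective, not the paper's exact losses.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()
features = torch.nn.Sequential(*list(model.children())[:-2])   # convolutional feature maps
for p in features.parameters():
    p.requires_grad_(False)

reference = torch.rand(1, 3, 224, 224)             # stands in for an image from the given set
synth = reference.clone().requires_grad_(True)     # image being optimized
optimizer = torch.optim.Adam([synth], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation_term = -features(synth).mean()               # encourage strong layer activations
    distance_term = (synth - reference).pow(2).mean()       # keep the result close to the input
    loss = activation_term + 0.1 * distance_term            # illustrative weighting
    loss.backward()
    optimizer.step()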
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification [58.20132466198622]
We propose Attentive CutMix, a naturally enhanced augmentation strategy based on CutMix.
In each training iteration, we choose the most descriptive regions based on the intermediate attention maps from a feature extractor.
Our proposed method is simple yet effective, easy to implement and can boost the baseline significantly.
arXiv Detail & Related papers (2020-03-29T15:01:05Z)
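The sketch below illustrates the general Attentive CutMix recipe of pasting the most-attended patches of one image onto another and mixing the labels in proportion; the 7x7 grid, number of patches, ResNet-50 extractor, and one-hot label format are illustrative assumptions rather than the paper's exact configuration.

# Attentive CutMix-style augmentation sketch: cut the grid cells that a pretrained
# feature extractor attends to most and paste them onto another image.
import torch
import torchvision.models as models

extractor = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
extractor.eval()
backbone = torch.nn.Sequential(*list(extractor.children())[:-2])   # keep the conv feature maps

def attentive_cutmix(source, target, source_label, target_label, top_k=6, grid=7):
    # source/target: (3, 224, 224) tensors; labels: one-hot float vectors (assumed format).
    with torch.no_grad():
        feat = backbone(source.unsqueeze(0))       # (1, C, grid, grid) for a 224x224 input
        attention = feat.mean(dim=1).flatten()     # channel-averaged attention, (grid*grid,)
    patch = source.shape[-1] // grid               # patch side length in pixels
    mixed = target.clone()
    for i in attention.topk(top_k).indices.tolist():
        r, c = (i // grid) * patch, (i % grid) * patch
        mixed[:, r:r + patch, c:c + patch] = source[:, r:r + patch, c:c + patch]
    lam = top_k / (grid * grid)                    # fraction of the image taken from the source
    return mixed, lam * source_label + (1 - lam) * target_label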
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.