Deeply Explain CNN via Hierarchical Decomposition
- URL: http://arxiv.org/abs/2201.09205v1
- Date: Sun, 23 Jan 2022 07:56:04 GMT
- Title: Deeply Explain CNN via Hierarchical Decomposition
- Authors: Ming-Ming Cheng, Peng-Tao Jiang, Ling-Hao Han, Liang Wang, Philip Torr
- Abstract summary: In computer vision, some attribution methods for explaining CNNs attempt to study how the intermediate features affect the network prediction.
This paper introduces a hierarchical decomposition framework to explain CNN's decision-making process in a top-down manner.
- Score: 75.01251659472584
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In computer vision, some attribution methods for explaining CNNs attempt to
study how the intermediate features affect the network prediction. However,
they usually ignore the feature hierarchies among the intermediate features.
This paper introduces a hierarchical decomposition framework to explain CNN's
decision-making process in a top-down manner. Specifically, we propose a
gradient-based activation propagation (gAP) module that can decompose any
intermediate CNN decision to its lower layers and find the supporting features.
Then we utilize the gAP module to iteratively decompose the network decision to
the supporting evidence from different CNN layers. The proposed framework can
generate a deep hierarchy of strongly associated supporting evidence for the
network decision, which provides insight into the decision-making process.
Moreover, gAP is effort-free for understanding CNN-based models, requiring
neither network architecture modification nor an extra training process.
Experiments show the
effectiveness of the proposed method. The code and interactive demo website
will be made publicly available.
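The gradient-based decomposition idea behind gAP can be illustrated with a minimal NumPy sketch in the spirit of gradient-weighted attribution; the paper's exact formulation may differ, and the function name and toy data below are illustrative assumptions, not the authors' code. A decision at one layer is attributed to lower-layer feature maps by weighting each map with its spatially averaged gradient and keeping the positive evidence:

```python
import numpy as np

def gap_decompose(activations, gradients):
    """Decompose a decision over lower-layer feature maps (hypothetical sketch).

    activations: (C, H, W) feature maps from a lower layer.
    gradients:   (C, H, W) gradients of the chosen decision w.r.t. those maps.
    Returns an (H, W) attribution map and per-channel weights.
    """
    # Spatially average the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))                     # (C,)
    # Weighted sum of feature maps; keep only positive (supporting) evidence.
    attribution = np.maximum(
        0.0, np.tensordot(weights, activations, axes=1))      # (H, W)
    return attribution, weights

# Toy example: 4 channels of 8x8 feature maps with random values.
rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
attr, w = gap_decompose(acts, grads)
print(attr.shape, bool((attr >= 0).all()))  # (8, 8) True
```

Applied iteratively, such a decomposition selects the strongest supporting channels at each layer and re-attributes each of them one layer further down, yielding the hierarchy of evidence the abstract describes.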
Related papers
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs with dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- Interpreting CNN Predictions using Conditional Generative Adversarial Networks [1.8416014644193066]
We train a conditional Generative Adversarial Network (GAN) to generate visual interpretations of a Convolutional Neural Network (CNN)
To comprehend a CNN, the GAN is trained with information on how the CNN processes an image when making predictions.
We developed a suitable representation of CNN architectures by cumulatively averaging intermediate interpretation maps.
arXiv Detail & Related papers (2023-01-19T13:26:12Z)
- Demystifying CNNs for Images by Matched Filters [13.121514086503591]
Convolutional neural networks (CNNs) have been revolutionising the way we approach and use intelligent machines in the Big Data era.
CNNs have come under scrutiny owing to their black-box nature, as well as the lack of theoretical support and physical meaning of their operations.
This paper attempts to demystify the operation of CNNs by employing the perspective of matched filtering.
arXiv Detail & Related papers (2022-10-16T12:39:17Z)
- What Can Be Learnt With Wide Convolutional Neural Networks? [69.55323565255631]
We study infinitely-wide deep CNNs in the kernel regime.
We prove that deep CNNs adapt to the spatial scale of the target function.
We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN.
arXiv Detail & Related papers (2022-08-01T17:19:32Z)
- Interpretable Compositional Convolutional Neural Networks [20.726080433723922]
We propose a method to modify a traditional convolutional neural network (CNN) into an interpretable compositional CNN.
In a compositional CNN, each filter is supposed to consistently represent a specific compositional object part or image region with a clear meaning.
Our method can be broadly applied to different types of CNNs.
arXiv Detail & Related papers (2021-07-09T15:01:24Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Decoding CNN based Object Classifier Using Visualization [6.666597301197889]
We visualize what types of features are extracted in different convolutional layers of a CNN.
Visualizing heat maps of activations helps us understand how a CNN classifies and localizes different objects in an image.
arXiv Detail & Related papers (2020-07-15T05:01:27Z)
- Transferable Perturbations of Deep Feature Distributions [102.94094966908916]
This work presents a new adversarial attack based on the modeling and exploitation of class-wise and layer-wise deep feature distributions.
We achieve state-of-the-art targeted black-box transfer-based attack results for undefended ImageNet models.
arXiv Detail & Related papers (2020-04-27T00:32:25Z)
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
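The matched-filtering perspective from "Demystifying CNNs for Images by Matched Filters" can be sketched in a few lines: a convolutional filter acts as a template, and its cross-correlation response peaks where the input matches that template. This is a minimal 1-D illustration of the principle, not the paper's actual formulation; the helper name and signal values are assumptions.

```python
import numpy as np

def correlate_valid(signal, template):
    """1-D cross-correlation ('valid' mode), the core of matched filtering."""
    n = len(signal) - len(template) + 1
    return np.array([np.dot(signal[i:i + len(template)], template)
                     for i in range(n)])

# A signal containing a copy of the template starting at position 5.
template = np.array([1.0, 2.0, 1.0])
signal = np.zeros(12)
signal[5:8] = template

response = correlate_valid(signal, template)
print(int(response.argmax()))  # 5
```

The response is maximal exactly where the signal matches the template, which is the sense in which a learned convolutional filter "detects" the pattern it encodes.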
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.