Generating Attribution Maps with Disentangled Masked Backpropagation
- URL: http://arxiv.org/abs/2101.06773v1
- Date: Sun, 17 Jan 2021 20:32:14 GMT
- Title: Generating Attribution Maps with Disentangled Masked Backpropagation
- Authors: Adria Ruiz, Antonio Agudo and Francesc Moreno
- Abstract summary: We introduce Disentangled Masked Backpropagation (DMBP) to decompose the model function into different linear mappings.
DMBP generates more visually interpretable attribution maps than previous approaches.
We quantitatively show that the maps produced by our method are more consistent with the true contribution of each pixel to the final network output.
- Score: 22.065454879517326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attribution map visualization has arisen as one of the most effective
techniques to understand the underlying inference process of Convolutional
Neural Networks. In this task, the goal is to compute a score for each image
pixel that reflects its contribution to the final network output. In this paper,
we introduce Disentangled Masked Backpropagation (DMBP), a novel gradient-based
method that leverages the piecewise linear nature of ReLU networks to
decompose the model function into different linear mappings. This decomposition
aims to disentangle the positive, negative and nuisance factors from the
attribution maps by learning a set of variables masking the contribution of
each filter during back-propagation. A thorough evaluation over standard
architectures (ResNet50 and VGG16) and benchmark datasets (PASCAL VOC and
ImageNet) demonstrates that DMBP generates more visually interpretable
attribution maps than previous approaches. Additionally, we quantitatively show
that the maps produced by our method are more consistent with the true
contribution of each pixel to the final network output.
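To make the mechanism concrete, here is a minimal PyTorch sketch of the general idea of masked backpropagation: learnable per-filter variables gate the gradient of each convolution's output while a class score is back-propagated to the input. The hook-based gating, the placeholder objective, and all hyper-parameters are assumptions for illustration only, not the DMBP algorithm or its disentanglement loss.

```python
# Minimal sketch (not the authors' implementation): learnable per-filter masks
# gate the backward pass of a ReLU network; the loss is only a placeholder.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=None).eval()      # load pretrained weights in practice
for p in model.parameters():
    p.requires_grad_(False)

masks = []                                     # one learnable mask per conv filter
for layer in model.features:
    if isinstance(layer, nn.Conv2d):
        m = torch.zeros(1, layer.out_channels, 1, 1, requires_grad=True)
        masks.append(m)

        def fwd_hook(module, inputs, output, m=m):
            # Gate the gradient w.r.t. this layer's output during back-propagation.
            output.register_hook(lambda g: g * torch.sigmoid(m))

        layer.register_forward_hook(fwd_hook)

def attribution(image, target_class):
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    # create_graph=True keeps the map differentiable w.r.t. the masks
    grad, = torch.autograd.grad(score, image, create_graph=True)
    return (grad * image).sum(dim=1)           # gradient-times-input heat map

image = torch.rand(1, 3, 224, 224)             # stand-in for a real image
optimizer = torch.optim.Adam(masks, lr=1e-2)
for _ in range(10):                            # a few illustrative optimisation steps
    heat = attribution(image, target_class=243)
    sparsity = sum(torch.sigmoid(m).sum() for m in masks)
    loss = -heat.clamp(min=0).sum() + 1e-3 * sparsity   # placeholder objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the paper, separate sets of masks are learned to isolate the positive, negative, and nuisance factors; the single set above only illustrates the gating mechanism.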
Related papers
- Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value [86.69600830581912]
We develop a novel visual explanation method called Shap-CAM based on class activation mapping.
We demonstrate that Shap-CAM achieves better visual performance and fairness for interpreting the decision making process.
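One way to picture combining Shapley values with class activation mapping is to score each activation channel by its sampled marginal contribution to the class logit and weight the channels accordingly. The permutation-sampling estimator and the `score_fn` helper below are assumptions, not the paper's exact procedure.

```python
# Rough Monte Carlo sketch of Shapley-value channel weighting for a CAM-style
# map. `score_fn` is assumed to map a (K, H, W) activation tensor, with some
# channels zeroed out, to the target-class score (i.e. the network head).
import torch

def shapley_cam(acts, score_fn, n_samples=50):
    k = acts.shape[0]
    phi = torch.zeros(k)
    for _ in range(n_samples):
        order = torch.randperm(k)              # random channel ordering
        mask = torch.zeros(k, 1, 1)
        prev = score_fn(acts * mask)
        for ch in order:
            mask[ch] = 1.0
            cur = score_fn(acts * mask)
            phi[ch] += cur - prev              # marginal contribution of channel ch
            prev = cur
    phi /= n_samples
    cam = torch.relu((phi.view(k, 1, 1) * acts).sum(dim=0))
    return cam / (cam.max() + 1e-8)            # normalised heat map
```

Upsampling the result to the input resolution then yields a Grad-CAM-like visualisation whose channel weights come from sampled Shapley values rather than gradients.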
arXiv Detail & Related papers (2022-08-07T00:59:23Z)
- Entangled Residual Mappings [59.02488598557491]
We introduce entangled residual mappings to generalize the structure of the residual connections.
An entangled residual mapping replaces the identity skip connections with specialized entangled mappings.
We show that while entangled mappings can preserve the iterative refinement of features across various deep models, they influence the representation learning process in convolutional networks.
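A toy version of replacing the identity skip with a specialized mapping is a residual block whose skip path applies a fixed channel-mixing transform. The orthogonal 1x1 mixing used here is just one assumed choice of "entangled" mapping.

```python
# Toy residual block whose skip path mixes channels instead of copying them.
# The fixed orthogonal 1x1 mixing is an assumed example of an entangled mapping.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntangledResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Skip connection: orthogonal channel mixing rather than the identity.
        q, _ = torch.linalg.qr(torch.randn(channels, channels))
        self.register_buffer("mix", q.reshape(channels, channels, 1, 1))

    def forward(self, x):
        skip = F.conv2d(x, self.mix)           # entangled skip path
        return torch.relu(self.body(x) + skip)

out = EntangledResidualBlock(64)(torch.randn(1, 64, 32, 32))   # (1, 64, 32, 32)
```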
arXiv Detail & Related papers (2022-06-02T19:36:03Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
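The summary does not spell out how the high-fidelity maps are obtained; a simplified way to picture multi-scale back-propagation saliency is to evaluate the input at several resolutions and average the upsampled gradient maps. Treat the scales and the fusion rule below as assumptions rather than CAMERAS itself.

```python
# Simplified multi-scale gradient saliency (an assumed stand-in, not CAMERAS):
# the input is evaluated at several resolutions and the resulting maps are
# upsampled to the native size and averaged.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None).eval()   # load pretrained weights in practice

def multiscale_saliency(img, target, scales=(224, 448, 672)):
    h, w = img.shape[-2:]
    acc = torch.zeros(1, 1, h, w)
    for s in scales:
        x = F.interpolate(img, size=(s, s), mode="bilinear", align_corners=False)
        x.requires_grad_(True)
        score = model(x)[0, target]
        grad, = torch.autograd.grad(score, x)
        sal = (grad * x.detach()).sum(dim=1, keepdim=True).abs()
        acc += F.interpolate(sal, size=(h, w), mode="bilinear", align_corners=False)
    return acc / len(scales)

saliency = multiscale_saliency(torch.rand(1, 3, 224, 224), target=243)
```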
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- DFM: A Performance Baseline for Deep Feature Matching [10.014010310188821]
The proposed method uses a pre-trained VGG architecture as a feature extractor and does not require any additional training specific to the matching task.
Our algorithm achieves 0.57 and 0.80 overall scores in terms of Mean Matching Accuracy (MMA) for 1-pixel and 2-pixel thresholds, respectively, on the HPatches dataset.
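A stripped-down version of training-free matching with a pre-trained VGG can be pictured as dense feature extraction followed by mutual nearest-neighbour matching; DFM's hierarchical coarse-to-fine refinement across VGG stages is omitted here, and the layer choice is an assumption.

```python
# Toy training-free matcher: dense pre-trained VGG features plus mutual
# nearest-neighbour matching. DFM's coarse-to-fine refinement is omitted.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg16(weights=None).features[:16].eval()   # load pretrained weights in practice

def dense_features(img):                       # img: (1, 3, H, W)
    with torch.no_grad():
        f = F.normalize(vgg(img), dim=1)       # (1, C, h, w) unit descriptors
    return f.flatten(2).squeeze(0).t()         # (h*w, C)

def mutual_nn_matches(fa, fb):
    sim = fa @ fb.t()                          # cosine similarity matrix
    ab = sim.argmax(dim=1)                     # best match in b for each a
    ba = sim.argmax(dim=0)                     # best match in a for each b
    idx = torch.arange(fa.shape[0])
    keep = ba[ab] == idx                       # keep mutual agreements only
    return idx[keep], ab[keep]

img_a, img_b = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
ia, ib = mutual_nn_matches(dense_features(img_a), dense_features(img_b))
```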
arXiv Detail & Related papers (2021-06-14T22:55:06Z)
- Image Inpainting with Edge-guided Learnable Bidirectional Attention Maps [85.67745220834718]
We present an edge-guided learnable bidirectional attention map (Edge-LBAM) for improving image inpainting of irregular holes.
Our Edge-LBAM method contains dual procedures, including structure-aware mask updating guided by predicted edges.
Extensive experiments show that our Edge-LBAM is effective in generating coherent image structures and preventing color discrepancy and blurriness.
arXiv Detail & Related papers (2021-04-25T07:25:16Z)
- Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation [0.0]
We propose an enhancement technique for Class Activation Mapping methods such as Grad-CAM or Excitation Backpropagation.
Our idea, called Gradual Extrapolation, can supplement any method that generates a heatmap picture by sharpening the output.
arXiv Detail & Related papers (2021-04-11T07:39:35Z)
- Layer Decomposition Learning Based on Gaussian Convolution Model and Residual Deblurring for Inverse Halftoning [7.462336024223669]
Layer decomposition to separate an input image into base and detail layers has been steadily used for image restoration.
In inverse halftoning, however, homogeneous dot patterns hinder the residual layers, which have a small output range.
A new layer decomposition network based on the Gaussian convolution model (GCM) and structure-aware deblurring strategy is presented.
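The base/detail split can be pictured with a plain Gaussian filter: the blurred image serves as the base layer and the residual as the detail layer. The paper learns this decomposition with a network; the fixed-kernel NumPy version below is only an illustration.

```python
# Fixed-kernel illustration of a base/detail decomposition: a Gaussian-blurred
# copy is the base layer and the residual is the detail layer. The paper learns
# this split with a network; sigma here is an arbitrary choice.
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=2.0):
    base = gaussian_filter(img.astype(np.float64), sigma=sigma)
    detail = img - base                        # high-frequency residual
    return base, detail

halftone = np.random.rand(256, 256)            # stand-in for a halftoned image
base, detail = decompose(halftone)
restored = base + detail                       # exact reconstruction by construction
```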
arXiv Detail & Related papers (2020-12-27T09:15:00Z)
- Convolutional Neural Networks from Image Markers [62.997667081978825]
Feature Learning from Image Markers (FLIM) was recently proposed to estimate convolutional filters, with no backpropagation, from strokes drawn by a user on very few images.
This paper extends FLIM to fully connected layers and demonstrates it on different image classification problems.
The results show that FLIM-based convolutional neural networks can outperform the same architecture trained from scratch by backpropagation.
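The stroke-based, backpropagation-free filter estimation can be pictured as: collect patches centred on marked pixels, normalise them, cluster them, and use the cluster centres as convolution kernels. The patch size, normalisation, and k-means choice below are assumptions, not FLIM's exact recipe.

```python
# Sketch of backprop-free filter estimation from user markers: patches around
# marked pixels are normalised and clustered; cluster centres become kernels.
import numpy as np
from sklearn.cluster import KMeans

def filters_from_markers(image, marker_coords, n_filters=8, patch=5):
    r = patch // 2
    patches = []
    for y, x in marker_coords:
        p = image[y - r:y + r + 1, x - r:x + r + 1]
        if p.shape != (patch, patch):
            continue                           # skip markers too close to the border
        p = p - p.mean()                       # zero-mean patch
        patches.append(p.flatten() / (np.linalg.norm(p) + 1e-8))
    centres = KMeans(n_clusters=n_filters, n_init=10).fit(np.stack(patches)).cluster_centers_
    return centres.reshape(n_filters, patch, patch)

image = np.random.rand(64, 64)                 # stand-in for an annotated image
coords = [(np.random.randint(3, 61), np.random.randint(3, 61)) for _ in range(100)]
kernels = filters_from_markers(image, coords)  # (8, 5, 5) convolution kernels
```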
arXiv Detail & Related papers (2020-12-15T22:58:23Z)
- Learning Propagation Rules for Attribution Map Generation [146.71503336770886]
We propose a dedicated method to generate attribution maps that allow us to learn the propagation rules automatically.
Specifically, we introduce a learnable plugin module, which enables adaptive propagation rules for each pixel.
The introduced learnable module can be trained under any auto-grad framework with higher-order differential support.
arXiv Detail & Related papers (2020-10-14T16:23:58Z)
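One way to picture a learnable, per-pixel propagation rule is a small plugin that predicts a gate from a layer's activation and rescales the gradient flowing back through that layer; training the gate requires differentiating through the backward pass, which is why higher-order autograd support is needed. The module and placeholder objective below are assumptions, not the paper's design.

```python
# Toy per-pixel propagation rule: a 1x1 conv predicts a gate from the layer's
# activation and rescales the gradient on the way back. The objective is a
# placeholder; higher-order autograd (create_graph=True) trains the gate.
import torch
import torch.nn as nn

class PropagationGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):                      # assumes x requires grad (true inside a trainable net)
        g = self.gate(x)                       # per-pixel gate in [0, 1]
        y = x.clone()                          # identity in the forward pass
        y.register_hook(lambda grad: grad * g) # adaptive rule in the backward pass
        return y

net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    PropagationGate(8),
    nn.Conv2d(8, 10, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

x = torch.rand(1, 3, 32, 32, requires_grad=True)
score = net(x)[0, 3]
attr, = torch.autograd.grad(score, x, create_graph=True)  # differentiable attribution map
loss = attr.abs().sum()                        # placeholder training signal
loss.backward()                                # gradients reach the gate parameters
```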
This list is automatically generated from the titles and abstracts of the papers in this site.