Learning Propagation Rules for Attribution Map Generation
- URL: http://arxiv.org/abs/2010.07210v1
- Date: Wed, 14 Oct 2020 16:23:58 GMT
- Title: Learning Propagation Rules for Attribution Map Generation
- Authors: Yiding Yang, Jiayan Qiu, Mingli Song, Dacheng Tao, Xinchao Wang
- Abstract summary: We propose a dedicated method to generate attribution maps that allow us to learn the propagation rules automatically.
Specifically, we introduce a learnable plugin module, which enables adaptive propagation rules for each pixel.
The introduced learnable module can be trained under any auto-grad framework with higher-order differential support.
- Score: 146.71503336770886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior gradient-based attribution-map methods rely on handcrafted propagation
rules for the non-linear/activation layers during the backward pass, so as to
produce gradients of the input and then the attribution map. Despite the
promising results achieved, such methods are sensitive to the non-informative
high-frequency components and lack adaptability for various models and samples.
In this paper, we propose a dedicated method to generate attribution maps that
allow us to learn the propagation rules automatically, overcoming the flaws of
the handcrafted ones. Specifically, we introduce a learnable plugin module,
which enables adaptive propagation rules for each pixel, to the non-linear
layers during the backward pass for mask generating. The masked input image is
then fed into the model again to obtain new output that can be used as a
guidance when combined with the original one. The introduced learnable module
can be trained under any auto-grad framework with higher-order differential
support. As demonstrated on five datasets and six network architectures, the
proposed method yields state-of-the-art results and gives cleaner and more
visually plausible attribution maps.
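The core idea of the abstract can be illustrated with a minimal NumPy sketch: the standard ReLU backward rule gates the gradient with a hard 0/1 mask, while a learnable rule replaces that gate with a per-pixel soft gate whose parameters could be trained. This is an illustrative toy, not the paper's implementation; `theta`, the single-layer model, and the gradient-times-input attribution are all assumptions made for the sketch.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))        # toy "image" input
W = rng.standard_normal((8, 8)) * 0.1  # a single linear layer

# Forward pass: linear -> ReLU -> scalar score.
h = x @ W
score = relu(h).sum()

# Handcrafted rule: the standard ReLU backward pass gates the
# gradient of the score with the hard 0/1 mask (h > 0).
grad_handcrafted = (h > 0).astype(float) @ W.T

# Learnable rule: replace the hard gate with a per-pixel sigmoid gate
# parameterized by theta (hypothetical parameters; in the paper these
# come from the plugin module and are trained end to end).
theta = np.zeros_like(h)
gate = sigmoid(theta)                  # soft gate in (0, 1) for every pixel
grad_learned = gate @ W.T

# Gradient-times-input attribution, normalized into a soft mask.
attribution = np.abs(grad_learned * x)
mask = attribution / (attribution.max() + 1e-12)

# The masked image is fed through the model again; comparing the new
# score with the original one provides the training signal for theta,
# which requires higher-order differentiation (omitted in this sketch).
x_masked = x * mask
score_masked = relu(x_masked @ W).sum()
```

Training `theta` through the mask is what requires an auto-grad framework with higher-order differential support: the mask is itself a function of a gradient, so the loss gradient with respect to `theta` is a derivative of a derivative.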
Related papers
- DiffusionMat: Alpha Matting as Sequential Refinement Learning [87.76572845943929]
DiffusionMat is an image matting framework that employs a diffusion model for the transition from coarse to refined alpha mattes.
A correction module adjusts the output at each denoising step, ensuring that the final result is consistent with the input image's structures.
We evaluate our model across several image matting benchmarks, and the results indicate that DiffusionMat consistently outperforms existing methods.
arXiv Detail & Related papers (2023-11-22T17:16:44Z)
- End-to-End Diffusion Latent Optimization Improves Classifier Guidance [81.27364542975235]
Direct Optimization of Diffusion Latents (DOODL) is a novel guidance method.
It enables plug-and-play guidance by optimizing diffusion latents.
It outperforms one-step classifier guidance on computational and human evaluation metrics.
arXiv Detail & Related papers (2023-03-23T22:43:52Z)
- Gradient-Based Adversarial and Out-of-Distribution Detection [15.510581400494207]
We introduce confounding labels in gradient generation to probe the effective expressivity of neural networks.
We show that our gradient-based approach allows for capturing the anomaly in inputs based on the effective expressivity of the models.
arXiv Detail & Related papers (2022-06-16T15:50:41Z)
- Neural Jacobian Fields: Learning Intrinsic Mappings of Arbitrary Meshes [38.157373733083894]
This paper introduces a framework designed to accurately predict piecewise linear mappings of arbitrary meshes via a neural network.
The framework is based on reducing the neural aspect to a prediction of a matrix for a single point, conditioned on a global shape descriptor.
By operating in the intrinsic gradient domain of each individual mesh, it allows the framework to predict highly-accurate mappings.
arXiv Detail & Related papers (2022-05-05T19:51:13Z)
- Revealing and Protecting Labels in Distributed Training [3.18475216176047]
We propose a method to discover the set of labels of training samples from only the gradient of the last layer and the id-to-label mapping.
We demonstrate the effectiveness of our method for model training in two domains - image classification, and automatic speech recognition.
arXiv Detail & Related papers (2021-10-31T17:57:49Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation [0.0]
We propose an enhancement technique of the Class Activation Mapping methods like Grad-CAM or Excitation Backpropagation.
Our idea, called Gradual Extrapolation, can supplement any method that generates a heatmap picture by sharpening the output.
arXiv Detail & Related papers (2021-04-11T07:39:35Z)
- Generating Attribution Maps with Disentangled Masked Backpropagation [22.065454879517326]
We introduce Disentangled Masked Backpropagation (DMBP) to decompose the model function into different linear mappings.
DMBP generates more visually interpretable attribution maps than previous approaches.
We quantitatively show that the maps produced by our method are more consistent with the true contribution of each pixel to the final network output.
arXiv Detail & Related papers (2021-01-17T20:32:14Z)
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
- Embedding Propagation: Smoother Manifold for Few-Shot Classification [131.81692677836202]
We propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification.
We empirically show that embedding propagation yields a smoother embedding manifold.
We show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
arXiv Detail & Related papers (2020-03-09T13:51:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.