DANCE: Enhancing saliency maps using decoys
- URL: http://arxiv.org/abs/2002.00526v3
- Date: Mon, 14 Jun 2021 15:31:30 GMT
- Title: DANCE: Enhancing saliency maps using decoys
- Authors: Yang Lu, Wenbo Guo, Xinyu Xing, William Stafford Noble
- Abstract summary: We propose a framework that improves the robustness of saliency methods by following a two-step procedure.
First, we introduce a perturbation mechanism that subtly varies the input sample without changing its intermediate representations.
Second, we compute saliency maps for perturbed samples and propose a new method to aggregate saliency maps.
- Score: 35.46266461621123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Saliency methods can make deep neural network predictions more interpretable
by identifying a set of critical features in an input sample, such as pixels
that contribute most strongly to a prediction made by an image classifier.
Unfortunately, recent evidence suggests that many saliency methods perform
poorly, especially in situations where gradients are saturated, inputs contain
adversarial perturbations, or predictions rely upon inter-feature dependence.
To address these issues, we propose a framework that improves the robustness of
saliency methods by following a two-step procedure. First, we introduce a
perturbation mechanism that subtly varies the input sample without changing its
intermediate representations. Using this approach, we can gather a corpus of
perturbed data samples while ensuring that the perturbed and original input
samples follow the same distribution. Second, we compute saliency maps for the
perturbed samples and propose a new method to aggregate saliency maps. With
this design, we offset the influence of gradient saturation on interpretation.
From a theoretical perspective, we show that the aggregated saliency map not
only captures inter-feature dependence but, more importantly, robustifies
interpretation against previously described adversarial perturbation methods.
Following our theoretical analysis, we present experimental results suggesting
that, both qualitatively and quantitatively, our saliency method outperforms
existing methods.
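The two-step procedure described above can be illustrated on a toy network: perturbing the input inside the null space of the first layer's weights leaves the intermediate representation exactly unchanged (step 1), and saliency maps computed per decoy are then aggregated (step 2). This is a minimal sketch, not the paper's implementation: the two-layer tanh network, the gradient-times-input saliency, and the elementwise-mean aggregation are all simplifying assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network standing in for a trained classifier:
# a tanh layer followed by a linear readout.
W1 = rng.standard_normal((3, 8))   # 3 hidden units, 8 input features
W2 = rng.standard_normal(3)

def hidden(x):
    return np.tanh(W1 @ x)

def predict(x):
    return W2 @ hidden(x)

def grad_times_input(x):
    # A simple saliency map: analytic input gradient of the scalar
    # output, taken elementwise with the input.
    grad = W1.T @ ((1.0 - hidden(x) ** 2) * W2)
    return grad * x

def make_decoys(x, n=20, scale=0.3):
    # Step 1: perturb x inside the null space of W1, so every decoy
    # has exactly the same intermediate representation as x.
    _, _, vt = np.linalg.svd(W1)
    null_basis = vt[W1.shape[0]:]          # (5, 8) basis of null(W1)
    coeffs = scale * rng.standard_normal((n, null_basis.shape[0]))
    return x + coeffs @ null_basis

x = rng.standard_normal(8)
decoys = make_decoys(x)

# Decoys share the intermediate representation (and the prediction) with x.
assert np.allclose(hidden(decoys[0]), hidden(x))

# Step 2: compute a saliency map per decoy and aggregate them
# (elementwise mean here; the paper proposes a more careful statistic).
maps = np.stack([grad_times_input(d) for d in decoys])
aggregated = maps.mean(axis=0)
```

Because the gradient lies in the row space of W1 while the decoys vary only in its null space, every decoy shares the same gradient but a different gradient-times-input map, so the aggregation step has real variation to average over.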
Related papers
- QGait: Toward Accurate Quantization for Gait Recognition with Binarized Input [17.017127559393398]
We propose a differentiable soft quantizer, which better simulates the gradient of the round function during backpropagation.
This enables the network to learn from subtle input perturbations.
We further refine the training strategy to ensure convergence while simulating quantization errors.
arXiv Detail & Related papers (2024-05-22T17:34:18Z) - Implicit Variational Inference for High-Dimensional Posteriors [7.924706533725115]
In variational inference, the benefits of Bayesian models rely on accurately capturing the true posterior distribution.
We propose using neural samplers that specify implicit distributions, which are well-suited for approximating complex multimodal and correlated posteriors.
Our approach introduces novel bounds for approximate inference using implicit distributions by locally linearising the neural sampler.
arXiv Detail & Related papers (2023-10-10T14:06:56Z) - Don't Lie to Me! Robust and Efficient Explainability with Verified
Perturbation Analysis [6.15738282053772]
We introduce EVA -- the first explainability method guaranteed to explore a perturbation space exhaustively.
We leverage the beneficial properties of verified perturbation analysis to efficiently characterize the input variables that are most likely to drive the model decision.
arXiv Detail & Related papers (2022-02-15T21:13:55Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z) - Deep learning: a statistical viewpoint [120.94133818355645]
Deep learning has revealed some major surprises from a theoretical perspective.
In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems.
We conjecture that specific principles underlie these phenomena.
arXiv Detail & Related papers (2021-03-16T16:26:36Z) - Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency
Map Comparison [9.023847175654602]
In this paper we show that arguably neutral baseline images still impact the generated saliency maps and their evaluation with input perturbations.
We experimentally reveal inconsistencies among a selection of input perturbation methods and find that they lack robustness both for generating saliency maps and for evaluating them as saliency metrics.
arXiv Detail & Related papers (2021-01-26T18:11:06Z) - Learning Disentangled Representations with Latent Variation
Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z) - Calibrated Adversarial Refinement for Stochastic Semantic Segmentation [5.849736173068868]
We present a strategy for learning a calibrated predictive distribution over semantic maps, where the probability associated with each prediction reflects its ground truth correctness likelihood.
We demonstrate the versatility and robustness of the approach by achieving state-of-the-art results on the multigrader LIDC dataset and on a modified Cityscapes dataset with injected ambiguities.
We show that the core design can be adapted to other tasks requiring learning a calibrated predictive distribution by experimenting on a toy regression dataset.
arXiv Detail & Related papers (2020-06-23T16:39:59Z) - There and Back Again: Revisiting Backpropagation Saliency Methods [87.40330595283969]
Saliency methods seek to explain the predictions of a model by producing an importance map across each input sample.
A popular class of such methods is based on backpropagating a signal and analyzing the resulting gradient.
We propose a single framework under which several such methods can be unified.
arXiv Detail & Related papers (2020-04-06T17:58:08Z) - Almost-Matching-Exactly for Treatment Effect Estimation under Network
Interference [73.23326654892963]
We propose a matching method that recovers direct treatment effects from randomized experiments where units are connected in an observed network.
Our method matches units almost exactly on counts of unique subgraphs within their neighborhood graphs.
arXiv Detail & Related papers (2020-03-02T15:21:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.