iGOS++: Integrated Gradient Optimized Saliency by Bilateral Perturbations
- URL: http://arxiv.org/abs/2012.15783v1
- Date: Thu, 31 Dec 2020 18:04:12 GMT
- Title: iGOS++: Integrated Gradient Optimized Saliency by Bilateral Perturbations
- Authors: Saeed Khorram, Tyler Lawson, Fuxin Li
- Abstract summary: Saliency maps are widely-used local explanation tools.
We present iGOS++, a framework to generate saliency maps optimized for altering the output of the black-box system.
- Score: 31.72311989250957
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The black-box nature of deep networks makes explaining "why" they
make certain predictions extremely challenging. Saliency maps are one of
the most widely-used local explanation tools to alleviate this problem. One of
the primary approaches for generating saliency maps is by optimizing a mask
over the input dimensions so that the output of the network is influenced the
most by the masking. However, prior work only studies such influence by
removing evidence from the input. In this paper, we present iGOS++, a framework
to generate saliency maps that are optimized for altering the output of the
black-box system by either removing or preserving only a small fraction of the
input. Additionally, we propose to add a bilateral total variation term to the
optimization that improves the continuity of the saliency map especially under
high resolution and with thin object parts. The evaluation results from
comparing iGOS++ against state-of-the-art saliency map methods show significant
improvement in locating salient regions that are directly interpretable by
humans. We utilized iGOS++ in the task of classifying COVID-19 cases from x-ray
images and discovered that the CNN sometimes overfits to characters printed
on the x-ray images when performing classification. Fixing
this issue by data cleansing significantly improved the precision and recall of
the classifier.
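The abstract's core idea, optimizing a mask so that either removing or preserving a small input region changes the black-box output, combined with a bilateral total-variation penalty, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the black-box scorer `f`, the weighting `sigma`, and the coefficient `lam` are hypothetical stand-ins, and the bilateral TV term here only captures the idea of penalizing mask variation more where the image itself is smooth.

```python
import numpy as np

def bilateral_tv(mask, image, sigma=0.1):
    """Total-variation penalty on the mask, down-weighted across strong
    image edges (a sketch of the bilateral idea; sigma is hypothetical)."""
    dmx = np.abs(np.diff(mask, axis=0))   # vertical mask differences
    dmy = np.abs(np.diff(mask, axis=1))   # horizontal mask differences
    dix = np.abs(np.diff(image, axis=0))  # matching image differences
    diy = np.abs(np.diff(image, axis=1))
    wx = np.exp(-dix / sigma)             # small weight where the image has edges
    wy = np.exp(-diy / sigma)
    return float((wx * dmx).sum() + (wy * dmy).sum())

def igos_objective(mask, image, baseline, f, lam=1.0):
    """Joint deletion + preservation objective with bilateral TV.
    f is a stand-in for the black-box class score; lam is hypothetical."""
    deleted = mask * baseline + (1 - mask) * image     # evidence removed where mask=1
    preserved = mask * image + (1 - mask) * baseline   # only masked evidence kept
    deletion_loss = f(deleted)        # should drop when salient evidence is removed
    preservation_loss = -f(preserved) # should stay high when salient evidence is kept
    return deletion_loss + preservation_loss + lam * bilateral_tv(mask, image)
```

In practice the mask would be optimized by gradient descent on this objective; the sketch only evaluates it for a given mask.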
Related papers
- Addressing a fundamental limitation in deep vision models: lack of spatial attention [43.37813040320147]
The aim of this manuscript is to underscore a significant limitation in current deep learning models, particularly vision models.
Unlike human vision, which efficiently selects only the essential visual areas for further processing, deep vision models process the entire image.
We propose two solutions that could pave the way for the next generation of more efficient vision models.
arXiv Detail & Related papers (2024-07-01T20:21:09Z)
- Hi-Map: Hierarchical Factorized Radiance Field for High-Fidelity Monocular Dense Mapping [51.739466714312805]
We introduce Hi-Map, a novel monocular dense mapping approach based on Neural Radiance Field (NeRF)
Hi-Map is exceptional in its capacity to achieve efficient and high-fidelity mapping using only posed RGB inputs.
arXiv Detail & Related papers (2024-01-06T12:32:25Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- High-fidelity Pseudo-labels for Boosting Weakly-Supervised Segmentation [17.804090651425955]
Image-level weakly-supervised segmentation (WSSS) reduces the usually vast data annotation cost by surrogate segmentation masks during training.
Our work is based on two techniques for improving CAMs; importance sampling, which is a substitute for GAP, and the feature similarity loss.
We reformulate both techniques based on binomial posteriors of multiple independent binary problems.
This has two benefits; their performance is improved and they become more general, resulting in an add-on method that can boost virtually any WSSS method.
arXiv Detail & Related papers (2023-04-05T17:43:57Z)
- FORBID: Fast Overlap Removal By stochastic gradIent Descent for Graph Drawing [1.1470070927586014]
Overlaps between nodes can hinder graph visualization readability.
Overlap Removal (OR) algorithms have been proposed as layout post-processing.
We propose a novel gradient-based approach that models OR as a joint stress and scaling optimization problem.
arXiv Detail & Related papers (2022-08-19T13:51:44Z)
- Abs-CAM: A Gradient Optimization Interpretable Approach for Explanation of Convolutional Neural Networks [7.71412567705588]
Class activation mapping-based method has been widely used to interpret the internal decisions of models in computer vision tasks.
We propose an Absolute value Class Activation Mapping-based (Abs-CAM) method, which optimizes the gradients derived from backpropagation.
The framework of Abs-CAM is divided into two phases: generating initial saliency map and generating final saliency map.
arXiv Detail & Related papers (2022-07-08T02:06:46Z)
- Unpaired Image Super-Resolution with Optimal Transport Maps [128.1189695209663]
Real-world image super-resolution (SR) tasks often do not have paired datasets limiting the application of supervised techniques.
We propose an algorithm for unpaired SR which learns an unbiased OT map for the perceptual transport cost.
Our algorithm provides nearly state-of-the-art performance on the large-scale unpaired AIM-19 dataset.
arXiv Detail & Related papers (2022-02-02T16:21:20Z)
- Generative Modeling with Optimal Transport Maps [83.59805931374197]
Optimal Transport (OT) has become a powerful tool for large-scale generative modeling tasks.
We show that the OT map itself can be used as a generative model, providing comparable performance.
arXiv Detail & Related papers (2021-10-06T18:17:02Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- Rethinking Localization Map: Towards Accurate Object Perception with Self-Enhancement Maps [78.2581910688094]
This work introduces a novel self-enhancement method to harvest accurate object localization maps and object boundaries with only category labels as supervision.
In particular, the proposed Self-Enhancement Maps achieve the state-of-the-art localization accuracy of 54.88% on ILSVRC.
arXiv Detail & Related papers (2020-06-09T12:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.