Visualizing Color-wise Saliency of Black-Box Image Classification Models
- URL: http://arxiv.org/abs/2010.02468v1
- Date: Tue, 6 Oct 2020 04:27:18 GMT
- Title: Visualizing Color-wise Saliency of Black-Box Image Classification Models
- Authors: Yuhki Hatakeyama (SenseTime Japan), Hiroki Sakuma (SenseTime Japan),
Yoshinori Konishi (SenseTime Japan), and Kohei Suenaga (Kyoto University)
- Abstract summary: A classification result given by an advanced method, including deep learning, is often hard to interpret.
We propose MC-RISE (Multi-Color RISE), which is an enhancement of RISE to take color information into account in an explanation.
Our method shows not only the saliency of each pixel in a given image, as the original RISE does, but also the significance of the color components of each pixel; a saliency map with color information is especially useful in domains where color information matters.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image classification based on machine learning is being commonly used.
However, a classification result given by an advanced method, including deep
learning, is often hard to interpret. This problem of interpretability is one
of the major obstacles in deploying a trained model in safety-critical systems.
Several techniques have been proposed to address this problem; one of them is
RISE, which explains a classification result by a heatmap, called a saliency
map, that shows the significance of each pixel. We propose MC-RISE
(Multi-Color RISE), an enhancement of RISE that takes color information
into account in an explanation. Our method shows not only the saliency of each
pixel in a given image, as the original RISE does, but also the significance of
the color components of each pixel; a saliency map with color information is
especially useful in domains where color information matters (e.g.,
traffic-sign recognition). We implemented MC-RISE and evaluated it on two
datasets (GTSRB and ImageNet) to demonstrate the effectiveness of our method
in comparison with existing techniques for interpreting image classification
results.
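The RISE procedure the abstract builds on can be summarized as: query the black-box classifier on many randomly masked copies of the image, then average the masks weighted by the resulting class scores. A minimal sketch follows; the `classify` callable, grid size, and nearest-neighbour upsampling via `np.kron` are illustrative assumptions (the original RISE uses bilinear upsampling with random shifts, and MC-RISE additionally replaces masked pixels with predefined colors rather than zeroing them).

```python
import numpy as np

def rise_saliency(image, classify, n_masks=500, grid=7, p=0.5, seed=0):
    """RISE-style Monte-Carlo saliency for one image.

    image    : HxWxC float array
    classify : black-box callable, HxWxC array -> scalar class probability
    Returns an HxW saliency map (higher = more important for the class).
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    cell_h, cell_w = -(-h // grid), -(-w // grid)  # ceil division
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        # Coarse binary keep-mask, upsampled to image size (nearest neighbour).
        coarse = rng.random((grid, grid)) < p
        mask = np.kron(coarse, np.ones((cell_h, cell_w)))[:h, :w]
        score = classify(image * mask[..., None])  # query the black box
        saliency += score * mask                   # weight mask by its score
    return saliency / (n_masks * p)                # normalize by E[mask]
```

MC-RISE generalizes this loop by drawing, for each masked cell, one of K fixed replacement colors instead of black, yielding K per-color saliency maps from the same set of queries.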
Related papers
- Exemplar-Based Image Colorization with A Learning Framework [7.793461393970992]
We propose an automatic colorization method with a learning framework.
It decouples the colorization process and learning process so as to generate various color styles for the same gray image.
It achieves comparable performance against the state-of-the-art colorization algorithms.
arXiv Detail & Related papers (2022-09-13T07:15:25Z)
- ParaColorizer: Realistic Image Colorization using Parallel Generative Networks [1.7778609937758327]
Grayscale image colorization is a fascinating application of AI for information restoration.
We present a parallel GAN-based colorization framework.
We show the shortcomings of the non-perceptual evaluation metrics commonly used to assess multi-modal problems.
arXiv Detail & Related papers (2022-08-17T13:49:44Z)
- Immiscible Color Flows in Optimal Transport Networks for Image Classification [68.8204255655161]
We propose a physics-inspired system that adapts Optimal Transport principles to leverage color distributions of images.
Our dynamics regulates the immiscibility of colors traveling on a network built from images.
Our method outperforms competitor algorithms on image classification tasks in datasets where color information matters.
arXiv Detail & Related papers (2022-05-04T12:41:36Z)
- Transformer with Peak Suppression and Knowledge Guidance for Fine-grained Image Recognition [24.02553270481428]
We propose a transformer architecture with the peak suppression module and knowledge guidance module.
The peak suppression module penalizes the attention to the most discriminative parts in the feature learning process.
The knowledge guidance module compares the image-based representation generated from the peak suppression module with the learnable knowledge embedding set to obtain the knowledge response coefficients.
arXiv Detail & Related papers (2021-07-14T08:07:58Z)
- SCGAN: Saliency Map-guided Colorization with Generative Adversarial Network [16.906813829260553]
We propose a fully automatic Saliency Map-guided Colorization with Generative Adversarial Network (SCGAN) framework.
It jointly predicts the colorization and saliency map to minimize semantic confusion and color bleeding.
Experimental results show that SCGAN can generate more reasonable colorized images than state-of-the-art techniques.
arXiv Detail & Related papers (2020-11-23T13:06:54Z)
- Image Colorization: A Survey and Dataset [94.59768013860668]
This article presents a comprehensive survey of state-of-the-art deep learning-based image colorization techniques.
It categorizes the existing colorization techniques into seven classes and discusses important factors governing their performance.
We perform an extensive experimental evaluation of existing image colorization methods using both existing datasets and our proposed one.
arXiv Detail & Related papers (2020-08-25T01:22:52Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
- Probabilistic Color Constancy [88.85103410035929]
We define a framework for estimating the illumination of a scene by weighting the contribution of different image regions.
The proposed method achieves competitive performance, compared to the state-of-the-art, on INTEL-TAU dataset.
arXiv Detail & Related papers (2020-05-06T11:03:05Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
- Steering Self-Supervised Feature Learning Beyond Local Pixel Statistics [60.92229707497999]
We introduce a novel principle for self-supervised feature learning based on the discrimination of specific transformations of an image.
We demonstrate experimentally that learning to discriminate transformations such as LCI, image warping, and rotations yields features with state-of-the-art generalization capabilities.
arXiv Detail & Related papers (2020-04-05T22:09:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.