Sparse Fooling Images: Fooling Machine Perception through Unrecognizable
Images
- URL: http://arxiv.org/abs/2012.03843v1
- Date: Mon, 7 Dec 2020 16:47:33 GMT
- Title: Sparse Fooling Images: Fooling Machine Perception through Unrecognizable
Images
- Authors: Soichiro Kumano, Hiroshi Kera, Toshihiko Yamasaki
- Abstract summary: We propose a new class of fooling images, sparse fooling images (SFIs), which are single-color images with a small number of altered pixels.
Although unrecognizable to humans, SFIs are recognized as natural objects by DNN classifiers and classified into certain classes with high confidence scores.
This study gives rise to questions on the structure and robustness of CNNs and discusses the differences between human and machine perception.
- Score: 36.42135216182063
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In recent years, deep neural networks (DNNs) have achieved accuracy
equivalent to or even higher than that of humans in various recognition tasks.
However, certain images lead DNNs to completely wrong decisions, whereas humans
are never fooled by them. Among these, fooling images are images that are not
recognizable as natural objects such as dogs and cats, yet DNNs classify them
into certain classes with high confidence scores. In this paper, we propose a
new class of fooling images, sparse fooling images (SFIs), which are
single-color images with a small number of altered pixels. Unlike existing
fooling images, which retain some characteristic features of natural objects,
SFIs have no local or global features that are recognizable to humans; to
machine perception (i.e., DNN classifiers), however, SFIs are recognizable as
natural objects and are classified into certain classes with high confidence
scores. We propose two methods to generate SFIs for different settings
(semi-black-box and white-box). We also experimentally demonstrate the
vulnerability of DNNs through out-of-distribution detection and compare three
architectures in terms of their robustness against SFIs. This study raises
questions about the structure and robustness of CNNs and discusses the
differences between human and machine perception.
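As a rough illustration of the idea (not the paper's actual algorithms), the sketch below constructs an SFI-style candidate in a semi-black-box fashion: start from a single-color image, recolor a small number of pixels at random, and keep whichever candidate the classifier assigns the highest target-class confidence. The classifier, image size, pixel budget, and search loop are all placeholder assumptions.

```python
# Illustrative sketch only: a random-search ("semi-black-box" style) construction of a
# sparse-fooling-image candidate. The classifier is a randomly initialized stand-in,
# not a trained model, and the budget/loop below are assumptions, not the paper's method.
import torch
import torch.nn as nn

model = nn.Sequential(                       # placeholder CIFAR-like classifier
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
).eval()

def confidence(img: torch.Tensor, target: int) -> float:
    """Softmax confidence of the target class for one (1, 3, H, W) image."""
    with torch.no_grad():
        return torch.softmax(model(img), dim=1)[0, target].item()

def sparse_fooling_candidate(target: int, size: int = 32, n_pixels: int = 10,
                             iters: int = 1000, seed: int = 0) -> torch.Tensor:
    """Single-color image plus a few recolored pixels, kept if confidence improves."""
    g = torch.Generator().manual_seed(seed)
    base = torch.full((1, 3, size, size), 0.5)           # single-color background
    best, best_conf = base, confidence(base, target)
    for _ in range(iters):
        cand = base.clone()
        ys = torch.randint(0, size, (n_pixels,), generator=g)
        xs = torch.randint(0, size, (n_pixels,), generator=g)
        colors = torch.rand(n_pixels, 3, generator=g)
        for i in range(n_pixels):                        # recolor a handful of pixels
            cand[0, :, ys[i], xs[i]] = colors[i]
        c = confidence(cand, target)
        if c > best_conf:
            best, best_conf = cand, c
    print(f"best target-class confidence: {best_conf:.3f}")
    return best

sfi = sparse_fooling_candidate(target=3)
```

In the white-box setting mentioned in the abstract, the classifier's gradients would also be available; random search is used here purely to keep the sketch self-contained.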
Related papers
- Development of a Dual-Input Neural Model for Detecting AI-Generated Imagery [0.0]
It is important to develop tools that are able to detect AI-generated images.
This paper proposes a dual-branch neural network architecture that takes both images and their Fourier frequency decomposition as inputs.
Our proposed model achieves an accuracy of 94% on the CIFAKE dataset, which significantly outperforms classic ML methods and CNNs.
arXiv Detail & Related papers (2024-06-19T16:42:04Z)
- Feature CAM: Interpretable AI in Image Classification [2.4409988934338767]
There is a lack of trust in using Artificial Intelligence in critical and high-precision fields such as security, finance, health, and manufacturing.
We introduce a novel technique, Feature CAM, which combines perturbation- and activation-based approaches to create fine-grained, class-discriminative visualizations.
The resulting saliency maps proved to be 3-4 times more interpretable to humans than the state-of-the-art in ABM.
arXiv Detail & Related papers (2024-03-08T20:16:00Z)
- Exploring Geometry of Blind Spots in Vision Models [56.47644447201878]
We study the phenomenon of under-sensitivity in vision models such as CNNs and Transformers.
We propose a Level Set Traversal algorithm that iteratively explores regions of high confidence with respect to the input space.
We estimate the extent of these connected higher-dimensional regions over which the model maintains a high degree of confidence.
arXiv Detail & Related papers (2023-10-30T18:00:33Z)
- Divergences in Color Perception between Deep Neural Networks and Humans [3.0315685825606633]
We develop experiments for evaluating the perceptual coherence of color embeddings in deep neural networks (DNNs).
We assess how well these algorithms predict human color similarity judgments collected via an online survey.
We compare DNN performance against an interpretable and cognitively plausible model of color perception based on wavelet decomposition.
arXiv Detail & Related papers (2023-09-11T20:26:40Z)
- Iris super-resolution using CNNs: is photo-realism important to iris recognition? [67.42500312968455]
Single image super-resolution techniques are emerging, especially with the use of convolutional neural networks (CNNs).
In this work, the authors explore single image super-resolution using CNNs for iris recognition.
They validate their approach on a database of 1,872 near-infrared iris images and on a mobile phone image database.
arXiv Detail & Related papers (2022-10-24T11:19:18Z)
- Visual Recognition with Deep Nearest Centroids [57.35144702563746]
We devise deep nearest centroids (DNC), a conceptually elegant yet surprisingly effective network for large-scale visual recognition.
Compared with parametric counterparts, DNC performs better on image classification (CIFAR-10, ImageNet) and greatly boosts pixel recognition (ADE20K, Cityscapes); see the nearest-centroid sketch after this list.
arXiv Detail & Related papers (2022-09-15T15:47:31Z)
- Do DNNs trained on Natural Images acquire Gestalt Properties? [0.6091702876917281]
Deep Neural Networks (DNNs) trained on natural images have been proposed as compelling models of human vision.
We compared human and DNN responses in discrimination judgments.
We found that networks trained on natural images exhibited sensitivity to shapes at the last stage of classification.
arXiv Detail & Related papers (2022-03-14T17:06:11Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Assessing The Importance Of Colours For CNNs In Object Recognition [70.70151719764021]
Convolutional neural networks (CNNs) have been shown to exhibit conflicting properties.
We demonstrate that CNNs often rely heavily on colour information while making a prediction.
We evaluate a model trained with congruent images on congruent, greyscale, and incongruent images.
arXiv Detail & Related papers (2020-12-12T22:55:06Z)
- The shape and simplicity biases of adversarially robust ImageNet-trained CNNs [9.707679445925516]
We study the shape bias and internal mechanisms that enable the generalizability of AlexNet, GoogLeNet, and ResNet-50 models trained via adversarial training.
Remarkably, adversarial training induces three simplicity biases into hidden neurons in the process of "robustifying" CNNs.
arXiv Detail & Related papers (2020-06-16T16:38:16Z)
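The "Visual Recognition with Deep Nearest Centroids" entry above refers to classification by distance to class centroids in feature space; the sketch below shows that generic nearest-centroid step. It is not the DNC architecture from the paper, and the random "features" merely stand in for a deep network's embeddings.

```python
# Generic nearest-centroid classification over feature vectors (illustrative only;
# the random "features" stand in for embeddings produced by a trained deep network).
import numpy as np

def class_centroids(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Mean feature vector per class. features: (N, D), labels: (N,) -> (C, D)."""
    return np.stack([features[labels == c].mean(axis=0) for c in np.unique(labels)])

def nearest_centroid_predict(features: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Assign each feature vector to the class of its nearest (Euclidean) centroid."""
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(5), 20)                  # 5 classes, 20 samples each
feats = rng.normal(size=(100, 64)) + labels[:, None]  # class-dependent feature offsets
centroids = class_centroids(feats, labels)
print(nearest_centroid_predict(feats[:5], centroids))  # first 5 samples belong to class 0
```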