Chromatic and spatial analysis of one-pixel attacks against an image
classifier
- URL: http://arxiv.org/abs/2105.13771v1
- Date: Fri, 28 May 2021 12:21:58 GMT
- Title: Chromatic and spatial analysis of one-pixel attacks against an image
classifier
- Authors: Janne Alatalo, Joni Korpihalkola, Tuomo Sipola, Tero Kokkonen
- Abstract summary: This research presents ways to analyze chromatic and spatial distributions of one-pixel attacks.
We show that the more effective attacks change the color of the pixel more, and that the successful attacks are situated at the center of the images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The one-pixel attack is a curious way of deceiving a neural network classifier by changing only one pixel in the input image. The full potential and boundaries of this attack method are not yet fully understood. In this research, successful and unsuccessful attacks are studied in more detail to illustrate the working mechanisms of a one-pixel attack. The data comes from our earlier studies, where we applied the attack against medical imaging. We used a real breast cancer tissue dataset and a real classifier as the attack target. This research presents ways to analyze the chromatic and spatial distributions of one-pixel attacks. In addition, we present one-pixel attack confidence maps to illustrate the behavior of the target classifier. We show that the more effective attacks change the color of the pixel more, and that the successful attacks are situated at the center of the images. This kind of analysis is useful not only for understanding the behavior of the attack but also for understanding the qualities of the classifying neural network.
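As a rough illustration of the confidence-map idea described in the abstract, the sketch below probes a classifier by overwriting one pixel at a time and recording the target-class probability at each position. It is a minimal sketch, not the authors' exact procedure: the `classify` callable, the probe color, and the stride are assumptions made for illustration.

```python
# Minimal sketch of a one-pixel confidence map (illustrative only).
# Assumption: `classify` is any callable that maps an HxWx3 float image in
# [0, 1] to the target-class probability.
import numpy as np

def one_pixel_confidence_map(image, classify, probe_color=(1.0, 0.0, 0.0), stride=4):
    """Overwrite one pixel at a time and record the classifier's confidence."""
    h, w, _ = image.shape
    rows = range(0, h, stride)
    cols = range(0, w, stride)
    conf_map = np.zeros((len(rows), len(cols)))
    for i, y in enumerate(rows):
        for j, x in enumerate(cols):
            perturbed = image.copy()
            perturbed[y, x] = probe_color   # change exactly one pixel
            conf_map[i, j] = classify(perturbed)
    return conf_map

# Positions where the map drops far below the clean-image score are candidate
# locations for a successful one-pixel attack:
# drop = classify(image) - one_pixel_confidence_map(image, classify)
```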
Related papers
- Pixle: a fast and effective black-box attack based on rearranging pixels [15.705568893476947]
Black-box adversarial attacks can be performed without knowing the inner structure of the attacked model.
We propose a novel attack that is capable of correctly attacking a high percentage of samples by rearranging a small number of pixels within the attacked image.
We demonstrate that our attack works on a large number of datasets and models, that it requires a small number of iterations, and that the distance between the original sample and the adversarial one is negligible to the human eye.
arXiv Detail & Related papers (2022-02-04T17:03:32Z) - Towards A Conceptually Simple Defensive Approach for Few-shot
classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z) - Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via deeper downstream effects they have on victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z) - Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z) - Deep neural network loses attention to adversarial images [11.650381752104296]
Adversarial algorithms have been shown to be effective against neural networks for a variety of tasks.
We show that in the case of Pixel Attack, perturbed pixels either draw the network's attention to themselves or divert attention away from them.
We also show that both attacks affect the saliency map and activation maps differently.
arXiv Detail & Related papers (2021-06-10T11:06:17Z) - PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack [37.15301296824337]
We propose a pixel correlation-based attentional black-box adversarial attack, termed PICA.
PICA is more efficient at generating high-resolution adversarial examples than existing black-box attacks.
arXiv Detail & Related papers (2021-01-19T09:53:52Z) - One-Pixel Attack Deceives Automatic Detection of Breast Cancer [0.0]
One-pixel attack is demonstrated in a real-life scenario with a real tumor dataset.
Results indicate that a minor one-pixel modification of a whole slide image under analysis can affect the diagnosis.
arXiv Detail & Related papers (2020-12-01T14:27:28Z) - Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defended models.
We significantly improve the success rate by 9.2% for defended models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z) - Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z) - Evading Deepfake-Image Detectors with White- and Black-Box Attacks [75.13740810603686]
A popular forensic approach trains a neural network to distinguish real from synthetic content.
We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators.
We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22.
arXiv Detail & Related papers (2020-04-01T17:59:59Z) - Adversarial Attacks on Convolutional Neural Networks in Facial
Recognition Domain [2.4704085162861693]
Adversarial attacks that render Deep Neural Network (DNN) classifiers vulnerable in real life represent a serious threat to autonomous vehicles, malware filters, and biometric authentication systems.
We apply the Fast Gradient Sign Method (FGSM) to introduce perturbations into a facial image dataset and then test the output on a different classifier; a minimal FGSM sketch follows this list.
We craft a variety of black-box attack algorithms on a facial image dataset assuming minimal adversarial knowledge.
arXiv Detail & Related papers (2020-01-30T00:25:05Z)
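For reference, the Fast Gradient Sign Method mentioned in the last entry above perturbs an input along the sign of the loss gradient. The snippet below is a minimal sketch under assumed conditions (a PyTorch classifier `model`, image tensors in [0, 1], integer class labels, and an arbitrary epsilon), not the cited paper's exact setup.

```python
# Minimal FGSM sketch (illustrative; assumes a PyTorch classifier `model`,
# an image batch in [0, 1], and integer class labels).
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    """Return adversarial images: x + epsilon * sign(dL/dx), clipped to [0, 1]."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```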
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.