One-Pixel Attack Deceives Automatic Detection of Breast Cancer
- URL: http://arxiv.org/abs/2012.00517v2
- Date: Wed, 16 Dec 2020 09:42:34 GMT
- Title: One-Pixel Attack Deceives Automatic Detection of Breast Cancer
- Authors: Joni Korpihalkola, Tuomo Sipola, Samir Puuska, Tero Kokkonen
- Abstract summary: The one-pixel attack is demonstrated in a real-life scenario with a real tumor dataset.
Results indicate that a minor one-pixel modification of a whole slide image under analysis can affect the diagnosis.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article we demonstrate that a state-of-the-art machine learning model
predicting whether a whole slide image contains mitosis can be fooled by
changing just a single pixel in the input image. Computer vision and machine
learning can be used to automate various tasks in cancer diagnostics and
detection. If an attacker can manipulate the automated processing, the results
can be devastating and, in the worst case, lead to wrong diagnoses and
treatments. In this research, the one-pixel attack is demonstrated in a real-life
scenario with a real tumor dataset. The results indicate that a minor one-pixel
modification of a whole slide image under analysis can affect the diagnosis.
The attack poses a threat from the cyber security perspective: the one-pixel
method can be used as an attack vector by a motivated attacker.
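The abstract does not spell out the search procedure, but one-pixel attacks are typically run as a black-box optimization over the pixel's coordinates and colour, for example with differential evolution as in the original one-pixel attack by Su et al. A minimal sketch under that assumption follows; `predict_mitosis` is a hypothetical wrapper returning the classifier's mitosis probability for an image patch and is not part of the paper.

```python
# Minimal one-pixel attack sketch using differential evolution (SciPy).
# Assumption: `predict_mitosis(img)` returns the model's mitosis
# probability for an RGB patch with values in [0, 255]; it is a
# hypothetical stand-in for the whole-slide-image classifier.
import numpy as np
from scipy.optimize import differential_evolution

def apply_pixel(img, xyrgb):
    x, y, r, g, b = xyrgb
    perturbed = img.copy()
    perturbed[int(y), int(x)] = [r, g, b]  # overwrite a single pixel
    return perturbed

def one_pixel_attack(img, predict_mitosis, maxiter=30, popsize=20):
    h, w, _ = img.shape
    # Search space: pixel coordinates plus its new RGB colour.
    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]

    def objective(xyrgb):
        # Minimizing the mitosis score pushes the image toward a
        # "no mitosis" decision.
        return predict_mitosis(apply_pixel(img, xyrgb))

    result = differential_evolution(objective, bounds, maxiter=maxiter,
                                    popsize=popsize, tol=1e-4, seed=0)
    return apply_pixel(img, result.x), result.fun
```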
Related papers
- Fake It Until You Break It: On the Adversarial Robustness of AI-generated Image Detectors [14.284639462471274]
We evaluate state-of-the-art AI-generated image (AIGI) detectors under different attack scenarios.
Attacks can significantly reduce detection accuracy to the extent that the risks of relying on detectors outweigh their benefits.
We propose a simple defense mechanism to make CLIP-based detectors, which are currently the best-performing detectors, robust against these attacks.
arXiv Detail & Related papers (2024-10-02T14:11:29Z)
- On the Detection of Image-Scaling Attacks in Machine Learning [11.103249083138213]
Image scaling is an integral part of machine learning and computer vision systems.
Image-scaling attacks modifying the entire scaled image can be reliably detected even under an adaptive adversary.
We show that our methods provide strong detection performance even if only minor parts of the image are manipulated.
arXiv Detail & Related papers (2023-10-23T16:46:28Z)
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers between developing better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100 000 single cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
arXiv Detail & Related papers (2023-03-15T14:00:11Z)
- Self-Supervised Masked Convolutional Transformer Block for Anomaly Detection [122.4894940892536]
We present a novel self-supervised masked convolutional transformer block (SSMCTB) that comprises the reconstruction-based functionality at a core architectural level.
In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, as well as a novel self-supervised objective based on Huber loss.
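The ingredients listed above can be illustrated with a simplified stand-in: a centre-masked 2D convolution, a lightweight squeeze-and-excitation channel attention in place of the transformer attention, and a Huber reconstruction loss. This is a rough sketch under those simplifications, not the authors' SSMCTB (which uses a 3D masked convolution).

```python
# Simplified 2D stand-in for a masked-convolution reconstruction block
# trained with a Huber loss; not the paper's exact SSMCTB architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedReconstructionBlock(nn.Module):
    def __init__(self, channels, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2)
        # Zero the kernel centre so the block must reconstruct each
        # location from its surrounding context (self-supervised signal).
        mask = torch.ones_like(self.conv.weight)
        c = kernel_size // 2
        mask[:, :, c, c] = 0.0
        self.register_buffer("mask", mask)
        # Lightweight channel attention (squeeze-and-excitation style),
        # standing in for the paper's transformer channel attention.
        hidden = max(channels // 4, 1)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        recon = F.conv2d(x, self.conv.weight * self.mask, self.conv.bias,
                         padding=self.conv.padding)
        recon = recon * self.attn(recon)
        # Self-supervised objective: Huber loss between input and output.
        return recon, F.huber_loss(recon, x)
```

A block like this would typically be dropped after a convolutional stage and its returned loss added to the main training objective.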
arXiv Detail & Related papers (2022-09-25T04:56:10Z)
- FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis [82.2511780233828]
We propose a novel Frequency-Injection based Backdoor Attack method (FIBA) that is capable of delivering attacks in various medical image analysis tasks.
Specifically, FIBA leverages a trigger function in the frequency domain that can inject the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitude of both images.
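As a rough sketch of that mechanism, the function below blends the low-frequency amplitude spectrum of a trigger image into a target image while keeping the target's phase. The blend ratio `alpha`, the low-frequency window `beta`, and the assumption that the trigger has already been resized to the target's shape are illustrative choices, not the paper's exact settings.

```python
# Frequency-domain trigger injection sketch: mix amplitude spectra in a
# centred low-frequency window, keep the target image's phase.
import numpy as np

def inject_frequency_trigger(target, trigger, alpha=0.15, beta=0.1):
    # `trigger` is assumed to have the same shape as `target`.
    target = target.astype(np.float64)
    trigger = trigger.astype(np.float64)

    ft_t = np.fft.fftshift(np.fft.fft2(target, axes=(0, 1)), axes=(0, 1))
    ft_g = np.fft.fftshift(np.fft.fft2(trigger, axes=(0, 1)), axes=(0, 1))
    amp_t, phase_t = np.abs(ft_t), np.angle(ft_t)
    amp_g = np.abs(ft_g)

    # Linearly combine amplitudes inside the low-frequency window only.
    h, w = target.shape[:2]
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    amp_mix = amp_t.copy()
    amp_mix[ch - bh:ch + bh, cw - bw:cw + bw] = (
        (1 - alpha) * amp_t[ch - bh:ch + bh, cw - bw:cw + bw]
        + alpha * amp_g[ch - bh:ch + bh, cw - bw:cw + bw]
    )

    spectrum = amp_mix * np.exp(1j * phase_t)
    poisoned = np.fft.ifft2(np.fft.ifftshift(spectrum, axes=(0, 1)),
                            axes=(0, 1))
    return np.clip(np.real(poisoned), 0, 255).astype(np.uint8)
```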
arXiv Detail & Related papers (2021-12-02T11:52:17Z)
- Intrusion Detection: Machine Learning Baseline Calculations for Image Classification [0.0]
Cyber security can be enhanced through the application of machine learning.
The most promising candidates for consideration are Light Machine, Random Forest Boost, and Extra Trees.
arXiv Detail & Related papers (2021-11-03T17:49:38Z)
- Chromatic and spatial analysis of one-pixel attacks against an image classifier [0.0]
This research presents ways to analyze chromatic and spatial distributions of one-pixel attacks.
We show that the more effective attacks change the color of the pixel more, and that the successful attacks are situated at the center of the images.
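Those two quantities can be made concrete with a small helper that measures, for each attack, the Euclidean colour shift of the modified pixel and its distance from the image centre; the attack record format here is an assumption made purely for illustration.

```python
# Chromatic and spatial statistics of one-pixel attacks (illustrative).
import numpy as np

def attack_statistics(attacks, image_shape):
    """attacks: iterable of dicts with keys 'x', 'y', 'old_rgb', 'new_rgb'."""
    h, w = image_shape[:2]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    stats = []
    for a in attacks:
        # Magnitude of the colour change introduced by the attack.
        colour_shift = np.linalg.norm(
            np.asarray(a["new_rgb"], dtype=float)
            - np.asarray(a["old_rgb"], dtype=float))
        # Distance of the attacked pixel from the image centre.
        centre_dist = np.hypot(a["y"] - cy, a["x"] - cx)
        stats.append((colour_shift, centre_dist))
    return np.array(stats)
```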
arXiv Detail & Related papers (2021-05-28T12:21:58Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
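A minimal sketch of such a patch-level pipeline is shown below; `classify_patch` is a hypothetical stand-in for the CNN, and the plain mean used to aggregate patch scores is a placeholder for the paper's Wide & Deep aggregation network.

```python
# Tile a whole-slide image into patches, classify each, aggregate scores.
import numpy as np

def predict_slide(slide, classify_patch, patch_size=512, stride=512):
    h, w = slide.shape[:2]
    scores = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = slide[y:y + patch_size, x:x + patch_size]
            scores.append(classify_patch(patch))  # patch-level prediction
    # Simple mean aggregation; the paper learns this step instead.
    return float(np.mean(scores)), scores
```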
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Convolutional-LSTM for Multi-Image to Single Output Medical Prediction [55.41644538483948]
A common scenario in developing countries is to have the volume metadata lost due to multiple reasons.
It is possible to obtain a multi-image to single-output diagnostic model which mimics the human doctor's diagnostic process.
arXiv Detail & Related papers (2020-10-20T04:30:09Z)
- Practical Fast Gradient Sign Attack against Mammographic Image Classifier [0.0]
The motivation behind this paper is to emphasize this issue and raise awareness.
We use mammographic images to train our model, then evaluate its performance in terms of accuracy.
We then use the structural similarity index (SSIM) to analyze the similarity between clean and adversarial images.
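A minimal sketch of such an SSIM comparison, using scikit-image, follows; uint8 inputs are assumed (so the data range is inferred from the dtype) and the paper's exact preprocessing is not reproduced.

```python
# Compare a clean image with its adversarial counterpart via SSIM.
from skimage.metrics import structural_similarity as ssim

def adversarial_similarity(clean, adversarial):
    # Use per-channel SSIM for RGB inputs, plain SSIM for grayscale.
    channel_axis = -1 if clean.ndim == 3 else None
    return ssim(clean, adversarial, channel_axis=channel_axis)
```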
arXiv Detail & Related papers (2020-01-27T07:37:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.