Practical Fast Gradient Sign Attack against Mammographic Image
Classifier
- URL: http://arxiv.org/abs/2001.09610v1
- Date: Mon, 27 Jan 2020 07:37:07 GMT
- Title: Practical Fast Gradient Sign Attack against Mammographic Image
Classifier
- Authors: Ibrahim Yilmaz
- Abstract summary: The motivation behind this paper is to emphasize this issue and raise awareness.
We use mammographic images to train our model and then evaluate its performance in terms of accuracy.
We then use the structural similarity index (SSIM) to analyze the similarity between clean images and adversarial images.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) has been a topic of major research for many
years. Especially with the emergence of deep neural networks (DNNs), these
studies have been tremendously successful. Today, machines are capable of making
faster, more accurate decisions than humans. Thanks to the great development of
machine learning (ML) techniques, ML has been applied in many different fields
such as education, medicine, malware detection, and autonomous driving. In spite
of this degree of interest and much successful research, ML models are still
vulnerable to adversarial attacks. Attackers can manipulate clean data in order
to fool ML classifiers and achieve their desired target. For instance, a benign
sample can be modified to appear malicious, or a malicious one can be altered to
appear benign, while the modification cannot be recognized by a human observer.
This can lead to financial losses, serious injuries, and even deaths. The
motivation behind this paper is to emphasize this issue and raise awareness.
Therefore, we demonstrate the security gap of a mammographic image classifier
against adversarial attack. We use mammographic images to train our model and
then evaluate its performance in terms of accuracy. Later on, we poison the
original dataset and generate adversarial samples that are misclassified by the
model. We then use the structural similarity index (SSIM) to analyze the
similarity between clean images and adversarial images. Finally, we show how
successful the attack is under different poisoning factors.
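As a rough illustration of the pipeline the abstract describes (not the authors' released code), the sketch below crafts FGSM adversarial samples from a trained classifier and compares them to the clean images with SSIM. The PyTorch model `model`, the image tensor `x` in [0, 1], the labels `y`, and the perturbation budget `epsilon` (the "poisoning factor") are assumed placeholders.

```python
# Minimal FGSM + SSIM sketch (illustrative only, not the paper's implementation).
import torch
import torch.nn.functional as F
from skimage.metrics import structural_similarity as ssim

def fgsm_attack(model, x, y, epsilon):
    """Generate adversarial images with the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def ssim_to_clean(clean, adv):
    """SSIM between one clean image and its adversarial counterpart."""
    return ssim(clean.squeeze().cpu().numpy(),
                adv.squeeze().cpu().numpy(),
                data_range=1.0)
```

In general, larger epsilon values make misclassification more likely while lowering the SSIM between clean and adversarial images, which is the trade-off that varying the poisoning factor explores.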
Related papers
- How Aligned are Generative Models to Humans in High-Stakes Decision-Making? [10.225573060836478]
Large generative models (LMs) are increasingly being considered for high-stakes decision-making.
This work considers how such models compare to humans and predictive AI models on a specific case of recidivism prediction.
arXiv Detail & Related papers (2024-10-20T19:00:59Z) - Adversarial Attacks and Dimensionality in Text Classifiers [3.4179091429029382]
Adversarial attacks on machine learning algorithms have been a key deterrent to the adoption of AI in many real-world use cases.
We study adversarial examples in the field of natural language processing, specifically text classification tasks.
arXiv Detail & Related papers (2024-04-03T11:49:43Z) - Susceptibility of Adversarial Attack on Medical Image Segmentation
Models [0.0]
We investigate the effect of adversarial attacks on segmentation models trained on MRI datasets.
We find that medical imaging segmentation models are indeed vulnerable to adversarial attacks.
We show that using a different loss function than the one used for training yields higher adversarial attack success.
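A hedged illustration of that last observation (not code from the cited paper): the sketch below perturbs inputs to a segmentation model using a pixel-wise cross-entropy attack loss, on the assumption that the model was trained with a different objective (e.g., a Dice loss). `seg_model`, the image batch `x` in [0, 1], the ground-truth mask `mask`, and `epsilon` are placeholders.

```python
# Illustrative only: attack a segmentation model with a loss that differs from
# the one used for training (e.g., trained with Dice, attacked via cross-entropy).
import torch
import torch.nn.functional as F

def attack_with_mismatched_loss(seg_model, x, mask, epsilon):
    x_adv = x.clone().detach().requires_grad_(True)
    logits = seg_model(x_adv)                    # (N, C, H, W) class scores
    attack_loss = F.cross_entropy(logits, mask)  # not the training loss
    attack_loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```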
arXiv Detail & Related papers (2024-01-20T12:52:20Z) - Steganographic Capacity of Deep Learning Models [12.974139332068491]
We consider the steganographic capacity of several learning models.
We train a Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), and Transformer model on a challenging malware classification problem.
We find that the steganographic capacity of the learning models tested is surprisingly high, and that in each case, there is a clear threshold after which model performance rapidly degrades.
arXiv Detail & Related papers (2023-06-25T13:43:35Z) - Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring the changes in the data manipulation to that in the model outputs, Memorization Discrepancy can discover the imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z) - Mask and Restore: Blind Backdoor Defense at Test Time with Masked
Autoencoder [57.739693628523]
We propose a framework for blind backdoor defense with Masked AutoEncoder (BDMAE).
BDMAE detects possible triggers in the token space using image structural similarity and label consistency between the test image and MAE restorations.
Our approach is blind to the model restorations, trigger patterns and image benignity.
arXiv Detail & Related papers (2023-03-27T19:23:33Z) - FIBA: Frequency-Injection based Backdoor Attack in Medical Image
Analysis [82.2511780233828]
We propose a novel Frequency-Injection based Backdoor Attack method (FIBA) that is capable of delivering attacks in various medical image analysis tasks.
Specifically, FIBA leverages a trigger function in the frequency domain that can inject the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitude of both images.
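A rough sketch of that frequency-domain mixing step (an assumption-laden illustration, not the FIBA implementation): the low-frequency spectral amplitude of a trigger image is linearly blended into a clean image's spectrum, and the clean image's phase is reused (the phase handling, the blend ratio `alpha`, and the window size `k` are assumptions here).

```python
# Illustrative frequency-injection sketch (not the FIBA authors' code).
import numpy as np

def inject_low_freq(clean, trigger, alpha=0.15, k=32):
    """clean, trigger: 2-D float arrays of the same shape, values in [0, 1]."""
    fc = np.fft.fftshift(np.fft.fft2(clean))
    ft = np.fft.fftshift(np.fft.fft2(trigger))
    amp_c, phase_c = np.abs(fc), np.angle(fc)
    amp_t = np.abs(ft)

    # Blend amplitudes only inside a centered low-frequency window.
    h, w = clean.shape
    cy, cx = h // 2, w // 2
    amp_mix = amp_c.copy()
    amp_mix[cy - k:cy + k, cx - k:cx + k] = (
        (1 - alpha) * amp_c[cy - k:cy + k, cx - k:cx + k]
        + alpha * amp_t[cy - k:cy + k, cx - k:cx + k]
    )

    poisoned = np.fft.ifft2(np.fft.ifftshift(amp_mix * np.exp(1j * phase_c)))
    return np.clip(poisoned.real, 0.0, 1.0)
```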
arXiv Detail & Related papers (2021-12-02T11:52:17Z) - Multi-concept adversarial attacks [13.538643599990785]
Test time attacks targeting a single ML model often neglect their impact on other ML models.
We develop novel attack techniques that can simultaneously attack one set of ML models while preserving the accuracy of the other.
arXiv Detail & Related papers (2021-10-19T22:14:19Z) - Simulated Adversarial Testing of Face Recognition Models [53.10078734154151]
We propose a framework for learning how to test machine learning algorithms using simulators in an adversarial manner.
We are the first to show that weaknesses of models trained on real data can be discovered using simulated samples.
arXiv Detail & Related papers (2021-06-08T17:58:10Z) - Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leads to high-frequency artifacts that can be easily spotted by models yet difficult to generalize, we propose a new adversarial training method that attempts to blur out these specific artifacts.
arXiv Detail & Related papers (2021-03-25T02:20:08Z) - Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp
Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images and achieves robustness comparable to standard adversarial training against Lp attacks.
arXiv Detail & Related papers (2020-09-05T06:00:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.