The Security of Deep Learning Defences for Medical Imaging
- URL: http://arxiv.org/abs/2201.08661v1
- Date: Fri, 21 Jan 2022 12:11:17 GMT
- Title: The Security of Deep Learning Defences for Medical Imaging
- Authors: Moshe Levy, Guy Amit, Yuval Elovici, Yisroel Mirsky
- Abstract summary: We show that an informed attacker can evade five of the current state-of-the-art defences while successfully fooling the victim's deep learning model.
We suggest better alternatives for securing healthcare DNNs from such attacks: (1) harden the system's security and (2) use digital signatures.
- Score: 36.060636819669604
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning has shown great promise in the domain of medical image
analysis. Medical professionals and healthcare providers have been adopting the
technology to speed up and enhance their work. These systems use deep neural
networks (DNNs), which are vulnerable to adversarial samples: images with
imperceptible changes that can alter the model's prediction. Researchers have
proposed defences which either make a DNN more robust or detect adversarial
samples before they do harm. However, none of these works considers an informed
attacker who can adapt to the defence mechanism. We show that an informed
attacker can evade five of the current state-of-the-art defences while
successfully fooling the victim's deep learning model, rendering these defences
useless. We then suggest better alternatives for securing healthcare DNNs from
such attacks: (1) harden the system's security and (2) use digital signatures.
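As a rough illustration of alternative (2), the sketch below signs a scan's raw bytes at acquisition time and verifies the signature before the image is handed to the DNN, so any post-acquisition tampering, including adversarial perturbations, is rejected. This is not the paper's implementation; it assumes the Python `cryptography` package with Ed25519 keys, and the helper names (`sign_scan`, `verify_scan`) are hypothetical.
```python
# Illustrative only: sign each scan at acquisition time and verify the
# signature before the image is handed to the DNN. Assumes the `cryptography`
# package; key management is deliberately simplified.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_scan(private_key: Ed25519PrivateKey, image_bytes: bytes) -> bytes:
    """Sign the raw image bytes (e.g. on the imaging device)."""
    return private_key.sign(image_bytes)


def verify_scan(public_key: Ed25519PublicKey, image_bytes: bytes,
                signature: bytes) -> bool:
    """Return True only if the bytes are exactly what was signed."""
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    scan = b"raw pixel data from the scanner"     # placeholder image bytes
    sig = sign_scan(key, scan)

    tampered = scan + b"!"                        # any change breaks the check
    print(verify_scan(key.public_key(), scan, sig))      # True
    print(verify_scan(key.public_key(), tampered, sig))  # False
```
In a realistic deployment the private key would live on the imaging device or acquisition gateway, not alongside the model, so that the verification step actually constrains where images can be modified.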
Related papers
- Securing the Diagnosis of Medical Imaging: An In-depth Analysis of AI-Resistant Attacks [0.0]
It is well known that attackers can cause misclassification by deliberately crafting inputs for machine learning classifiers.
Recent work has argued that such adversarial attacks could also be mounted against medical image analysis technologies.
It is therefore essential to assess how robust medical DNNs are against adversarial attacks.
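For readers unfamiliar with how such inputs are crafted, here is a minimal FGSM-style sketch (a standard textbook attack, not taken from this paper or the one above). It assumes a PyTorch classifier and a batched image tensor with pixel values in [0, 1]; the function name is illustrative.
```python
# Minimal FGSM-style sketch (standard attack, for illustration only).
import torch
import torch.nn.functional as F


def fgsm_example(model, image, label, eps=2 / 255):
    """Return an adversarial copy of `image` within an L-infinity budget eps."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss the most.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```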
arXiv Detail & Related papers (2024-08-01T07:37:27Z)
- Efficient Defense Against Model Stealing Attacks on Convolutional Neural Networks [0.548924822963045]
Model stealing attacks can lead to intellectual property theft and other security and privacy risks.
Current state-of-the-art defenses against model stealing attacks suggest adding perturbations to the prediction probabilities.
We propose a simple yet effective and efficient defense alternative.
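A loose sketch of the perturbation idea (not the specific defense proposed in the paper): add noise to the returned probability vector while keeping the top-1 class intact, so benign users see the same label but an attacker distilling the model receives noisier supervision. PyTorch is assumed and `perturb_posteriors` is a hypothetical helper.
```python
# Loose sketch: add noise to the returned probabilities but keep the top-1
# class unchanged, so benign accuracy is preserved while the full posterior
# leaks less information to a model-stealing attacker.
import torch


def perturb_posteriors(probs: torch.Tensor, noise_scale: float = 0.2) -> torch.Tensor:
    """Perturb a (batch, classes) tensor of probabilities, preserving argmax."""
    probs = probs.detach()
    top1 = probs.argmax(dim=-1)
    noisy = probs + noise_scale * torch.rand_like(probs)
    noisy = noisy / noisy.sum(dim=-1, keepdim=True)   # renormalise to sum to 1
    # If the noise flipped the predicted class, fall back to the original row.
    flipped = noisy.argmax(dim=-1) != top1
    noisy[flipped] = probs[flipped]
    return noisy
```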
arXiv Detail & Related papers (2023-09-04T22:25:49Z)
- ATWM: Defense against adversarial malware based on adversarial training [16.16005518623829]
Deep learning models are vulnerable to adversarial example attacks.
This paper proposes an adversarial malware defense method based on adversarial training.
The results show that the proposed method improves the model's adversarial robustness without reducing its accuracy.
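As a generic illustration of adversarial training (ATWM itself operates on malware features and differs in detail), a single training step might interleave attack crafting and model updates roughly as follows, assuming a PyTorch model and optimizer:
```python
# Generic adversarial-training step, illustrative only; ATWM itself targets
# malware feature vectors and differs in detail.
import torch
import torch.nn.functional as F


def adversarial_training_step(model, optimizer, x, y, eps=0.05):
    """One update on a batch plus its FGSM-perturbed counterpart."""
    # Inner step: craft adversarial examples for the current batch.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

    # Outer step: train on clean and adversarial samples together.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```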
arXiv Detail & Related papers (2023-07-11T08:07:10Z)
- The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks [91.56314751983133]
$A^5$ is a framework for crafting a defensive perturbation that guarantees any attack against the input at hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground-truth label.
We also show how to apply $A^5$ to create certifiably robust physical objects.
arXiv Detail & Related papers (2023-05-23T16:07:58Z)
- Visually Adversarial Attacks and Defenses in the Physical World: A Survey [27.40548512511512]
Current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their attack form.
This paper presents a survey of current physical adversarial attacks and physical adversarial defenses in computer vision.
arXiv Detail & Related papers (2022-11-03T09:28:45Z)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
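To make the fallback idea concrete, here is a minimal randomized-smoothing-style sketch (not the certifiers evaluated in the paper): the classifier abstains when noisy votes do not agree strongly enough, and an adversary who can force frequent abstains effectively denies the service. The thresholds, noise level, and helper name are illustrative; PyTorch is assumed.
```python
# Minimal randomized-smoothing-style sketch of a certify-or-abstain fallback.
# The thresholds and noise level are arbitrary; this is not the certifiers
# evaluated in the paper.
import torch

ABSTAIN = -1


@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n_samples=100, min_agreement=0.7):
    """Classify a single input under Gaussian noise; abstain if votes disagree."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
    votes = model(noisy).argmax(dim=-1)
    top_class = votes.mode().values.item()
    agreement = (votes == top_class).float().mean().item()
    # An adversary who reliably pushes agreement below the threshold forces the
    # system into its fallback path, i.e. an availability attack.
    return top_class if agreement >= min_agreement else ABSTAIN
```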
arXiv Detail & Related papers (2021-08-25T15:49:10Z)
- Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models [8.853343040790795]
Jekyll is a neural style transfer framework that takes as input a biomedical image of a patient and translates it to a new image that indicates an attacker-chosen disease condition.
We show that these attacks manage to mislead both medical professionals and algorithmic detection schemes.
We also investigate defensive measures based on machine learning to detect images generated by Jekyll.
arXiv Detail & Related papers (2021-04-05T18:23:36Z)
- What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors [57.040948169155925]
We extend the adversarial training framework to defend against (training-time) poisoning and backdoor attacks.
Our method desensitizes networks to the effects of poisoning by creating poisons during training and injecting them into training batches.
We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
arXiv Detail & Related papers (2021-02-26T17:54:36Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
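A rough sketch of the label-free, input-space idea (the general principle only, not the authors' exact algorithm): craft a perturbation that maximizes the feature distortion of a feature extractor, so no ground-truth label is needed. PyTorch is assumed and `self_supervised_perturbation` is a hypothetical name.
```python
# Rough sketch of a label-free, input-space perturbation: maximize the feature
# distortion of a feature extractor instead of a supervised loss. This follows
# the general principle only, not the authors' exact algorithm.
import torch
import torch.nn.functional as F


def self_supervised_perturbation(feature_extractor, x, eps=8 / 255,
                                 alpha=2 / 255, steps=10):
    """Craft a perturbation without labels by maximizing feature distortion."""
    clean_features = feature_extractor(x).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        distortion = F.mse_loss(feature_extractor(x + delta), clean_features)
        distortion.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend on feature distortion
            delta.clamp_(-eps, eps)              # stay within the L-inf budget
            delta.grad.zero_()
    return (x + delta).detach()
```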
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep learning models can be easily fooled by small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.