FIBA: Frequency-Injection based Backdoor Attack in Medical Image
Analysis
- URL: http://arxiv.org/abs/2112.01148v1
- Date: Thu, 2 Dec 2021 11:52:17 GMT
- Title: FIBA: Frequency-Injection based Backdoor Attack in Medical Image
Analysis
- Authors: Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong Xia, Dacheng Tao
- Abstract summary: We propose a novel Frequency-Injection based Backdoor Attack method (FIBA) that is capable of delivering attacks in various medical image analysis tasks.
Specifically, FIBA leverages a trigger function in the frequency domain that can inject the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitude of both images.
- Score: 82.2511780233828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the security of AI systems has drawn increasing research
attention, especially in the medical imaging realm. To develop a secure medical
image analysis (MIA) system, it is essential to study possible backdoor attacks
(BAs), which can embed hidden malicious behaviors into the system. However,
designing a unified BA method that can be applied to various MIA systems is
challenging due to the diversity of imaging modalities (e.g., X-Ray, CT, and
MRI) and analysis tasks (e.g., classification, detection, and segmentation).
Most existing BA methods are designed to attack natural image classification
models, which apply spatial triggers to training images and inevitably corrupt
the semantics of poisoned pixels, leading to failures when attacking dense
prediction models. To address this issue, we propose a novel
Frequency-Injection based Backdoor Attack method (FIBA) that is capable of
delivering attacks in various MIA tasks. Specifically, FIBA leverages a trigger
function in the frequency domain that can inject the low-frequency information
of a trigger image into the poisoned image by linearly combining the spectral
amplitude of both images. Since it preserves the semantics of the poisoned
image pixels, FIBA can perform attacks on both classification and dense
prediction models. Experiments on three benchmarks in MIA (i.e., ISIC-2019 for
skin lesion classification, KiTS-19 for kidney tumor segmentation, and EAD-2019
for endoscopic artifact detection) validate the effectiveness of FIBA and its
superiority over state-of-the-art methods in attacking MIA models as well as
bypassing backdoor defense. The code will be available at
https://github.com/HazardFY/FIBA.
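As a reading aid, the following is a minimal sketch of the kind of frequency-injection trigger the abstract describes: the low-frequency amplitude spectrum of a trigger image is linearly blended into that of a clean image, while the clean image's phase spectrum is left untouched. The function name and the hyperparameters alpha (blend ratio) and beta (low-frequency band size) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a frequency-injection trigger in the spirit of FIBA.
# Assumptions (not taken from the released code): single-channel float images
# of identical shape, a blend ratio alpha, and a band ratio beta controlling
# how much of the (centered) low-frequency spectrum is injected.
import numpy as np

def frequency_inject(clean: np.ndarray, trigger: np.ndarray,
                     alpha: float = 0.15, beta: float = 0.1) -> np.ndarray:
    """Blend the low-frequency amplitude of `trigger` into `clean`,
    keeping the phase of `clean` so its pixel semantics are preserved."""
    # Forward FFT, shifted so that the low frequencies sit at the center.
    f_clean = np.fft.fftshift(np.fft.fft2(clean))
    f_trig = np.fft.fftshift(np.fft.fft2(trigger))

    amp_clean, phase_clean = np.abs(f_clean), np.angle(f_clean)
    amp_trig = np.abs(f_trig)

    # Low-frequency window: a centered box whose half-size is beta * image size.
    h, w = clean.shape
    ch, cw = h // 2, w // 2
    bh, bw = max(1, int(beta * h / 2)), max(1, int(beta * w / 2))

    # Linear combination of the two amplitude spectra inside the window only.
    amp_mix = amp_clean.copy()
    amp_mix[ch - bh:ch + bh, cw - bw:cw + bw] = (
        (1 - alpha) * amp_clean[ch - bh:ch + bh, cw - bw:cw + bw]
        + alpha * amp_trig[ch - bh:ch + bh, cw - bw:cw + bw]
    )

    # Recombine the mixed amplitude with the clean image's phase and invert.
    f_mix = amp_mix * np.exp(1j * phase_clean)
    poisoned = np.fft.ifft2(np.fft.ifftshift(f_mix)).real
    return np.clip(poisoned, clean.min(), clean.max())
```

Because the phase spectrum, which carries most of an image's structural content, is untouched, the poisoned image stays visually and semantically close to the original; this is the property that allows the same trigger to be used against classification, detection, and segmentation models.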
Related papers
- BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning [71.60858267608306]
Medical foundation models are susceptible to backdoor attacks.
This work introduces a method to embed a backdoor into the medical foundation model during the prompt learning phase.
Our method, BAPLe, requires only a minimal subset of data to adjust the noise trigger and the text prompts for downstream tasks.
arXiv Detail & Related papers (2024-08-14T10:18:42Z)
- Backdoor Attack with Mode Mixture Latent Modification [26.720292228686446]
We propose a backdoor attack paradigm that only requires minimal alterations to a clean model in order to inject the backdoor under the guise of fine-tuning.
We evaluate the effectiveness of our method on four popular benchmark datasets.
arXiv Detail & Related papers (2024-03-12T09:59:34Z)
- Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector, which, when processed by the Conditional Diffusion Model, results in a natural adversarial sample that the model misclassifies.
Experiments show that the generated adversarial images are of high quality, raising concerns that harmful content can be generated while bypassing safety classifiers.
arXiv Detail & Related papers (2024-02-07T09:39:29Z)
- Susceptibility of Adversarial Attack on Medical Image Segmentation Models [0.0]
We investigate the effect of adversarial attacks on segmentation models trained on MRI datasets.
We find that medical imaging segmentation models are indeed vulnerable to adversarial attacks.
We show that using a different loss function than the one used for training yields higher adversarial attack success.
arXiv Detail & Related papers (2024-01-20T12:52:20Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder [57.739693628523]
We propose a framework for blind backdoor defense with Masked AutoEncoder (BDMAE).
BDMAE detects possible triggers in the token space using image structural similarity and label consistency between the test image and MAE restorations.
Our approach is blind to the model architectures, trigger patterns and image benignity.
arXiv Detail & Related papers (2023-03-27T19:23:33Z)
- Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning [54.15013757920703]
We propose the confusing perturbations-induced backdoor attack (CIBA).
It injects a small number of poisoned images with the correct label into the training data.
We have conducted extensive experiments to verify the effectiveness of our proposed CIBA.
arXiv Detail & Related papers (2021-09-18T07:56:59Z)
- Adversarial attacks on deep learning models for fatty liver disease classification by modification of ultrasound image reconstruction method [0.8431877864777443]
Convolutional neural networks (CNNs) have achieved remarkable success in medical image analysis tasks.
CNNs can be vulnerable to adversarial attacks: even small perturbations applied to input data may significantly affect model performance.
We devise a novel adversarial attack, specific to ultrasound (US) imaging.
arXiv Detail & Related papers (2020-09-07T18:35:35Z)
- Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers [6.352532169433872]
Backdoor data poisoning attacks have been demonstrated in computer vision research as a potential safety risk for machine learning (ML) systems.
Our work builds upon prior backdoor data-poisoning research for ML image classifiers.
We find that poisoned models are hard to detect through performance inspection alone.
arXiv Detail & Related papers (2020-04-24T02:58:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.