Backdoor Attack is A Devil in Federated GAN-based Medical Image
Synthesis
- URL: http://arxiv.org/abs/2207.00762v1
- Date: Sat, 2 Jul 2022 07:20:35 GMT
- Title: Backdoor Attack is A Devil in Federated GAN-based Medical Image
Synthesis
- Authors: Ruinan Jin, Xiaoxiao Li
- Abstract summary: We propose a way of attacking federated GAN (FedGAN) by applying to the discriminator a data poisoning strategy commonly used in backdoor attacks on classification models.
We provide two effective defense strategies: global malicious detection and local training regularization.
- Score: 15.41200827860072
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Learning-based image synthesis techniques have been applied in
healthcare research for generating medical images to support open research.
Training generative adversarial neural networks (GAN) usually requires large
amounts of training data. Federated learning (FL) provides a way of training a
central model using distributed data from different medical institutions while
keeping raw data locally. However, FL is vulnerable to backdoor attacks, in which
an adversary poisons the training data, since the central server cannot access the
original data directly. Most backdoor attack strategies focus on classification
models and centralized domains. In this study, we propose a way of attacking
federated GAN (FedGAN) by applying to the discriminator a data poisoning strategy
commonly used in backdoor attacks on classification models.
We demonstrate that adding a small trigger, less than 0.5 percent of the original
image size, can corrupt the FedGAN model. Based on the proposed attack, we provide
two effective defense strategies: global malicious detection and local training
regularization. We show that combining the two defense strategies yields robust
medical image generation.
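To make the described attack concrete, the snippet below sketches the kind of data
poisoning the abstract refers to: a malicious client stamps a small trigger patch onto
a fraction of the "real" images it feeds to its local discriminator during FedGAN
training. The patch geometry (a 4x4 corner patch on a 256x256 image, well under the
0.5 percent budget), the poisoning fraction, and all function names are illustrative
assumptions rather than details taken from the paper.

```python
import torch


def add_trigger(images: torch.Tensor, patch_size: int = 4, value: float = 1.0) -> torch.Tensor:
    """Stamp a small constant patch into the bottom-right corner of each image.

    On a 256x256 image a 4x4 patch covers 16 / 65536 of the pixels, far below
    the 0.5 percent budget mentioned in the abstract.
    """
    poisoned = images.clone()
    poisoned[..., -patch_size:, -patch_size:] = value
    return poisoned


def poison_real_batch(real_images: torch.Tensor, poison_fraction: float = 0.1) -> torch.Tensor:
    """Poison a fraction of the 'real' samples a malicious client feeds to its
    local discriminator during FedGAN training (illustrative, not the paper's exact setup)."""
    batch = real_images.clone()
    n_poison = max(1, int(poison_fraction * batch.size(0)))
    idx = torch.randperm(batch.size(0))[:n_poison]
    batch[idx] = add_trigger(batch[idx])
    return batch


if __name__ == "__main__":
    images = torch.rand(8, 1, 256, 256)            # stand-in for local medical images
    poisoned = poison_real_batch(images)
    changed = (poisoned != images).flatten(1).any(dim=1).sum().item()
    print(f"{changed} of {images.size(0)} images carry the trigger")
```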
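The abstract names two defenses, global malicious detection and local training
regularization, without giving implementation details. The sketch below is a generic
instantiation under stated assumptions (median-absolute-deviation screening of client
update norms on the server, and an L2 proximal term on the client side), not the
paper's actual detection or regularization procedure.

```python
import torch


def detect_malicious(client_updates: list, z_thresh: float = 3.0) -> list:
    """Global malicious detection (generic sketch): flag clients whose flattened
    update norm is a strong outlier relative to the median absolute deviation."""
    norms = torch.tensor([u.flatten().norm().item() for u in client_updates])
    med = norms.median()
    mad = (norms - med).abs().median() + 1e-8
    scores = (norms - med).abs() / mad
    return [i for i, s in enumerate(scores.tolist()) if s > z_thresh]


def regularized_disc_loss(disc_loss: torch.Tensor, local_params, global_params,
                          lam: float = 0.01) -> torch.Tensor:
    """Local training regularization (generic sketch): an L2 proximal term that
    keeps the local discriminator close to the current global weights."""
    prox = sum(((p - g.detach()) ** 2).sum() for p, g in zip(local_params, global_params))
    return disc_loss + lam * prox
```

Both pieces can be slotted into a standard federated averaging loop; the paper's own
detection statistic and regularizer may differ from this sketch.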
Related papers
- BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning [71.60858267608306]
Medical foundation models are susceptible to backdoor attacks.
This work introduces a method to embed a backdoor into the medical foundation model during the prompt learning phase.
Our method, BAPLe, requires only a minimal subset of data to adjust the noise trigger and the text prompts for downstream tasks.
arXiv Detail & Related papers (2024-08-14T10:18:42Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model (a minimal sketch of this frequency-domain idea appears after this list).
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not (a rough sketch of this overlap check appears after this list).
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning [14.312593000209693]
Federated learning (FL) attempts to train a global model by aggregating local models from distributed devices under the coordination of a central server.
The existence of a large number of heterogeneous devices makes FL vulnerable to various attacks, especially the stealthy backdoor attack.
We propose a new attack model for FL, namely Data-Agnostic Backdoor attack at the Server (DABS), where the server directly modifies the global model to backdoor an FL system.
arXiv Detail & Related papers (2023-05-02T09:04:34Z)
- Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network [102.21368201494909]
Model inversion (MI) attacks have raised increasing concerns about privacy.
Recent MI attacks leverage a generative adversarial network (GAN) as an image prior to narrow the search space.
We propose a Pseudo Label-Guided MI (PLG-MI) attack via a conditional GAN (cGAN).
arXiv Detail & Related papers (2023-02-20T07:29:34Z)
- Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis [15.41200827860072]
Federated learning (FL) provides a way of training a central model using distributed data while keeping raw data locally.
It is vulnerable to backdoor attacks, in which an adversary poisons the training data.
Most backdoor attack strategies focus on classification models and centralized domains.
We propose FedDetect, an efficient and effective way of defending against the backdoor attack in the FL setting.
arXiv Detail & Related papers (2022-10-19T21:03:34Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat targeted attacks in FL.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data [5.582293277542012]
In this work, we propose a proximal gradient split learning (PSGL) method to defend against model inversion attacks.
We propose the use of a proximal gradient method to recover gradient maps and a decision-level fusion strategy to improve recognition performance.
arXiv Detail & Related papers (2022-01-12T17:01:19Z)
- Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning [3.376269351435396]
Federated learning (FL) aims to merge isolated data islands while maintaining data privacy.
Recent studies have revealed that Generative Adversarial Network (GAN) based attacks can be employed in FL to learn the distribution of private datasets.
We propose a framework, Anti-GAN, to prevent attackers from learning the real distribution of the victim's data.
arXiv Detail & Related papers (2020-04-27T03:45:48Z)
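As referenced in the FreqFed entry above, the sketch below illustrates what a
frequency-analysis-based aggregation step could look like: flatten each client's
update, keep only the low-frequency magnitudes of its spectrum, and average the
updates whose spectra lie closest to the per-frequency median. The real FFT, the
fixed cutoff of 64 components, and the keep-the-closer-half rule are simplifying
assumptions for illustration, not the authors' algorithm.

```python
import torch


def freqfed_style_aggregate(client_updates: list, keep: int = 64) -> torch.Tensor:
    """Sketch of frequency-analysis-based robust aggregation: compare clients in
    the low-frequency spectrum of their flattened updates and average only the
    updates whose spectra are closest to the per-frequency median."""
    flat = torch.stack([u.flatten() for u in client_updates])     # (clients, params)
    spectra = torch.fft.rfft(flat, dim=1).abs()[:, :keep]         # low-frequency magnitudes
    median_spectrum = spectra.median(dim=0).values
    dists = (spectra - median_spectrum).norm(dim=1)
    accepted = dists <= dists.median()                            # keep the closer half
    return flat[accepted].mean(dim=0)                             # aggregated update


if __name__ == "__main__":
    benign = [torch.randn(1000) for _ in range(8)]
    malicious = [torch.randn(1000) * 10 + 5 for _ in range(2)]    # crude stand-in for poisoned updates
    print(freqfed_style_aggregate(benign + malicious).shape)
```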
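Similarly, as referenced in the FedCPA entry above, here is a rough sketch of the
critical-parameter overlap check that summary describes: each client's update is
reduced to the index set of its largest-magnitude entries, and clients are scored by
the average pairwise Jaccard similarity of those sets, so poisoned updates, whose
critical parameters differ from the benign majority, receive low scores. The value of
k, the Jaccard measure, and the omission of the bottom-k sets the paper also uses are
all assumptions for illustration.

```python
import torch


def topk_index_set(update: torch.Tensor, k: int = 1000) -> set:
    """Index set of the k largest-magnitude entries of a flattened update."""
    flat = update.flatten().abs()
    k = min(k, flat.numel())
    return set(flat.topk(k).indices.tolist())


def critical_overlap_scores(client_updates: list, k: int = 1000) -> list:
    """Mean pairwise Jaccard similarity of each client's top-k index set with all
    other clients'; benign clients should score high, poisoned clients low."""
    index_sets = [topk_index_set(u, k) for u in client_updates]
    scores = []
    for i, s_i in enumerate(index_sets):
        sims = [len(s_i & s_j) / len(s_i | s_j)
                for j, s_j in enumerate(index_sets) if j != i]
        scores.append(sum(sims) / len(sims))
    return scores
```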