Backdoor Attack and Defense in Federated Generative Adversarial
Network-based Medical Image Synthesis
- URL: http://arxiv.org/abs/2210.10886v3
- Date: Sun, 16 Jul 2023 02:01:43 GMT
- Title: Backdoor Attack and Defense in Federated Generative Adversarial
Network-based Medical Image Synthesis
- Authors: Ruinan Jin and Xiaoxiao Li
- Abstract summary: Federated learning (FL) provides a way of training a central model using distributed data while keeping raw data locally.
It is vulnerable to backdoor attacks, an adversarial attack carried out by poisoning training data.
Most backdoor attack strategies focus on classification models and centralized domains.
We propose FedDetect, an efficient and effective way of defending against the backdoor attack in the FL setting.
- Score: 15.41200827860072
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Learning-based image synthesis techniques have been applied in
healthcare research for generating medical images to support open research and
augment medical datasets. Training generative adversarial networks (GANs)
usually requires large amounts of training data. Federated learning (FL)
provides a way of training a central model using distributed data while keeping
raw data locally. However, given that the FL server cannot access the raw data,
it is vulnerable to backdoor attacks, an adversarial attack carried out by
poisoning training data. Most backdoor attack strategies focus on
classification models and
centralized domains. It is still an open question if the existing backdoor
attacks can affect GAN training and, if so, how to defend against the attack in
the FL setting. In this work, we investigate the overlooked issue of backdoor
attacks in federated GANs (FedGANs). We find that the attack succeeds because
some local discriminators overfit the poisoned data and corrupt the local GAN
equilibrium; this corruption then propagates to other clients when the
generators' parameters are averaged, yielding a high generator loss. We
therefore propose FedDetect, an efficient and effective way of defending
against the backdoor attack in the FL setting, which
allows the server to detect the client's adversarial behavior based on their
losses and block the malicious clients. Our extensive experiments on two
medical datasets with different modalities demonstrate that the backdoor attack
on FedGANs results in synthetic images with low fidelity. After detecting and
suppressing the malicious clients with the proposed defense strategy,
we show that FedGANs can synthesize high-quality medical datasets (with labels)
for data augmentation to improve classification models' performance.
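The attack studied here poisons only the data seen by the local discriminator. Below is a minimal sketch of that kind of poisoning, assuming a fixed white corner patch as the trigger; the patch size, location, and poisoning fraction are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def add_trigger(images: np.ndarray, patch_size: int = 4) -> np.ndarray:
    """Stamp a small white square into the corner of each image.

    images: float array of shape (N, H, W, C) scaled to [0, 1].
    The trigger geometry is an illustrative assumption.
    """
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:, :] = 1.0  # white corner patch
    return poisoned

def poison_discriminator_batch(real_batch: np.ndarray,
                               poison_fraction: float = 0.3) -> np.ndarray:
    """Replace a fraction of the local discriminator's "real" batch with
    trigger-stamped copies; the generator's training loop is untouched."""
    batch = real_batch.copy()
    n_poison = int(len(batch) * poison_fraction)
    batch[:n_poison] = add_trigger(batch[:n_poison])
    return batch
```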
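FedDetect itself works from the signal described above: poisoned clients break their local GAN equilibrium and report divergent generator losses, which the server can use to block them. Here is a minimal sketch of loss-based detection plus filtered averaging, assuming a robust z-score (median absolute deviation) rule; the actual FedDetect statistic and blocking policy may differ.

```python
import numpy as np

def detect_malicious_clients(client_losses: dict[str, float],
                             z_threshold: float = 3.5) -> set[str]:
    """Flag clients whose reported generator loss is an outlier.

    Uses a robust z-score built on the median absolute deviation (MAD);
    both the statistic and the threshold are illustrative assumptions,
    not the exact FedDetect criterion.
    """
    names = list(client_losses)
    losses = np.array([client_losses[n] for n in names], dtype=float)
    median = np.median(losses)
    mad = np.median(np.abs(losses - median)) + 1e-12  # avoid divide-by-zero
    robust_z = 0.6745 * (losses - median) / mad
    return {n for n, z in zip(names, robust_z) if z > z_threshold}

def aggregate_generators(client_params: dict[str, np.ndarray],
                         client_losses: dict[str, float]) -> np.ndarray:
    """FedAvg over the generators of clients not flagged as malicious."""
    blocked = detect_malicious_clients(client_losses)
    kept = [p for n, p in client_params.items() if n not in blocked]
    if not kept:  # degenerate case: never block every client
        kept = list(client_params.values())
    return np.mean(kept, axis=0)
```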
Related papers
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method achieves state-of-the-art attack performance while preserving clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z)
- Venomancer: Towards Imperceptible and Target-on-Demand Backdoor Attacks in Federated Learning [16.04315589280155]
We propose Venomancer, an effective backdoor attack that is visually imperceptible and lets the attacker choose the target class on demand.
The method is robust against state-of-the-art defenses such as Norm Clipping, Weak DP, Krum, Multi-Krum, RLR, FedRAD, Deepsight, and RFLBAT.
arXiv Detail & Related papers (2024-07-03T14:22:51Z)
- Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning [20.69655306650485]
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data.
Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks.
We propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger.
arXiv Detail & Related papers (2024-05-10T02:44:25Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model (a simplified sketch of the frequency-domain idea appears after this list).
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- FedDefender: Backdoor Attack Defense in Federated Learning [0.0]
Federated Learning (FL) is a privacy-preserving distributed machine learning technique.
We propose FedDefender, a defense mechanism against targeted poisoning attacks in FL.
arXiv Detail & Related papers (2023-07-02T03:40:04Z)
- DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning [14.312593000209693]
Federated learning (FL) attempts to train a global model by aggregating local models from distributed devices under the coordination of a central server.
The existence of a large number of heterogeneous devices makes FL vulnerable to various attacks, especially the stealthy backdoor attack.
We propose a new attack model for FL, namely Data-Agnostic Backdoor attack at the Server (DABS), where the server directly modifies the global model to backdoor an FL system.
arXiv Detail & Related papers (2023-05-02T09:04:34Z)
- Backdoor Attack is A Devil in Federated GAN-based Medical Image Synthesis [15.41200827860072]
We propose a way of attacking federated GANs (FedGANs) by poisoning the discriminator's training data with a strategy commonly used in backdoor attacks on classification models.
We provide two effective defense strategies: global malicious detection and local training regularization.
arXiv Detail & Related papers (2022-07-02T07:20:35Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
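As promised in the FreqFed entry above, here is a minimal sketch of its frequency-domain idea, assuming a DCT over the flattened update and a low-frequency fingerprint; the truncation length, the distance rule standing in for FreqFed's clustering step, and the cutoff are all illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

def update_fingerprint(update: np.ndarray, n_coeffs: int = 64) -> np.ndarray:
    """Map a flattened model update to its low-frequency DCT coefficients,
    where backdoor perturbations are claimed to leave a signature."""
    spectrum = dct(update.ravel(), norm="ortho")
    return spectrum[:n_coeffs]

def filter_updates(updates: list[np.ndarray]) -> list[np.ndarray]:
    """Keep updates whose fingerprints sit near the majority; a naive
    stand-in for the clustering FreqFed performs on the fingerprints."""
    prints = np.stack([update_fingerprint(u) for u in updates])
    center = np.median(prints, axis=0)
    dists = np.linalg.norm(prints - center, axis=1)
    cutoff = 2.0 * np.median(dists) + 1e-12  # illustrative cutoff
    return [u for u, d in zip(updates, dists) if d <= cutoff]
```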