Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions
- URL: http://arxiv.org/abs/2303.02213v1
- Date: Fri, 3 Mar 2023 20:54:28 GMT
- Title: Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions
- Authors: Thuy Dung Nguyen, Tuan Nguyen, Phi Le Nguyen, Hieu H. Pham, Khoa Doan, Kok-Seng Wong
- Abstract summary: Federated learning (FL) is a machine learning (ML) approach that allows the use of distributed data without compromising personal privacy.
The heterogeneous distribution of data among clients in FL can make it difficult for the orchestration server to validate the integrity of local model updates.
Backdoor attacks involve the insertion of malicious functionality into a targeted model through poisoned updates from malicious clients.
- Score: 3.6086478979425998
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a machine learning (ML) approach that allows the
use of distributed data without compromising personal privacy. However, the
heterogeneous distribution of data among clients in FL can make it difficult
for the orchestration server to validate the integrity of local model updates,
making FL vulnerable to various threats, including backdoor attacks. Backdoor
attacks involve the insertion of malicious functionality into a targeted model
through poisoned updates from malicious clients. These attacks can cause the
global model to misbehave on specific inputs while appearing normal in other
cases. Backdoor attacks have received significant attention in the literature
due to their potential to impact real-world deep learning applications.
However, they have not been thoroughly studied in the context of FL. In this
paper, we provide a comprehensive survey of current backdoor attack strategies
and defenses in FL, including a detailed analysis of different approaches.
We also discuss the challenges and potential future directions for attacks and
defenses in the context of FL.
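To make the threat model concrete, the following is a minimal sketch of how a malicious client could produce a poisoned update: it stamps a trigger pattern on part of its local data, relabels those samples to an attacker-chosen class, trains locally, and boosts the resulting update so it survives averaging with benign clients. This is a generic illustration of the attack class described above, not the method of any particular paper; all names (apply_trigger, poison_batch, TARGET_CLASS, the boost factor) are assumptions of the sketch.

```python
# Hypothetical sketch of a data-poisoning backdoor at a malicious FL client.
# All names (apply_trigger, poison_batch, TARGET_CLASS) are illustrative.
import copy
import torch

TARGET_CLASS = 7       # attacker-chosen label the trigger should force
POISON_FRACTION = 0.2  # fraction of each local batch that gets the trigger

def apply_trigger(x):
    """Stamp a small white patch in one corner of each image (the trigger)."""
    x = x.clone()
    x[..., -3:, -3:] = 1.0  # assumes NCHW tensors scaled to [0, 1]
    return x

def poison_batch(x, y):
    """Relabel a fraction of the batch to TARGET_CLASS and stamp the trigger."""
    n = max(1, int(POISON_FRACTION * len(x)))
    x[:n] = apply_trigger(x[:n])
    y[:n] = TARGET_CLASS
    return x, y

def malicious_update(global_model, loader, epochs=1, lr=0.01, boost=5.0):
    """Train locally on poisoned data, then scale the parameter delta so it
    survives averaging with many benign clients (model-replacement boosting)."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = poison_batch(x, y)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Report the boosted delta = boost * (local - global), per parameter.
    return {name: boost * (p.detach() - g.detach())
            for (name, p), (_, g) in zip(model.named_parameters(),
                                         global_model.named_parameters())}
```

On triggered inputs the aggregated model then predicts TARGET_CLASS while behaving normally elsewhere, which is exactly the misbehavior-on-specific-inputs property the abstract describes.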
Related papers
- BadSFL: Backdoor Attack against Scaffold Federated Learning [16.104941796138128]
Federated learning (FL) enables the training of deep learning models on distributed clients to preserve data privacy.
BadSFL is a novel backdoor attack method designed for the FL framework using the scaffold aggregation algorithm in non-IID settings.
BadSFL remains effective in the global model for over 60 rounds, up to 3 times longer than existing baseline attacks.
arXiv Detail & Related papers (2024-11-25T07:46:57Z)
- Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning [20.69655306650485]
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data.
Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks.
We propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger (a generic sketch of this idea follows below).
arXiv Detail & Related papers (2024-05-10T02:44:25Z)
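The summary above names trigger optimization as DPOT's core idea. The sketch below shows a generic version of that idea, assuming the attacker holds the current global model: a small additive trigger is learned by gradient descent so that stamped inputs are classified as the target class. DPOT's actual objective and constraints are not reproduced here; optimize_trigger and its hyperparameters are illustrative.

```python
# Generic trigger-optimization sketch in the spirit of attacks like DPOT.
# The exact DPOT objective is not reproduced; all names are illustrative.
import torch

def optimize_trigger(global_model, loader, target_class, steps=20, lr=0.1):
    """Learn a small additive corner trigger that pushes any input toward
    target_class under the (frozen) current global model."""
    for p in global_model.parameters():
        p.requires_grad_(False)  # freeze the model; only the trigger learns
    x0, _ = next(iter(loader))
    delta = torch.zeros_like(x0[:1], requires_grad=True)  # one shared patch
    mask = torch.zeros_like(delta)
    mask[..., -4:, -4:] = 1.0  # restrict the trigger to a 4x4 corner
    opt = torch.optim.Adam([delta], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        for x, _ in loader:
            x_trig = (x + mask * delta).clamp(0, 1)
            y_tgt = torch.full((len(x),), target_class, dtype=torch.long)
            opt.zero_grad()
            loss_fn(global_model(x_trig), y_tgt).backward()
            opt.step()
    return (mask * delta).detach()
```

Because the trigger is re-optimized against each round's global model, the poisoned data stays aligned with the model's current decision boundary, which is what makes such attacks hard to filter at the server.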
- Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey [28.88186038735176]
Federated Learning (FL) has been increasingly considered for applications in wireless communication networks (WCNs).
In general, the non-independent and identically distributed (non-IID) data of WCNs raises concerns about robustness.
This survey provides a comprehensive review of the latest backdoor attacks and defense mechanisms.
arXiv Detail & Related papers (2023-12-14T05:52:29Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model (a rough sketch of the frequency-domain idea follows below).
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
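Per the abstract above, FreqFed inspects model updates in the frequency domain. The sketch below is a rough approximation of that idea, under stated assumptions: updates are flattened, transformed with a DCT, reduced to their dominant low-frequency coefficients, and grouped so that only the majority cluster is averaged. The DCT choice, the 10% cut-off, and the 2-means grouping are simplifications, not FreqFed's exact design.

```python
# Rough sketch of a frequency-domain filtering aggregator inspired by FreqFed.
# The DCT, the low-frequency cut-off, and the 2-means grouping below are
# simplifying assumptions, not the paper's exact pipeline.
import numpy as np
from scipy.fft import dct
from sklearn.cluster import KMeans

def freq_filter_aggregate(updates, keep=0.1):
    """updates: list of 1-D numpy arrays (flattened client model updates)."""
    spectra = np.stack([dct(u, norm="ortho") for u in updates])
    k = max(1, int(keep * spectra.shape[1]))
    low_freq = spectra[:, :k]                 # dominant low-frequency part
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(low_freq)
    majority = np.bincount(labels).argmax()   # assume honest clients dominate
    accepted = [u for u, l in zip(updates, labels) if l == majority]
    return np.mean(accepted, axis=0)          # average only accepted updates
```

The intuition is that a backdoor perturbs an update's spectrum in a way that separates it from the benign majority, so filtering in frequency space can reject poisoned updates without inspecting raw parameters one by one.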
- Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios [7.161511745025332]
Federated learning (FL) naturally faces the problem of data heterogeneity in real-world scenarios.
In this paper, we propose a privacy inference-empowered stealthy backdoor attack scheme for FL under non-IID scenarios.
Experiments based on MNIST, CIFAR10 and Youtube Aligned Face datasets demonstrate that the proposed PI-SBA scheme is effective in non-IID FL and stealthy against state-of-the-art defense methods.
arXiv Detail & Related papers (2023-06-13T11:08:30Z)
- Backdoor Attacks in Peer-to-Peer Federated Learning [11.235386862864397]
Peer-to-Peer Federated Learning (P2PFL) offers advantages in terms of both privacy and reliability.
We propose new backdoor attacks for P2PFL that leverage structural graph properties to select the malicious nodes and achieve high attack success (see the sketch below).
arXiv Detail & Related papers (2023-01-23T21:49:28Z)
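The attack above selects malicious nodes by their structural position in the peer-to-peer topology. The abstract does not specify the selection criterion, so the sketch below uses eigenvector centrality as an assumed stand-in, with networkx.

```python
# Illustrative sketch of choosing attacker positions in a P2P FL topology by
# structural importance. The paper's exact selection criterion is not given
# in the summary; eigenvector centrality is an assumed stand-in.
import networkx as nx

def pick_malicious_nodes(graph: nx.Graph, budget: int):
    """Return the `budget` most structurally central nodes as attack hosts."""
    centrality = nx.eigenvector_centrality_numpy(graph)
    return sorted(centrality, key=centrality.get, reverse=True)[:budget]

# Example: a small-world communication graph with 2 attacker slots.
g = nx.watts_strogatz_graph(n=20, k=4, p=0.1, seed=0)
print(pick_malicious_nodes(g, budget=2))
```

Central nodes mix their poisoned models into many neighbors' averaging steps per round, so the backdoor propagates through the graph faster than from peripheral positions.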
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack (a generic gradient-matching sketch follows below).
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
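For context on the attack class this paper evaluates, the sketch below shows a generic gradient-matching inversion in the style of deep-leakage-from-gradients attacks: a dummy input and soft label are optimized until the gradients they produce match those observed by the server. The paper's own baseline attack is not reproduced; all names and hyperparameters are illustrative.

```python
# Generic gradient-matching inversion sketch (deep-leakage-from-gradients
# style). Hyperparameters are illustrative; soft-label cross entropy below
# requires PyTorch >= 1.10.
import torch

def invert_gradients(model, true_grads, input_shape, num_classes,
                     steps=300, lr=0.1):
    """Recover a training example from observed gradients by optimizing a
    dummy (input, soft label) pair until its gradients match true_grads."""
    x = torch.randn(1, *input_shape, requires_grad=True)
    y_logits = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.Adam([x, y_logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(
            model(x), torch.softmax(y_logits, dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                          create_graph=True)
        match = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
        match.backward()  # gradients flow back to x and y_logits
        opt.step()
    return x.detach(), torch.softmax(y_logits, dim=-1).detach()
```

In practice, averaging over large batches and many local steps dilutes the per-example gradient signal, which is the basis for the paper's argument that such attacks are impractical in realistic FL deployments.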
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training-time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
- Backdoor Learning: A Survey [75.59571756777342]
Backdoor attacks aim to embed a hidden backdoor into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)