Dynamic backdoor attacks against federated learning
- URL: http://arxiv.org/abs/2011.07429v1
- Date: Sun, 15 Nov 2020 01:32:58 GMT
- Title: Dynamic backdoor attacks against federated learning
- Authors: Anbu Huang
- Abstract summary: Federated Learning (FL) is a new machine learning framework that enables millions of participants to collaboratively train a model without compromising data privacy and security.
In this paper, we focus on dynamic backdoor attacks under the FL setting, where the goal of the adversary is to reduce the performance of the model on targeted tasks.
To the best of our knowledge, this is the first paper to focus on dynamic backdoor attack research under the FL setting.
- Score: 0.5482532589225553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a new machine learning framework that enables
millions of participants to collaboratively train a machine learning model
without compromising data privacy and security. Due to the independence and
confidentiality of each client, FL does not guarantee that all clients are
honest by design, which makes it naturally vulnerable to adversarial attacks.
In this paper, we focus on dynamic backdoor attacks under the FL setting, where
the goal of the adversary is to reduce the performance of the model on targeted
tasks while maintaining good performance on the main task. Existing studies
mainly focus on static backdoor attacks, in which the injected poison pattern
is unchanged. However, FL is an online learning framework, and adversarial
targets can be changed dynamically by the attacker; traditional algorithms then
require learning each new targeted task from scratch, which can be
computationally expensive and require a large number of adversarial training
examples. To avoid this, we bridge meta-learning and backdoor attacks under the
FL setting, so that we can learn a versatile model from previous experiences
and adapt quickly to new adversarial tasks with only a few examples. We
evaluate our algorithm on different datasets and demonstrate that it achieves
good results with respect to dynamic backdoor attacks. To the best of our
knowledge, this is the first paper to focus on dynamic backdoor attack research
under the FL setting.
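To make the meta-learning idea concrete, here is a minimal sketch of how an attacker could pre-train an initialization that adapts to a new trigger/target pair in a few gradient steps. It uses the Reptile first-order meta-learning update as a stand-in for whatever meta-objective the paper actually optimizes; the toy model, the corner-patch trigger, and every helper name below are illustrative assumptions, not the authors' code.

```python
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy classifier standing in for a client's local model."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, 3, padding=1)
        self.fc = nn.Linear(16 * 28 * 28, num_classes)

    def forward(self, x):
        return self.fc(F.relu(self.conv(x)).flatten(1))

def make_backdoor_task(batch_size=32, num_classes=10):
    """One 'adversarial task': a random trigger patch and a random target
    label. A real attack would poison genuine client data; random tensors
    keep the sketch self-contained."""
    x = torch.rand(batch_size, 1, 28, 28)
    trigger = torch.rand(1, 1, 4, 4)                 # dynamic: new per task
    x[:, :, :4, :4] = trigger                        # stamp a corner patch
    target = torch.randint(num_classes, (1,)).item()
    y = torch.full((batch_size,), target, dtype=torch.long)
    return x, y

def inner_adapt(model, x, y, steps=5, lr=0.01):
    """Fast adaptation: a few SGD steps on a single backdoor task."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model

def reptile_meta_train(meta_model, meta_iters=50, meta_lr=0.1):
    """Meta-train an initialization that reaches a new trigger/target pair
    in a few steps, instead of learning each backdoor from scratch."""
    for _ in range(meta_iters):
        x, y = make_backdoor_task()
        adapted = inner_adapt(copy.deepcopy(meta_model), x, y)
        with torch.no_grad():                        # Reptile outer update:
            for p_meta, p_task in zip(meta_model.parameters(),
                                      adapted.parameters()):
                p_meta += meta_lr * (p_task - p_meta)
    return meta_model

model = reptile_meta_train(SmallCNN())
# At attack time, a brand-new trigger/target pair needs only inner_adapt()
# with a handful of poisoned examples before the update is sent upstream.
```

Reptile is chosen here only because its outer update is a plain weight interpolation, which keeps the sketch short; a MAML-style second-order objective would slot into the same loop.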
Related papers
- Persistent Backdoor Attacks in Continual Learning [5.371962853011215]
We introduce two persistent backdoor attacks, Blind Task Backdoor and Latent Task Backdoor, each leveraging minimal adversarial influence.
Our results show that both attacks consistently achieve high success rates across different continual learning algorithms, while effectively evading state-of-the-art defenses.
arXiv Detail & Related papers (2024-09-20T19:28:48Z)
- Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape [7.00762739959285]
Federated Learning (FL) for privacy-preserving model training remains susceptible to backdoor attacks.
This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape.
arXiv Detail & Related papers (2024-07-05T22:03:13Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model (a minimal sketch of the frequency-domain idea appears after this list).
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification [1.1887808102491482]
We propose GABAttack, a novel genetic algorithm-based backdoor attack against federated learning for network traffic classification.
This research serves as an alarming call for network security experts and practitioners to develop robust defense measures against such attacks.
arXiv Detail & Related papers (2023-09-27T14:02:02Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Backdoor Attacks in Peer-to-Peer Federated Learning [11.235386862864397]
Peer-to-Peer Federated Learning (P2PFL) offers advantages in terms of both privacy and reliability.
We propose new backdoor attacks for P2PFL that leverage structural graph properties to select the malicious nodes and achieve high attack success (see the node-selection sketch after this list).
arXiv Detail & Related papers (2023-01-23T21:49:28Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model.
We introduce a task-driven graph-based structure information learning constraint to improve the quality of the generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
- Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning [11.117880929232575]
Federated learning is vulnerable to Byzantine poisoning adversarial attacks.
We propose an aggregation operator that dynamically discards adversarial clients.
The results show that the dynamic selection of the clients to aggregate enhances the performance of the global learning model.
arXiv Detail & Related papers (2020-07-29T18:02:11Z)
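As noted in the FreqFed entry above, that defense works by moving model updates into the frequency domain before aggregation. The sketch below is a heavily simplified stand-in: it takes the DCT of each flattened client update and drops clients whose low-frequency spectra sit far from the coordinate-wise median, where the real system clusters those spectra instead. The function and parameter names are assumptions for illustration, not the FreqFed implementation.

```python
import numpy as np
from scipy.fft import dct

def freq_filter_aggregate(updates, keep_frac=0.7, n_low=64):
    """updates: list of 1-D numpy arrays (flattened model updates).
    Averages only the updates whose low-frequency DCT signatures are
    closest to the coordinate-wise median signature."""
    # 1. Move each flattened update into the frequency domain and keep
    #    only the low-frequency components, as FreqFed does.
    specs = np.stack([dct(u, norm="ortho")[:n_low] for u in updates])
    # 2. Score clients by distance to the median spectrum; poisoned
    #    updates tend to stand out there (a stand-in for clustering).
    dists = np.linalg.norm(specs - np.median(specs, axis=0), axis=1)
    # 3. Keep the closest fraction and average them in parameter space.
    n_keep = max(1, int(len(updates) * keep_frac))
    keep = np.argsort(dists)[:n_keep]
    return np.mean([updates[i] for i in keep], axis=0)

# Toy round: 8 benign updates plus 2 with an injected pattern.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.01, 1000) for _ in range(8)]
poisoned = [u + 0.1 * np.sin(np.linspace(0, 20, 1000)) for u in benign[:2]]
aggregated = freq_filter_aggregate(benign + poisoned)
print(aggregated.shape)  # (1000,)
```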
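And, as referenced in the peer-to-peer entry, attack effectiveness in P2PFL depends on which peers the adversary controls. A toy way to "leverage structural graph properties" is to rank peers by a centrality measure and compromise the most central ones; betweenness centrality and the node budget below are illustrative choices, not necessarily the paper's selection rule.

```python
import networkx as nx

def select_malicious_nodes(graph, budget=2):
    """Rank peers by betweenness centrality (how often a node lies on
    shortest paths between other nodes) and pick the most central ones,
    since central peers propagate a poisoned model to more of the network."""
    centrality = nx.betweenness_centrality(graph)
    ranked = sorted(centrality, key=centrality.get, reverse=True)
    return ranked[:budget]

# Toy P2P topology: a small-world graph of 20 peers.
peers = nx.watts_strogatz_graph(n=20, k=4, p=0.3, seed=1)
print(select_malicious_nodes(peers, budget=2))
```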
This list is automatically generated from the titles and abstracts of the papers on this site.