Challenges and approaches for mitigating byzantine attacks in federated
learning
- URL: http://arxiv.org/abs/2112.14468v1
- Date: Wed, 29 Dec 2021 09:24:05 GMT
- Title: Challenges and approaches for mitigating byzantine attacks in federated
learning
- Authors: Shengshan Hu and Jianrong Lu and Wei Wan and Leo Yu Zhang
- Abstract summary: Federated learning (FL) is an attractive distributed learning framework in which numerous wireless end-user devices can train a global model while their data remains local.
Despite this promising prospect, the byzantine attack, an intractable threat in conventional distributed networks, has proven effective against FL as well.
We propose a new byzantine attack method, called the weight attack, that defeats existing defense schemes, and we conduct experiments to demonstrate its threat.
- Score: 6.836162272841266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recently emerged federated learning (FL) is an attractive distributed
learning framework in which numerous wireless end-user devices can train a
global model while their data remains local. Compared with the traditional
machine learning framework, which collects user data for centralized storage
and thus incurs a heavy communication burden and raises data privacy concerns,
this approach saves network bandwidth while also protecting data privacy.
Despite this promising prospect, the byzantine attack, an intractable threat in
conventional distributed networks, has proven effective against FL as well. In
this paper, we conduct a comprehensive investigation of the state-of-the-art
strategies for defending against byzantine attacks in FL. We first provide a
taxonomy of the existing defense solutions according to the techniques they
use, followed by a comprehensive comparison and discussion. We then propose a
new byzantine attack method, called the weight attack, that defeats these
defense schemes, and we conduct experiments to demonstrate its threat. The
results show that existing defense solutions, although abundant, are still far
from fully protecting FL. Finally, we outline possible countermeasures against
the weight attack, and highlight several challenges and future research
directions for mitigating byzantine attacks in FL.
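The abstract describes the setting only at a high level, but the interaction it refers to — honest clients submit model updates, byzantine clients submit falsified ones, and the server's aggregation rule determines how much damage they can do — can be sketched in a few lines. The sketch below is illustrative only: the toy dimensions, the particular falsified update, and coordinate-wise median as a representative robust aggregator are all assumptions, and it does not reproduce the paper's weight attack.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLIENTS, N_BYZANTINE = 10, 20, 4   # toy sizes, assumed for illustration

# Honest clients: updates scattered around a common "true" direction.
true_update = rng.normal(size=DIM)
honest = [true_update + 0.1 * rng.normal(size=DIM)
          for _ in range(N_CLIENTS - N_BYZANTINE)]

# Byzantine clients: a generic falsified update (a large negated vector).
# This is NOT the paper's weight attack -- just a placeholder manipulation.
byzantine = [-10.0 * true_update for _ in range(N_BYZANTINE)]
updates = np.stack(honest + byzantine)

# Plain FedAvg: a simple mean, easily skewed by the falsified updates.
fedavg = updates.mean(axis=0)

# One representative robust aggregator from the defense taxonomy:
# coordinate-wise median, which ignores extreme values in each coordinate.
robust = np.median(updates, axis=0)

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print("cosine(FedAvg, true update):", round(cos(fedavg, true_update), 3))
print("cosine(median, true update):", round(cos(robust, true_update), 3))
```

Running it shows the plain mean being pulled away from the honest direction while the median stays aligned, which is the basic intuition behind many of the defenses surveyed here.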
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground-truth data can be recovered through a gradient-based technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- Celtibero: Robust Layered Aggregation for Federated Learning [0.0]
We introduce Celtibero, a novel defense mechanism that integrates layered aggregation to enhance robustness against adversarial manipulation.
We demonstrate that Celtibero consistently achieves high main task accuracy (MTA) while maintaining minimal attack success rates (ASR) across a range of untargeted and targeted poisoning attacks.
arXiv Detail & Related papers (2024-08-26T12:54:00Z)
- Defending against Data Poisoning Attacks in Federated Learning via User Elimination [0.0]
This paper introduces a novel framework focused on the strategic elimination of adversarial users within a federated model.
We detect anomalies in the aggregation phase of the federated algorithm by integrating metadata gathered by the local training instances with differential privacy techniques.
Our experiments demonstrate the efficacy of our methods, significantly mitigating the risk of data poisoning while maintaining user privacy and model performance.
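The summary above does not specify which metadata is shared or how anomalies are scored, so the following is a minimal sketch under assumed choices: each client reports its local training loss, Laplace noise (a standard differential-privacy mechanism) is added before sharing, and the aggregator eliminates clients with extreme robust z-scores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed metadata: each client reports its local training loss.
# Honest clients' losses cluster; poisoned clients' losses sit far away.
losses = np.concatenate([rng.normal(0.5, 0.05, size=18),   # honest (assumed)
                         rng.normal(2.5, 0.10, size=2)])    # poisoned (assumed)

# Laplace noise before sharing, a standard differential-privacy mechanism
# (the epsilon and sensitivity values here are illustrative only).
epsilon, sensitivity = 1.0, 0.1
noisy = losses + rng.laplace(scale=sensitivity / epsilon, size=losses.shape)

# Simple anomaly rule at the aggregator: robust z-score against the median.
med = np.median(noisy)
mad = np.median(np.abs(noisy - med)) + 1e-12
z = np.abs(noisy - med) / mad
keep = z < 6.0          # threshold is an assumption, not from the paper

print("clients kept for aggregation:", np.flatnonzero(keep).tolist())
print("clients eliminated:", np.flatnonzero(~keep).tolist())
```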
arXiv Detail & Related papers (2024-04-19T10:36:00Z)
- Securing NextG Systems against Poisoning Attacks on Federated Learning: A Game-Theoretic Solution [9.800359613640763]
This paper studies the poisoning attack and defense interactions in a federated learning (FL) system.
FL collectively trains a global model without the need for clients to exchange their data samples.
The presence of malicious clients introduces the risk of poisoning the training data to manipulate the global model through falsified local model exchanges.
arXiv Detail & Related papers (2023-12-28T17:52:21Z)
- Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey [28.88186038735176]
Federated Learning (FL) has been increasingly considered for applications to wireless communication networks (WCNs).
In general, the non-independent and identically distributed (non-IID) data of WCNs raises robustness concerns.
This survey provides a comprehensive review of the latest backdoor attacks and defense mechanisms.
arXiv Detail & Related papers (2023-12-14T05:52:29Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
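Only the high-level idea (inspect model updates in the frequency domain) is given above; the sketch below assumes an FFT of flattened updates, a fixed low-frequency cutoff, and a median-distance filter, none of which are taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 256  # toy update size (assumed)

# Toy model updates: honest ones are noisy copies of a common signal,
# poisoned ones carry an injected high-magnitude pattern (assumed).
base = rng.normal(size=DIM)
updates = [base + 0.1 * rng.normal(size=DIM) for _ in range(8)]
updates += [base + 5.0 * np.sign(rng.normal(size=DIM)) for _ in range(2)]
updates = np.stack(updates)

# Move each update into the frequency domain and keep the low-frequency
# components.  (FreqFed's exact transform and clustering are not spelled out
# in this summary; FFT plus a fixed cutoff is used here purely for illustration.)
K = 16
spectra = np.abs(np.fft.rfft(updates, axis=1))[:, :K]

# Filter by distance to the median spectrum, then average the survivors.
med = np.median(spectra, axis=0)
dist = np.linalg.norm(spectra - med, axis=1)
keep = dist < 3.0 * np.median(dist)        # threshold is an assumption
aggregated = updates[keep].mean(axis=0)

print("updates kept:", int(keep.sum()), "of", len(updates))
```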
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- FLShield: A Validation Based Federated Learning Framework to Defend Against Poisoning Attacks [1.8925617030516926]
Federated learning (FL) is being used in many safety-critical domains such as autonomous vehicles and healthcare.
We propose a novel FL framework dubbed FLShield that utilizes benign data from FL participants to validate the local models.
We conduct extensive experiments to evaluate our FLShield framework in different settings and demonstrate its effectiveness in thwarting various types of poisoning and backdoor attacks.
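As a rough illustration of validation-based filtering, the sketch below scores toy local models on a benign validation set and aggregates only the well-scoring ones; the linear model, the data, and the cutoff rule are all assumptions rather than FLShield's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: a linear model w, a benign validation set (x, y) contributed by
# FL participants, and a mix of honest and poisoned local models.
true_w = np.array([1.0, -2.0, 0.5])
x_val = rng.normal(size=(200, 3))
y_val = x_val @ true_w + 0.05 * rng.normal(size=200)

local_models = [true_w + 0.05 * rng.normal(size=3) for _ in range(8)]   # honest
local_models += [-true_w, true_w + np.array([5.0, 0.0, 0.0])]           # poisoned

def val_loss(w):
    """Validation loss of one local model on the benign data."""
    return float(np.mean((x_val @ w - y_val) ** 2))

losses = np.array([val_loss(w) for w in local_models])
keep = losses < 2.0 * np.median(losses)     # cutoff is an assumption

global_w = np.mean([w for w, k in zip(local_models, keep) if k], axis=0)
print("models kept:", int(keep.sum()), "of", len(local_models))
print("error of aggregated model:",
      round(float(np.linalg.norm(global_w - true_w)), 3))
```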
arXiv Detail & Related papers (2023-08-10T19:29:44Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
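The summary names the idea of checking several metrics at once, so that an adversary cannot adapt to a single test, without giving details; the metrics (update norm and cosine similarity), thresholds, and toy data below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(4)
DIM = 50

base = rng.normal(size=DIM)
updates = [base + 0.1 * rng.normal(size=DIM) for _ in range(9)]
updates.append(3.0 * base)          # adapted attacker: right direction, wrong scale
updates = np.stack(updates)

mean_up = updates.mean(axis=0)

# Metric 1: update norm.  Metric 2: cosine similarity to the mean update.
norms = np.linalg.norm(updates, axis=1)
cosines = updates @ mean_up / (norms * np.linalg.norm(mean_up))

def inliers(values, k=5.0):
    """Robust z-score test; k is an assumed threshold."""
    med = np.median(values)
    mad = np.median(np.abs(values - med)) + 1e-12
    return np.abs(values - med) / mad < k

# A client must pass every metric; adapting to one test alone is not enough.
keep = inliers(norms) & inliers(cosines)
print("clients kept:", np.flatnonzero(keep).tolist())
```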
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improvement with robustness guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering 1) threat models, 2) poisoning attacks and robustness defenses, and 3) inference attacks and privacy defenses, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)