FLShield: A Validation Based Federated Learning Framework to Defend
Against Poisoning Attacks
- URL: http://arxiv.org/abs/2308.05832v1
- Date: Thu, 10 Aug 2023 19:29:44 GMT
- Title: FLShield: A Validation Based Federated Learning Framework to Defend
Against Poisoning Attacks
- Authors: Ehsanul Kabir and Zeyu Song and Md Rafi Ur Rashid and Shagufta Mehnaz
- Abstract summary: Federated learning (FL) is being used in many safety-critical domains such as autonomous vehicles and healthcare.
We propose a novel FL framework dubbed FLShield that utilizes benign data from FL participants to validate the local models.
We conduct extensive experiments to evaluate our FLShield framework in different settings and demonstrate its effectiveness in thwarting various types of poisoning and backdoor attacks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is revolutionizing how we learn from data. With its
growing popularity, it is now being used in many safety-critical domains such
as autonomous vehicles and healthcare. Since thousands of participants can
contribute to this collaborative setting, it is challenging to ensure the
security and reliability of such systems. This highlights the need to design FL
systems that are secure and robust against malicious participants' actions
while also ensuring high utility, privacy of local data, and efficiency. In
this paper, we propose a novel FL framework dubbed FLShield that utilizes
benign data from FL participants to validate the local models before taking
them into account for generating the global model. This is in stark contrast
with existing defenses that rely on the server's access to clean datasets -- an
assumption often impractical in real-life scenarios and conflicting with the
fundamentals of FL. We conduct extensive experiments to evaluate our FLShield
framework in different settings and demonstrate its effectiveness in thwarting
various types of poisoning and backdoor attacks including a defense-aware one.
FLShield also preserves the privacy of local data against gradient inversion
attacks.
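The validation-based aggregation described above is concrete enough to sketch. The following is a minimal sketch, assuming a toy linear scorer, a median over per-validator scores, and a fixed acceptance threshold; these choices are illustrative, not FLShield's exact algorithm:

```python
import numpy as np

def validate_and_aggregate(local_models, validators, threshold=0.5):
    """Score each local model on validators' benign data; average the survivors.

    local_models: list of flat weight vectors (np.ndarray), one per participant.
    validators:   list of (X, y) benign datasets held by selected participants.
    The linear scorer, median of validator scores, and fixed threshold are
    illustrative choices, not FLShield's exact algorithm.
    """
    def accuracy(w, X, y):
        # Toy linear classifier: predicted label = sign(X @ w), labels in {-1, +1}.
        return float(np.mean(np.sign(X @ w) == y))

    scores = []
    for w in local_models:
        # Every validator evaluates the candidate model on its own benign data;
        # the median keeps a minority of malicious validators from skewing it.
        scores.append(np.median([accuracy(w, X, y) for X, y in validators]))

    kept = [w for w, s in zip(local_models, scores) if s >= threshold]
    if not kept:  # degenerate case: nothing passed validation
        kept = local_models
    return np.mean(kept, axis=0)
```

Note that the benign validation data never leaves the participants; the server only needs the resulting scores, which is the stated contrast with defenses that assume a server-held clean dataset.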
Related papers
- Enhancing Security and Privacy in Federated Learning using Update Digests and Voting-Based Defense
Federated Learning (FL) is a promising privacy-preserving machine learning paradigm.
Despite its potential, FL faces challenges related to the trustworthiness of both clients and servers.
We introduce a novel framework named Federated Learning with Update Digest (FLUD).
FLUD addresses the critical issues of privacy preservation and resistance to Byzantine attacks within distributed learning environments.
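The abstract does not spell out how digests and votes interact; the sketch below assumes the digest is the update's elementwise sign pattern and that an update is voted out when its digest disagrees with most others. Both the digest construction and the voting rule are assumptions for illustration, not FLUD's published design:

```python
import numpy as np

def sign_digest(update):
    # Digest = elementwise sign pattern of the update: compact, and it hides
    # the magnitudes of the raw gradients. (An assumed digest construction;
    # FLUD's actual digest may differ.)
    return np.sign(update)

def digest_vote_aggregate(updates, quorum=0.5):
    """Each update 'votes' for the others whose digests mostly agree with its
    own; updates that fail to collect a quorum of votes are dropped."""
    digests = [sign_digest(u) for u in updates]
    n = len(updates)
    votes = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j and np.mean(digests[i] == digests[j]) > 0.5:
                votes[j] += 1
    keep = [u for u, v in zip(updates, votes) if v >= quorum * (n - 1)]
    return np.mean(keep if keep else updates, axis=0)
```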
arXiv Detail & Related papers (2024-05-29T06:46:10Z) - SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks
Federated learning (FL) is attractive for pooling distributed training data in a privacy-preserving manner.
We propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of their locally purified models.
We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks.
arXiv Detail & Related papers (2023-09-19T13:31:33Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially
Shared Generative Adversarial Networks For Data Privacy
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - WW-FL: Secure and Private Large-Scale Federated Learning
Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices.
Recent research has uncovered vulnerabilities in FL, impacting both security and privacy through poisoning attacks.
We propose WW-FL, an innovative framework that combines secure multi-party computation with hierarchical FL to guarantee data and global model privacy.
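As a flavor of the secure multi-party computation ingredient, below is a textbook additive secret-sharing sketch in which no single server ever sees a client's update in the clear. This is a generic MPC building block under simplifying assumptions (integer-encoded updates), not WW-FL's actual protocol:

```python
import numpy as np

MOD = 2**32  # arithmetic ring for the shares

def share(update_int, n_parties):
    """Additively secret-share an update (np.uint64 array, fixed-point encoded).

    Any subset of fewer than n_parties shares is uniformly random, so no single
    server -- or tier in a hierarchy -- learns the client's update.
    """
    shares = [np.random.randint(0, MOD, size=update_int.shape, dtype=np.uint64)
              for _ in range(n_parties - 1)]
    # uint64 wraparound is harmless here: 2**32 divides 2**64.
    last = (update_int - sum(shares)) % MOD
    return shares + [last]

def reconstruct(shares):
    # Summing all shares (mod MOD) recovers the original update.
    return sum(shares) % MOD
```

In use, each server would sum the shares it holds across clients and publish only that sum, so only the aggregate update is ever reconstructed.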
arXiv Detail & Related papers (2023-02-20T11:02:55Z) - Unraveling the Connections between Privacy and Certified Robustness in
Federated Learning Against Poisoning Attacks
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
arXiv Detail & Related papers (2022-09-08T21:01:42Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe?
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
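For context, the attack family in question is typified by the classic DLG-style baseline sketched below, which optimizes a dummy input until its gradient matches one observed from a victim client. The function name, the known-label assumption, and the hyperparameters are illustrative, not the paper's new baseline attack:

```python
import torch

def invert_gradients(model, loss_fn, observed_grads, x_shape, y,
                     steps=300, lr=0.1):
    """DLG-style sketch: recover an input whose gradient matches the observed one.

    Assumes the label y is known (the original DLG also optimizes the label);
    the L2 matching loss and hyperparameters are illustrative.
    """
    dummy_x = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy_x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(dummy_x), y)
        # Gradient of the loss w.r.t. the model parameters, kept differentiable
        # so we can backpropagate through it into dummy_x.
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        mismatch = sum(((g - o) ** 2).sum() for g, o in zip(grads, observed_grads))
        mismatch.backward()
        opt.step()
    return dummy_x.detach()  # approximation of the victim's training input
```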
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Challenges and approaches for mitigating byzantine attacks in federated
learning
Federated learning (FL) is an attractive distributed learning framework in which numerous wireless end-user devices can train a global model while their data remains local.
Despite this promising prospect, the Byzantine attack, an intractable threat in conventional distributed networks, has proven to be rather effective against FL as well.
We propose a new Byzantine attack method called the weight attack to defeat existing defense schemes, and conduct experiments to demonstrate its threat.
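One plausible reading of a weight-style attack can be sketched against plain FedAvg: the malicious update itself looks statistically unremarkable, so value- or distance-based checks pass it, while a manipulated aggregation weight lets it dominate the average. Everything below, including the inflated reported dataset size, is an illustrative assumption rather than the paper's exact procedure:

```python
import numpy as np

def fedavg(updates, reported_sizes):
    # Standard FedAvg: each update is weighted by its reported dataset size.
    w = np.asarray(reported_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * u for wi, u in zip(w, updates))

# Nine honest clients and one attacker whose update is only subtly biased,
# so it would not stand out to distance-based anomaly checks -- but its
# inflated reported size lets it dominate the aggregate.
honest = [np.random.normal(0.0, 0.1, size=10) for _ in range(9)]
malicious = np.random.normal(0.3, 0.1, size=10)
poisoned_global = fedavg(honest + [malicious], [100] * 9 + [5000])
```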
arXiv Detail & Related papers (2021-12-29T09:24:05Z) - Federated Robustness Propagation: Sharing Adversarial Robustness in
Federated Learning
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
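Because the mechanism is named explicitly, a minimal sketch is possible: copy batch-normalization running statistics from an adversarially trained model into an identically structured standard model. The one-shot copy below is the simplest instantiation and may well differ from the paper's actual propagation rule:

```python
import torch.nn as nn

def propagate_bn_stats(robust_model, plain_model):
    """Copy BN running statistics from an adversarially trained model into an
    identically structured standard model (a minimal sketch of transferring
    robustness via batch-normalization statistics)."""
    src = [m for m in robust_model.modules() if isinstance(m, nn.BatchNorm2d)]
    dst = [m for m in plain_model.modules() if isinstance(m, nn.BatchNorm2d)]
    assert len(src) == len(dst), "models must share the same architecture"
    for s, d in zip(src, dst):
        # running_mean/running_var are buffers, so in-place copy is safe.
        d.running_mean.copy_(s.running_mean)
        d.running_var.copy_(s.running_var)
```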
arXiv Detail & Related papers (2021-06-18T15:52:33Z) - Provable Defense against Privacy Leakage in Federated Learning from
Representation Perspective
Federated learning (FL) is a popular distributed learning framework that can reduce privacy risks by not explicitly sharing private data.
Recent works demonstrated that sharing model updates makes FL vulnerable to inference attacks.
Our key observation is that data representation leakage from gradients is the essential cause of privacy leakage in FL.
arXiv Detail & Related papers (2020-12-08T20:42:12Z) - A Secure Federated Learning Framework for 5G Networks
Federated Learning (FL) has been proposed as an emerging paradigm to build machine learning models using distributed training datasets.
There are two critical security threats: poisoning and membership inference attacks.
We propose a blockchain-based secure FL framework that creates smart contracts and prevents malicious or unreliable participants from participating in FL.
arXiv Detail & Related papers (2020-05-12T13:27:23Z)