A Secure Federated Learning Framework for 5G Networks
- URL: http://arxiv.org/abs/2005.05752v1
- Date: Tue, 12 May 2020 13:27:23 GMT
- Title: A Secure Federated Learning Framework for 5G Networks
- Authors: Yi Liu, Jialiang Peng, Jiawen Kang, Abdullah M. Iliyasu, Dusit Niyato,
and Ahmed A. Abd El-Latif
- Abstract summary: Federated Learning (FL) has been proposed as an emerging paradigm to build machine learning models using distributed training datasets.
There are two critical security threats: poisoning and membership inference attacks.
We propose a blockchain-based secure FL framework that uses smart contracts to prevent malicious or unreliable participants from participating in FL.
- Score: 44.40119258491145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has been recently proposed as an emerging paradigm to
build machine learning models using distributed training datasets that are
locally stored and maintained on different devices in 5G networks while
providing privacy preservation for participants. In FL, the central aggregator
accumulates local updates uploaded by participants to update a global model.
However, there are two critical security threats: poisoning and membership
inference attacks. These attacks may be carried out by malicious or unreliable
participants, resulting in the construction failure of global models or privacy
leakage of FL models. Therefore, it is crucial for FL to develop effective
defenses. In this article, we propose a blockchain-based secure FL framework
that uses smart contracts to prevent malicious or unreliable participants from
participating in FL. In doing so, the central aggregator recognizes malicious and
unreliable participants by automatically executing smart contracts to defend
against poisoning attacks. Further, we use local differential privacy
techniques to prevent membership inference attacks. Numerical results suggest
that the proposed framework can effectively deter poisoning and membership
inference attacks, thereby improving the security of FL in 5G networks.
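The abstract's second defense, local differential privacy, perturbs each participant's update before it is uploaded, so the aggregator never sees the exact local gradients. As a minimal illustrative sketch (not the authors' implementation; the function name, clipping bound, and noise mechanism here are assumptions), a per-coordinate Laplace mechanism could look like:

```python
import numpy as np

def ldp_perturb(update, clip_norm=1.0, epsilon=1.0, rng=None):
    """Perturb a local model update before upload.

    Clipping each coordinate to [-clip_norm, clip_norm] bounds the
    per-coordinate sensitivity to 2 * clip_norm; adding i.i.d. Laplace
    noise with scale sensitivity / epsilon then gives epsilon-LDP
    per coordinate.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(update, -clip_norm, clip_norm)
    scale = 2.0 * clip_norm / epsilon  # sensitivity / privacy budget
    noise = rng.laplace(loc=0.0, scale=scale, size=clipped.shape)
    return clipped + noise

# Example: perturb a toy gradient before sending it to the aggregator
noisy_update = ldp_perturb(np.array([0.3, -1.7, 0.9]), epsilon=0.5)
```

Smaller `epsilon` means more noise and stronger protection against membership inference, at the cost of slower or less accurate global model convergence.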
Related papers
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- SaFL: Sybil-aware Federated Learning with Application to Face Recognition [13.914187113334222]
Federated Learning (FL) is a machine learning paradigm to conduct collaborative learning among clients on a joint model.
On the downside, FL raises security and privacy concerns that have just started to be studied.
This paper proposes a new defense method against poisoning attacks in FL called SaFL.
arXiv Detail & Related papers (2023-11-07T21:06:06Z)
- SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks [12.580891810557482]
Federated learning (FL) is attractive for privacy-preserving distributed training.
We propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of locally purified model.
We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks.
arXiv Detail & Related papers (2023-09-19T13:31:33Z)
- FLShield: A Validation Based Federated Learning Framework to Defend Against Poisoning Attacks [1.8925617030516926]
Federated learning (FL) is being used in many safety-critical domains such as autonomous vehicles and healthcare.
We propose a novel FL framework dubbed as FLShield that utilizes benign data from FL participants to validate the local models.
We conduct extensive experiments to evaluate our FLShield framework in different settings and demonstrate its effectiveness in thwarting various types of poisoning and backdoor attacks.
arXiv Detail & Related papers (2023-08-10T19:29:44Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- WW-FL: Secure and Private Large-Scale Federated Learning [15.412475066687723]
Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices.
Recent research has uncovered vulnerabilities in FL, impacting both security and privacy through poisoning attacks.
We propose WW-FL, an innovative framework that combines secure multi-party computation with hierarchical FL to guarantee data and global model privacy.
arXiv Detail & Related papers (2023-02-20T11:02:55Z)
- Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks [68.20436971825941]
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
arXiv Detail & Related papers (2022-09-08T21:01:42Z)
- Challenges and approaches for mitigating byzantine attacks in federated learning [6.836162272841266]
Federated learning (FL) is an attractive distributed learning framework in which numerous wireless end-user devices can train a global model while their data remain local.
Despite this promising prospect, Byzantine attacks, an intractable threat in conventional distributed networks, are found to be quite effective against FL as well.
We propose a new byzantine attack method called weight attack to defeat those defense schemes, and conduct experiments to demonstrate its threat.
arXiv Detail & Related papers (2021-12-29T09:24:05Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
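Several of the related papers above (FreqFed, SaFL, SPFL, FLShield) defend against poisoning by filtering anomalous local updates before aggregation. As a toy sketch in the spirit of FreqFed's frequency-domain idea (this is not the paper's actual mechanism; the transform choice, fingerprint size, and median-distance filter are all assumptions made for illustration):

```python
import numpy as np

def frequency_fingerprint(update, k=8):
    """Project a flattened update onto the magnitudes of its k
    lowest-frequency real-FFT components, which are assumed here to
    carry the dominant benign training signal."""
    spectrum = np.fft.rfft(update.ravel())
    return np.abs(spectrum[:k])

def filter_updates(updates, k=8):
    """Keep updates whose low-frequency fingerprint is close to the
    coordinate-wise median fingerprint, then average the survivors."""
    fps = np.array([frequency_fingerprint(u, k) for u in updates])
    median_fp = np.median(fps, axis=0)
    dists = np.linalg.norm(fps - median_fp, axis=1)
    keep = dists <= np.median(dists)  # retain the closer half
    return np.mean([u for u, ok in zip(updates, keep) if ok], axis=0)
```

With four benign updates near `np.ones(16)` and one poisoned update scaled by 100, the poisoned update's fingerprint sits far from the median and is excluded, so the aggregate stays close to the benign mean.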
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.