SaFL: Sybil-aware Federated Learning with Application to Face
Recognition
- URL: http://arxiv.org/abs/2311.04346v1
- Date: Tue, 7 Nov 2023 21:06:06 GMT
- Authors: Mahdi Ghafourian, Julian Fierrez, Ruben Vera-Rodriguez, Ruben
Tolosana, Aythami Morales
- Abstract summary: Federated Learning (FL) is a machine learning paradigm for conducting collaborative learning among clients on a joint model.
On the downside, FL raises security and privacy concerns that have only recently begun to be studied.
This paper proposes a new defense method against poisoning attacks in FL called SaFL.
- Score: 13.914187113334222
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL) is a machine learning paradigm for
conducting collaborative learning among clients on a joint model. The primary
goal is to share clients' local training parameters with an aggregating
server while preserving their privacy. This approach makes it possible to
exploit the potential of data from massive numbers of mobile users for the
benefit of machine learning models' performance while keeping sensitive data
on local devices. On the downside, FL raises security and privacy concerns
that have only recently begun to be studied. To address some of the key
threats in FL, researchers have proposed secure aggregation methods (e.g.,
homomorphic encryption and secure multiparty computation). These solutions
improve some security and privacy metrics, but at the same time open the
door to other serious threats such as poisoning attacks, backdoor attacks,
and free-riding attacks. This paper proposes a new defense method against
poisoning attacks in FL called SaFL (Sybil-aware Federated Learning) that
minimizes the effect of sybils with a novel time-variant aggregation scheme.
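
The abstract describes SaFL's defense only at a high level, so the following
is a minimal sketch of what a sybil-aware, time-variant aggregation loop
could look like, not the authors' actual algorithm: per-client trust weights
persist across rounds and decay whenever two clients submit near-duplicate
updates (a common sybil signature). The function names, the cosine-similarity
test, and the decay and threshold constants are all illustrative assumptions.

```python
import numpy as np

def cosine(u, v, eps=1e-12):
    """Cosine similarity between two flattened update vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

def sybil_aware_aggregate(updates, trust, decay=0.5, threshold=0.95):
    """Trust-weighted averaging with a time-variant sybil penalty.

    updates: dict client_id -> 1-D np.ndarray (flattened model update)
    trust:   dict client_id -> float in (0, 1], carried across rounds
    NOTE: illustrative sketch only; constants are not from the paper.
    """
    ids = list(updates)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            # Sybils controlled by one attacker tend to submit
            # near-identical updates; decay the trust of both.
            if cosine(updates[a], updates[b]) > threshold:
                trust[a] *= decay
                trust[b] *= decay
    weights = np.array([trust[c] for c in ids])
    weights = weights / weights.sum()
    return weights @ np.stack([updates[c] for c in ids]), trust

# Toy rounds: three honest clients plus two sybils pushing one direction.
rng = np.random.default_rng(0)
updates = {f"client{i}": rng.normal(size=8) for i in range(3)}
poison = rng.normal(size=8)
updates.update({f"sybil{i}": poison + 1e-3 * rng.normal(size=8)
                for i in range(2)})
trust = {c: 1.0 for c in updates}
for _ in range(3):  # the penalty compounds round after round
    global_update, trust = sybil_aware_aggregate(updates, trust)
print({c: round(t, 3) for c, t in trust.items()})
```

Because the trust scores persist and compound across rounds, a coordinated
sybil group is progressively suppressed rather than hard-rejected on one
noisy observation, which is one way to read the "time-variant" framing in
the abstract.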
Related papers
- Security and Privacy Issues of Federated Learning (2023-07-22): Federated
Learning (FL) has emerged as a promising approach to address data privacy
and confidentiality concerns. The paper presents a comprehensive taxonomy of
security and privacy challenges in FL across various machine learning models.
- FedDefender: Client-Side Attack-Tolerant Federated Learning (2023-07-18):
Federated learning enables learning from decentralized data sources without
compromising privacy, but it is vulnerable to model poisoning attacks, in
which malicious clients interfere with the training process. The authors
propose FedDefender, a client-side defense mechanism that helps benign
clients train robust local models.
- FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated
Learning with Byzantine Users (2023-06-08): The federated learning (FL)
technique was developed to mitigate data privacy issues in the traditional
machine learning paradigm. Next-generation FL architectures have proposed
encryption and anonymization techniques to protect model updates from the
server. The paper proposes a novel FL algorithm based on a fully homomorphic
encryption (FHE) scheme; a minimal homomorphic-aggregation sketch appears
after this list.
- Shielding Federated Learning Systems against Inference Attacks with ARM
TrustZone (2022-08-11): Federated Learning (FL) opens new perspectives for
training machine learning models while keeping personal data on the users'
premises. The long list of inference attacks that leak private data from
gradients, published in recent years, has emphasized the need for effective
protection mechanisms. The authors present GradSec, a solution that protects
only the sensitive layers of a machine learning model inside a TEE.
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? (2022-02-14):
Federated learning (FL) allows the collaborative training of AI models
without the need to share raw data. Recent works on inverting deep neural
networks from model gradients have raised concerns about the ability of FL
to prevent the leakage of training data. This work shows that the attacks
presented in the literature are impractical in real FL use cases and
provides a new baseline attack.
- Decepticons: Corrupted Transformers Breach Privacy in Federated Learning
for Language Models (2022-01-29): The authors propose a novel attack that
reveals private user text by deploying malicious parameter vectors. Unlike
previous attacks on FL, it exploits characteristics of both the Transformer
architecture and the token embedding.
- RoFL: Attestable Robustness for Secure Federated Learning (2021-07-07):
Federated Learning allows a large number of clients to train a joint model
without sharing their private data. To ensure the confidentiality of client
updates, FL systems employ secure aggregation. The authors present RoFL, a
secure FL system that improves robustness against malicious clients.
- Meta Federated Learning (2021-02-10): Federated Learning (FL) is vulnerable
to training-time adversarial attacks. The authors propose Meta Federated
Learning (Meta-FL), which is not only compatible with secure aggregation
protocols but also facilitates defense against backdoor attacks.
- Achieving Security and Privacy in Federated Learning Systems: Survey,
Research Challenges and Future Directions (2020-12-12): Federated learning
(FL) allows a server to learn a machine learning (ML) model across multiple
decentralized clients. The paper examines security and privacy attacks on FL
and critically surveys the solutions proposed in the literature to mitigate
each attack.
- Privacy and Robustness in Federated Learning: Attacks and Defenses
(2020-12-07): The authors conduct the first comprehensive survey on this
topic. Through a concise introduction to FL and a unique taxonomy covering
1) threat models, 2) poisoning attacks and defenses for robustness, and
3) inference attacks and defenses for privacy, they provide an accessible
review of this important topic.
- A Secure Federated Learning Framework for 5G Networks (2020-05-12):
Federated Learning (FL) has been proposed as an emerging paradigm for
building machine learning models from distributed training datasets. Two
critical security threats are poisoning and membership inference attacks.
The authors propose a blockchain-based secure FL framework that uses smart
contracts to prevent malicious or unreliable participants from taking part
in FL.
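
Several of the works above, like the secure aggregation methods named in the
abstract, rely on encrypting model updates so that the server can combine
them without seeing any individual contribution. As a rough illustration
(not FheFL's actual FHE scheme or any specific paper's protocol), the sketch
below aggregates encrypted updates with the additively homomorphic Paillier
cryptosystem from the open-source python-paillier (phe) package; the single
shared keypair and tiny updates are simplifications for the demo.

```python
import numpy as np
from phe import paillier  # pip install phe

# Demo keypair. In practice, key management is the hard part: the
# aggregating server must NOT hold the private key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def encrypt_update(update):
    """Client-side: encrypt each coordinate of a flattened model update."""
    return [public_key.encrypt(float(x)) for x in update]

def aggregate_encrypted(encrypted_updates):
    """Server-side: coordinate-wise sum of ciphertexts, never decrypting.

    Paillier ciphertexts add homomorphically, so the server only ever
    sees the encrypted sum, not any individual client's update.
    """
    total = encrypted_updates[0]
    for enc in encrypted_updates[1:]:
        total = [a + b for a, b in zip(total, enc)]
    return total

clients = [np.array([0.1, -0.2]), np.array([0.3, 0.0]), np.array([-0.1, 0.5])]
encrypted_sum = aggregate_encrypted([encrypt_update(u) for u in clients])

# A key holder separate from the server decrypts only the aggregate.
avg = np.array([private_key.decrypt(c) for c in encrypted_sum]) / len(clients)
print(avg)  # ~[0.1, 0.1]
```

Note the trade-off the SaFL abstract points out: once updates are encrypted
like this, the server can no longer inspect individual contributions, which
is exactly what makes poisoning and sybil attacks harder to detect.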