Preserving Privacy and Security in Federated Learning
- URL: http://arxiv.org/abs/2202.03402v3
- Date: Tue, 29 Aug 2023 00:03:00 GMT
- Title: Preserving Privacy and Security in Federated Learning
- Authors: Truc Nguyen, My T. Thai
- Abstract summary: We develop a principled framework that offers both privacy guarantees for users and detection of poisoning attacks launched by them.
Our framework enables the central server to identify poisoned model updates without violating the privacy guarantees of secure aggregation.
- Score: 21.241705771577116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is known to be vulnerable to both security and privacy
issues. Existing research has focused either on preventing poisoning attacks
from users or on concealing the local model updates from the server, but not
both. However, integrating these two lines of research remains a crucial
challenge since they often conflict with one another with respect to the threat
model. In this work, we develop a principled framework that offers both privacy
guarantees for users and detection of poisoning attacks launched by them. With a
new threat model that includes both an honest-but-curious server and malicious
users, we first propose a secure aggregation protocol using homomorphic
encryption for the server to combine local model updates in a private manner.
Then, a zero-knowledge proof protocol is leveraged to shift the task of
detecting attacks in the local models from the server to the users. The key
observation here is that the server no longer needs access to the local models
for attack detection. Therefore, our framework enables the central server to
identify poisoned model updates without violating the privacy guarantees of
secure aggregation.
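For intuition, below is a minimal sketch of the additively homomorphic aggregation step described above. It is not the authors' implementation: it uses the open-source python-paillier (`phe`) package as a stand-in encryption scheme, an arbitrary fixed-point scale, and a plain client-side norm-bound check (`NORM_BOUND`) as a placeholder for the zero-knowledge proof that clients would actually produce.

```python
# Illustrative sketch only, NOT the paper's protocol. Assumptions: the `phe`
# (python-paillier) package stands in for the homomorphic scheme, and a plain
# norm check stands in for the zero-knowledge proof. Install: pip install phe numpy
import numpy as np
from phe import paillier

NORM_BOUND = 10.0   # hypothetical per-client bound the ZKP would attest to
SCALE = 1_000_000   # fixed-point scaling, since Paillier encrypts integers

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def client_update(local_update: np.ndarray):
    """Client side: check the predicate the ZKP would prove, then encrypt."""
    # In the paper's framework the client would attach a zero-knowledge proof
    # that its plaintext update satisfies the detection predicate; here we
    # only evaluate the predicate directly as a placeholder.
    assert np.linalg.norm(local_update) <= NORM_BOUND, "update rejected locally"
    quantized = [int(round(float(x) * SCALE)) for x in local_update]
    return [public_key.encrypt(q) for q in quantized]

def server_aggregate(encrypted_updates):
    """Server side: sum ciphertexts coordinate-wise without decrypting any
    individual client's update (additive homomorphism)."""
    agg = encrypted_updates[0]
    for enc in encrypted_updates[1:]:
        agg = [a + b for a, b in zip(agg, enc)]
    return agg

# Toy run with three clients and a 4-dimensional "model update".
clients = [np.random.uniform(-0.5, 0.5, size=4) for _ in range(3)]
encrypted = [client_update(u) for u in clients]
encrypted_sum = server_aggregate(encrypted)

# Whoever holds the decryption key (in practice, not the curious server alone)
# recovers only the aggregate, never an individual update.
aggregate = np.array([private_key.decrypt(c) for c in encrypted_sum]) / SCALE
print("aggregate:", aggregate)
print("expected :", sum(clients))
```

In the actual framework, the server would verify the clients' zero-knowledge proofs before aggregating and would not hold the decryption key alone; the sketch only shows why the homomorphic sum keeps individual updates hidden from the aggregator.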
Related papers
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
However, it remains vulnerable to model poisoning attacks, in which malicious clients interfere with the training process.
We propose a new client-side defense mechanism, called FedDefender, that helps benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - A New Implementation of Federated Learning for Privacy and Security
Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting.
Since no local data needs to be shared, privacy can be well protected.
We propose a model update based federated averaging algorithm to defend against Byzantine attacks.
arXiv Detail & Related papers (2022-08-03T03:13:19Z) - Eluding Secure Aggregation in Federated Learning via Model Inconsistency [2.647302105102753]
Federated learning allows a set of users to train a deep neural network over their private training datasets.
We show that a malicious server can easily elude secure aggregation as if the latter were not in place.
We devise two different attacks capable of inferring information on individual private training datasets.
arXiv Detail & Related papers (2021-11-14T16:09:11Z) - Secure and Privacy-Preserving Federated Learning via Co-Utility [7.428782604099875]
We build a federated learning framework that offers privacy to the participating peers and security against Byzantine and poisoning attacks.
Unlike privacy protection via update aggregation, our approach preserves the values of model updates and hence the accuracy of plain federated learning.
arXiv Detail & Related papers (2021-08-04T08:58:24Z) - Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject the hidden backdoor for infecting speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z) - Learning to Detect Malicious Clients for Robust Federated Learning [20.5238037608738]
Federated learning systems are vulnerable to attacks from malicious clients.
We propose a new framework for robust federated learning where the central server learns to detect and remove the malicious model updates.
arXiv Detail & Related papers (2020-02-01T14:09:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences of its use.