Secure Aggregation Is Not All You Need: Mitigating Privacy Attacks with
Noise Tolerance in Federated Learning
- URL: http://arxiv.org/abs/2211.06324v1
- Date: Thu, 10 Nov 2022 05:13:08 GMT
- Title: Secure Aggregation Is Not All You Need: Mitigating Privacy Attacks with
Noise Tolerance in Federated Learning
- Authors: John Reuben Gilbert
- Abstract summary: Federated learning aims to preserve data privacy while creating AI models.
Current approaches rely heavily on secure aggregation protocols to achieve this.
We investigate vulnerabilities to secure aggregation that could arise if the server is fully malicious.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is a collaborative method that aims to preserve data
privacy while creating AI models. Current approaches to federated learning tend
to rely heavily on secure aggregation protocols to preserve data privacy.
However, to some degree, such protocols assume that the entity orchestrating
the federated learning process (i.e., the server) is not fully malicious or
dishonest. We investigate vulnerabilities to secure aggregation that could
arise if the server is fully malicious and attempts to obtain access to
private, potentially sensitive data. Furthermore, we provide a method to
further defend against such a malicious server, and demonstrate effectiveness
against known attacks that reconstruct data in a federated learning setting.
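As a rough illustration of the kind of defense layered on top of secure aggregation, a client can clip its update and add local noise before submitting it, so that even a server that manages to isolate one client's contribution sees only a perturbed version. This is a minimal generic sketch; the clipping and Gaussian noise shown here are illustrative choices, not the paper's exact noise-tolerance mechanism.

```python
import numpy as np

def noisy_client_update(local_update: np.ndarray, clip_norm: float = 1.0,
                        noise_std: float = 0.1, rng=None) -> np.ndarray:
    """Clip a client's update and add Gaussian noise before it is submitted
    to (secure) aggregation. Generic sketch only; the paper's actual
    noise-tolerance mechanism may differ."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(local_update)
    clipped = local_update * min(1.0, clip_norm / (norm + 1e-12))  # bound the update
    return clipped + rng.normal(0.0, noise_std, size=clipped.shape)  # local noise
```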
Related papers
- Uncovering Attacks and Defenses in Secure Aggregation for Federated Deep Learning [17.45950557331482]
Federated learning enables the collaborative learning of a global model on diverse data, preserving data locality and eliminating the need to transfer user data to a central server.
Secure aggregation protocols are designed to mask/encrypt user updates and enable a central server to aggregate the masked information.
MicroSecAgg (PoPETS 2024) proposes a single server secure aggregation protocol that aims to mitigate the high communication complexity of the existing approaches.
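As a toy illustration of how masking-based secure aggregation works, pairwise masks can be constructed so they cancel exactly in the sum, leaving the server only the aggregate. This is a simplified Bonawitz-style sketch; real protocols derive the masks via key agreement and handle client dropout.

```python
import numpy as np

def pairwise_masks(num_clients: int, dim: int, seed: int = 0):
    """Toy zero-sum masks: client i adds the (i, j) mask for j > i and
    subtracts the (j, i) mask for j < i, so all masks cancel in the sum."""
    rng = np.random.default_rng(seed)
    pair = {(i, j): rng.normal(size=dim)
            for i in range(num_clients) for j in range(i + 1, num_clients)}
    masks = []
    for i in range(num_clients):
        m = np.zeros(dim)
        for j in range(num_clients):
            if i < j:
                m += pair[(i, j)]
            elif j < i:
                m -= pair[(j, i)]
        masks.append(m)
    return masks

updates = [np.ones(4) * k for k in range(3)]          # clients' true updates
masked = [u + m for u, m in zip(updates, pairwise_masks(3, 4))]
assert np.allclose(sum(masked), sum(updates))          # server learns only the sum
```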
arXiv Detail & Related papers (2024-10-13T00:06:03Z) - Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
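For context, the simplest membership-inference baseline thresholds the model's loss on a candidate sample, exploiting the fact that training-set members are usually fitted better than non-members. The sketch below illustrates the attack class; the cited paper analyses SecAgg's privacy formally rather than running this specific attack.

```python
import numpy as np

def loss_threshold_mia(per_sample_losses: np.ndarray, threshold: float) -> np.ndarray:
    """Baseline membership-inference test: flag a sample as a training-set
    member when the model's loss on it falls below a threshold."""
    return per_sample_losses < threshold
```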
arXiv Detail & Related papers (2024-03-26T15:07:58Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
Combining adversarial training with federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - Protecting Split Learning by Potential Energy Loss [70.81375125791979]
We focus on the privacy leakage from the forward embeddings of split learning.
We propose a potential energy loss that makes the forward embeddings more 'complicated'.
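A hedged sketch of the 'potential energy' idea is shown below: a repulsion-style penalty over pairs of same-class forward embeddings, so that minimising it spreads those embeddings apart and makes them harder for an attacker to cluster. The function name and exact form are illustrative assumptions, not the paper's definition; in practice such a term would be added to the task loss with a weighting factor.

```python
import torch

def repulsion_penalty(embeddings: torch.Tensor, labels: torch.Tensor,
                      eps: float = 1e-6) -> torch.Tensor:
    """Repulsion-style ('potential energy') penalty: sum of inverse distances
    between forward embeddings that share a label. Minimising it pushes
    same-class embeddings apart. Sketch only; the cited paper's exact loss
    may be formulated differently."""
    dist = torch.cdist(embeddings, embeddings)                 # pairwise distances
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)) \
           & ~torch.eye(len(labels), dtype=torch.bool)         # same-class, no self-pairs
    if not same.any():
        return embeddings.sum() * 0.0                          # no same-class pair: zero penalty
    return (1.0 / (dist[same] + eps)).sum()
```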
arXiv Detail & Related papers (2022-10-18T06:21:11Z) - Efficient and Privacy Preserving Group Signature for Federated Learning [2.121963121603413]
Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce the threats to user data privacy.
This paper proposes an efficient and privacy-preserving protocol for FL based on group signature.
arXiv Detail & Related papers (2022-07-12T04:12:10Z) - Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered as an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
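The attack family this defense targets is gradient-matching reconstruction (DLG-style): an adversary optimises a dummy input so that its gradients match the gradients shared by a victim client. The sketch below is a schematic version of that generic attack, assuming the labels are known; it is not the specific attack or defense evaluated in the cited paper.

```python
import torch

def gradient_matching_attack(model, loss_fn, target_grads, x_shape, y,
                             steps: int = 200, lr: float = 0.1):
    """Schematic gradient-leakage reconstruction: optimise a dummy input so
    its gradients match the gradients observed from a victim client."""
    dummy_x = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy_x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        grads = torch.autograd.grad(loss_fn(model(dummy_x), y),
                                    model.parameters(), create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()                          # backprop through the gradient graph
        opt.step()
    return dummy_x.detach()                       # reconstructed candidate input
```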
arXiv Detail & Related papers (2022-06-01T21:03:28Z) - Collusion Resistant Federated Learning with Oblivious Distributed
Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
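The core arithmetic behind distributed differential privacy is that per-client noise shares add up to the target noise level: n independent Gaussian shares with variance sigma^2/n sum to variance sigma^2. Below is a minimal sketch of that idea under generic assumptions; it is not the cited oblivious protocol.

```python
import numpy as np

def distributed_noise_share(update: np.ndarray, total_sigma: float,
                            num_clients: int, rng=None) -> np.ndarray:
    """Each client adds Gaussian noise with std total_sigma / sqrt(num_clients),
    so the aggregate carries noise with std total_sigma even though no single
    party adds it all. Illustration of the general idea only."""
    rng = rng or np.random.default_rng()
    per_client_sigma = total_sigma / np.sqrt(num_clients)
    return update + rng.normal(0.0, per_client_sigma, size=update.shape)
```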
arXiv Detail & Related papers (2022-02-20T19:52:53Z) - Eluding Secure Aggregation in Federated Learning via Model Inconsistency [2.647302105102753]
Federated learning allows a set of users to train a deep neural network over their private training datasets.
We show that a malicious server can easily elude secure aggregation as if the latter were not in place.
We devise two different attacks capable of inferring information on individual private training datasets.
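A simplified numeric illustration of why model inconsistency defeats secure aggregation: if the server can make every non-target client return a (near-)zero update, the aggregate it receives is just the target's private update. The zero-gradient "dead model" trick below is an illustrative assumption; the attacks in the cited paper are more involved.

```python
import numpy as np

# The server sends the real model only to the target client and "dead" models
# (e.g., models whose outputs yield zero gradients) to everyone else, so the
# securely aggregated sum equals the target's update.
target_update = np.array([0.3, -1.2, 0.7])
other_updates = [np.zeros(3), np.zeros(3)]        # dead models -> zero updates
aggregate = target_update + sum(other_updates)    # what secure aggregation reveals
assert np.allclose(aggregate, target_update)      # aggregate == target's private update
```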
arXiv Detail & Related papers (2021-11-14T16:09:11Z) - Secure and Privacy-Preserving Federated Learning via Co-Utility [7.428782604099875]
We build a federated learning framework that offers privacy to the participating peers and security against Byzantine and poisoning attacks.
Unlike privacy protection via update aggregation, our approach preserves the values of model updates and hence the accuracy of plain federated learning.
arXiv Detail & Related papers (2021-08-04T08:58:24Z) - Secure Byzantine-Robust Machine Learning [61.03711813598128]
We propose a secure two-server protocol that offers both input privacy and Byzantine-robustness.
In addition, this protocol is communication-efficient, fault-tolerant and enjoys local differential privacy.
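A common building block for such two-server protocols is additive secret sharing: each server sees only a random-looking share of every client's update, yet the servers' per-client sums recombine into the true aggregate. The sketch below uses generic real-valued shares for illustration; real protocols work over finite fields or rings and add integrity checks.

```python
import numpy as np

def share_update(update: np.ndarray, rng=None):
    """Split an update into two additive shares for two non-colluding servers."""
    rng = rng or np.random.default_rng()
    share_a = rng.normal(size=update.shape)       # random mask held by server A
    share_b = update - share_a                    # complementary share for server B
    return share_a, share_b

updates = [np.array([1.0, 2.0]), np.array([-0.5, 0.5])]
shares = [share_update(u) for u in updates]
agg_a = sum(s[0] for s in shares)                 # server A sums its shares
agg_b = sum(s[1] for s in shares)                 # server B sums its shares
assert np.allclose(agg_a + agg_b, sum(updates))   # combined sums give the aggregate
```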
arXiv Detail & Related papers (2020-06-08T16:55:15Z)