Privacy-Preserving Federated Learning via Homomorphic Adversarial Networks
- URL: http://arxiv.org/abs/2412.01650v2
- Date: Tue, 03 Dec 2024 05:46:35 GMT
- Title: Privacy-Preserving Federated Learning via Homomorphic Adversarial Networks
- Authors: Wenhan Dong, Chao Lin, Xinlei He, Xinyi Huang, Shengmin Xu
- Abstract summary: Homomorphic Adversarial Networks (HANs) are robust against privacy attacks.
Compared to traditional MK-HE schemes, HANs increase encryption aggregation speed by 6,075 times while incurring a 29.2 times increase in communication overhead.
- Score: 23.901391258240597
- License:
- Abstract: Privacy-preserving federated learning (PPFL) aims to train a global model for multiple clients while maintaining their data privacy. However, current PPFL protocols exhibit one or more of the following insufficiencies: considerable degradation in accuracy, the requirement for sharing keys, and cooperation during the key generation or decryption processes. As a mitigation, we develop the first protocol that utilizes neural networks to implement PPFL and incorporate an Aggregatable Hybrid Encryption scheme tailored to the needs of PPFL. We name these networks Homomorphic Adversarial Networks (HANs); they demonstrate that neural networks can perform tasks similar to multi-key homomorphic encryption (MK-HE) while solving the problems of key distribution and collaborative decryption. Our experiments show that HANs are robust against privacy attacks. Compared with non-private federated learning, experiments conducted on multiple datasets demonstrate that HANs exhibit a negligible accuracy loss (at most 1.35%). Compared to traditional MK-HE schemes, HANs increase encryption aggregation speed by 6,075 times while incurring a 29.2 times increase in communication overhead.
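The abstract describes a secure-aggregation workflow: each client hides its model update before upload, and the server recovers only the sum of all updates. The toy sketch below shows that data flow using a zero-sum masking stand-in; it is not the paper's learned HAN encryption, and all names and parameters are illustrative.

```python
# Illustrative sketch of the aggregation workflow described in the abstract
# (clients hide individual updates; the server can only recover their sum).
# HANs realize this with learned encryption networks; a zero-sum masking
# stand-in is used here purely to show the data flow, not the construction.
import numpy as np

rng = np.random.default_rng(0)

def mask_updates(updates):
    """Hide each client's update with random masks that cancel in the sum."""
    n = len(updates)
    masks = [rng.normal(size=updates[0].shape) for _ in range(n - 1)]
    masks.append(-np.sum(masks, axis=0))          # masks sum to zero
    return [u + m for u, m in zip(updates, masks)]

def server_aggregate(masked_updates):
    """The server sums masked updates; individual updates stay hidden."""
    return np.sum(masked_updates, axis=0)

client_updates = [rng.normal(size=4) for _ in range(3)]   # toy gradients
aggregate = server_aggregate(mask_updates(client_updates))
assert np.allclose(aggregate, np.sum(client_updates, axis=0))
```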
Related papers
- A New Federated Learning Framework Against Gradient Inversion Attacks [17.3044168511991]
Federated Learning (FL) aims to protect data privacy by enabling clients to collectively train machine learning models without sharing their raw data.
Recent studies demonstrate that information exchanged during FL is subject to Gradient Inversion Attacks (GIA).
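As a minimal illustration of why shared gradients can leak training data (not the specific attack studied in this paper): for a single sample passing through a dense layer, the weight gradient is an outer product of the error signal and the input, so the input can be read off directly. Real gradient inversion attacks are optimization-based, but the leakage source is the same.

```python
# Toy gradient-leakage demonstration for one sample and a dense layer with
# squared-error loss; all shapes and values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5)                        # private input
W, b = rng.normal(size=(3, 5)), rng.normal(size=3)
target = rng.normal(size=3)

z = W @ x + b
delta = 2 * (z - target)                      # dL/dz for squared-error loss
grad_W = np.outer(delta, x)                   # gradient a client would share
grad_b = delta

i = int(np.argmax(np.abs(grad_b)))            # pick a row with non-zero error
x_recovered = grad_W[i] / grad_b[i]           # server-side reconstruction
assert np.allclose(x_recovered, x)
```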
arXiv Detail & Related papers (2024-12-10T04:53:42Z)
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has gained lots of traction recently, both in industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
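For context, the sketch below shows additive secret sharing, the basic primitive behind secret-sharing-based secure aggregation; the paper uses the more advanced packed Shamir sharing, which this toy does not implement, and the modulus and values are illustrative.

```python
# Minimal additive secret sharing over a prime field.
import random

P = 2**61 - 1  # illustrative large prime modulus

def share(secret, n):
    """Split an integer secret into n additive shares modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Each client shares its (integer-encoded) update; servers add shares
# locally, and only the sum of all updates is ever reconstructed.
updates = [7, 11, 5]
n_servers = 3
share_matrix = [share(u, n_servers) for u in updates]
summed_shares = [sum(col) % P for col in zip(*share_matrix)]
assert reconstruct(summed_shares) == sum(updates) % P
```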
arXiv Detail & Related papers (2024-10-21T16:25:14Z)
- An Efficient and Multi-private Key Secure Aggregation for Federated Learning [41.29971745967693]
We propose an efficient and multi-private key secure aggregation scheme for federated learning.
Specifically, we modify a variant of the ElGamal encryption scheme to achieve a homomorphic addition operation.
For high-dimensional deep model parameters, we introduce a super-increasing sequence to compress multi-dimensional data into one dimension.
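The dimension-packing idea can be sketched as follows: a sufficiently large radix lets many small coordinates ride in one integer, so a single additively homomorphic ciphertext (e.g. exponential ElGamal, as this paper uses) aggregates a whole vector at once. The parameters below are illustrative, not the paper's.

```python
# Packing a vector of small non-negative integers into one integer so that
# integer addition aggregates all coordinates element-wise.
N_CLIENTS = 10
MAX_COORD = 1000
RADIX = N_CLIENTS * MAX_COORD + 1      # each summed coordinate stays < RADIX

def pack(vec):
    """Encode a vector of small non-negative ints as one integer."""
    return sum(v * RADIX**i for i, v in enumerate(vec))

def unpack(packed, dim):
    out = []
    for _ in range(dim):
        packed, v = divmod(packed, RADIX)
        out.append(v)
    return out

vectors = [[3, 14, 150], [9, 26, 500], [1, 7, 999]]
aggregate = sum(pack(v) for v in vectors)   # addition done "under encryption"
assert unpack(aggregate, 3) == [sum(c) for c in zip(*vectors)]
```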
arXiv Detail & Related papers (2023-06-15T09:05:36Z)
- FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users [19.209830150036254]
The federated learning (FL) technique was developed to mitigate data privacy issues in the traditional machine learning paradigm.
Next-generation FL architectures proposed encryption and anonymization techniques to protect the model updates from the server.
This paper proposes a novel FL algorithm based on a fully homomorphic encryption (FHE) scheme.
arXiv Detail & Related papers (2023-06-08T11:20:00Z)
- When approximate design for fast homomorphic computation provides differential privacy guarantees [0.08399688944263842]
Differential privacy (DP) and cryptographic primitives are popular countermeasures against privacy attacks.
In this paper, we design SHIELD, a probabilistic approximation algorithm for the argmax operator.
Although SHIELD could have other applications, we focus here on one setting and seamlessly integrate it into the SPEED collaborative training framework.
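As a generic illustration of how a randomized argmax can provide differential privacy, the sketch below uses report-noisy-max with Gumbel noise (equivalent to the exponential mechanism); SHIELD's actual construction is tailored to homomorphic evaluation and differs in its details, and the parameters here are illustrative.

```python
# Generic epsilon-DP argmax via Gumbel noise (report-noisy-max).
import numpy as np

def noisy_argmax(scores, epsilon, sensitivity=1.0, rng=None):
    """Return an index that approximately maximizes the scores under DP."""
    rng = rng or np.random.default_rng()
    noise = rng.gumbel(scale=2.0 * sensitivity / epsilon, size=len(scores))
    return int(np.argmax(np.asarray(scores) + noise))

scores = [0.1, 0.7, 0.65, 0.2]
print(noisy_argmax(scores, epsilon=1.0))   # usually 1, sometimes 2
```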
arXiv Detail & Related papers (2023-04-06T09:38:01Z)
- On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks [20.59493611017851]
We study the privacy implications of deploying recurrent neural networks in machine learning.
We consider membership inference attacks (MIAs) in which an attacker aims to infer whether a given data record has been used in the training of a learning agent.
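A minimal baseline of such an attack, shown below, is the loss-threshold test (a standard baseline, not the specific attack studied in the paper): records the model fits unusually well are guessed to be training members. The values here are illustrative.

```python
# Loss-threshold membership inference: predict "member" when the model's
# per-example loss falls below a threshold.
import numpy as np

def loss_threshold_mia(per_example_losses, threshold):
    """Return True (member) where the loss falls below the threshold."""
    return np.asarray(per_example_losses) < threshold

# Toy scores: members tend to have lower loss than non-members.
member_losses = np.array([0.05, 0.10, 0.08])
nonmember_losses = np.array([0.90, 0.60, 1.20])
print(loss_threshold_mia(member_losses, 0.5))     # mostly True
print(loss_threshold_mia(nonmember_losses, 0.5))  # mostly False
```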
arXiv Detail & Related papers (2021-10-06T20:20:35Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
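The round mechanism this line of work analyzes can be sketched as follows: each client's update is clipped to a norm bound C, and Gaussian noise calibrated to C is added to the average. The hyperparameters below are illustrative, not the paper's.

```python
# Sketch of one client-level DP-FedAvg aggregation round.
import numpy as np

def clip(update, clip_norm):
    """Scale the update so its L2 norm does not exceed clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_fedavg_round(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    rng = rng or np.random.default_rng()
    clipped = [clip(u, clip_norm) for u in client_updates]
    noise = rng.normal(scale=noise_multiplier * clip_norm,
                       size=clipped[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(client_updates)

rng = np.random.default_rng(0)
updates = [rng.normal(size=6) for _ in range(8)]
print(dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=0.5, rng=rng))
```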
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Sphynx: ReLU-Efficient Network Design for Private Inference [49.73927340643812]
We focus on private inference (PI), where the goal is to perform inference on a user's data sample using a service provider's model.
Existing PI methods for deep networks enable cryptographically secure inference with little drop in functionality.
This paper presents Sphynx, a ReLU-efficient network design method based on micro-search strategies for convolutional cell design.
arXiv Detail & Related papers (2021-06-17T18:11:10Z)
- NeuraCrypt: Hiding Private Health Data via Random Neural Networks for Public Training [64.54200987493573]
We propose NeuraCrypt, a private encoding scheme based on random deep neural networks.
NeuraCrypt encodes raw patient data using a randomly constructed neural network known only to the data-owner.
We show that NeuraCrypt achieves competitive accuracy to non-private baselines on a variety of x-ray tasks.
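The idea as described in the abstract can be sketched as pushing data through a randomly initialized network known only to the data owner and releasing only the encoded features for training. The architecture and dimensions below are a stand-in, not the paper's exact encoder.

```python
# Sketch of encoding data with a fixed, randomly constructed network whose
# random seed plays the role of the owner's secret.
import numpy as np

class RandomEncoder:
    def __init__(self, in_dim, hidden_dim, out_dim, seed):
        rng = np.random.default_rng(seed)        # seed acts as the secret key
        self.W1 = rng.normal(size=(in_dim, hidden_dim)) / np.sqrt(in_dim)
        self.W2 = rng.normal(size=(hidden_dim, out_dim)) / np.sqrt(hidden_dim)

    def encode(self, x):
        h = np.maximum(x @ self.W1, 0.0)          # ReLU hidden layer
        return h @ self.W2

owner_encoder = RandomEncoder(in_dim=32, hidden_dim=64, out_dim=16, seed=1234)
patient_batch = np.random.default_rng(0).normal(size=(4, 32))
released = owner_encoder.encode(patient_batch)    # shared instead of raw data
```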
arXiv Detail & Related papers (2021-06-04T13:42:21Z)
- Experimental quantum conference key agreement [55.41644538483948]
Quantum networks will provide multi-node entanglement over long distances to enable secure communication on a global scale.
Here we demonstrate quantum conference key agreement, a quantum communication protocol that exploits multi-partite entanglement.
We distribute four-photon Greenberger-Horne-Zeilinger (GHZ) states generated by high-brightness, telecom photon-pair sources across up to 50 km of fibre.
arXiv Detail & Related papers (2020-02-04T19:00:31Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.