Protecting Data from all Parties: Combining FHE and DP in Federated Learning
- URL: http://arxiv.org/abs/2205.04330v1
- Date: Mon, 9 May 2022 14:33:44 GMT
- Title: Protecting Data from all Parties: Combining FHE and DP in Federated Learning
- Authors: Arnaud Grivet Sébert, Renaud Sirdey, Oana Stan, Cédric Gouy-Pailler
- Abstract summary: We propose a secure framework addressing an extended threat model with respect to privacy of the training data.
The proposed framework protects the privacy of the training data from all participants, namely the training data owners and an aggregating server.
By means of a novel stochastic quantization operator, we prove differential privacy guarantees in a context where the noise is quantized and bounded due to the use of homomorphic encryption.
- Score: 0.09176056742068812
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper tackles the problem of ensuring training data privacy in a
federated learning context. Relying on Fully Homomorphic Encryption (FHE) and
Differential Privacy (DP), we propose a secure framework addressing an extended
threat model with respect to privacy of the training data. Notably, the
proposed framework protects the privacy of the training data from all
participants, namely the training data owners and an aggregating server. In
detail, while homomorphic encryption blinds a semi-honest server during the
learning stage, differential privacy protects the data from semi-honest clients
participating in the training process as well as from curious end-users with
black-box or white-box access to the trained model. This paper provides
new theoretical and practical results to enable these techniques to be
effectively combined. In particular, by means of a novel stochastic
quantization operator, we prove differential privacy guarantees in a context
where the noise is quantized and bounded due to the use of homomorphic
encryption. The paper concludes with experiments showing that the entire
framework remains practical despite these interferences, in terms of both model
quality (impacted by DP) and computational overhead (impacted by FHE).
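The interplay described above hinges on making the DP noise compatible with the integer arithmetic that homomorphic encryption imposes. Below is a minimal sketch of that idea in plain Python, with a mock additive aggregation standing in for actual FHE; the operator, parameter names, and noise distribution are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def stochastic_quantize(x, scale=2**10, rng=None):
    """Unbiased stochastic rounding of real values onto an integer grid.

    E[round(x * scale)] == x * scale, so quantization adds no bias to the
    aggregated update (illustrative sketch, not the paper's exact operator).
    """
    rng = rng or np.random.default_rng()
    y = x * scale
    low = np.floor(y)
    # Round up with probability equal to the fractional part.
    return (low + (rng.random(y.shape) < (y - low))).astype(np.int64)

def bounded_discrete_noise(shape, b=50, rng=None):
    """Integer noise with bounded support [-b, b], as needed when every
    value must fit a fixed plaintext modulus under homomorphic encryption."""
    rng = rng or np.random.default_rng()
    return rng.integers(-b, b + 1, size=shape)

def aggregate(client_grads, scale=2**10):
    """Mock of the encrypted aggregation: each client quantizes and noises
    its gradient; the server only ever sums the (conceptually encrypted)
    integer vectors and never sees an individual contribution."""
    rng = np.random.default_rng(0)
    enc_sum = sum(
        stochastic_quantize(g, scale, rng) + bounded_discrete_noise(g.shape, rng=rng)
        for g in client_grads
    )
    return enc_sum / (scale * len(client_grads))  # de-quantize the average

grads = [np.random.randn(4) for _ in range(5)]
print(aggregate(grads))
```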
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
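A minimal illustration of the selective-noise idea in the entry above: Gaussian noise is applied only inside a sensitivity mask, leaving non-sensitive regions untouched. The mask semantics and noise parameters here are assumptions for illustration, not the paper's method.

```python
import numpy as np

def masked_dp_noise(data, sensitive_mask, sigma=1.0, rng=None):
    """Add Gaussian noise only where the mask marks data as sensitive.

    `sensitive_mask` is a boolean array of the same shape as `data`;
    non-sensitive regions pass through unchanged (illustrative sketch).
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=data.shape)
    return np.where(sensitive_mask, data + noise, data)

frame = np.random.rand(8, 8)          # toy image frame
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True                 # e.g. a detected sensitive region
print(masked_dp_noise(frame, mask, sigma=0.5))
```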
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods directly centralize training data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Pencil: Private and Extensible Collaborative Learning without the Non-Colluding Assumption [24.339382371386876]
Pencil is the first private training framework for collaborative learning that simultaneously offers data privacy, model privacy, and extensibility to multiple data providers.
We introduce several novel cryptographic protocols to realize this design principle and conduct a rigorous security and privacy analysis.
Pencil achieves 10-260x higher throughput and two orders of magnitude less communication than prior art.
arXiv Detail & Related papers (2024-03-17T10:26:41Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
However, FedIT faces limitations such as the scarcity of instruction data and the risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- When approximate design for fast homomorphic computation provides differential privacy guarantees [0.08399688944263842]
Differential privacy (DP) and cryptographic primitives are popular countermeasures against privacy attacks.
In this paper, we design SHIELD, a probabilistic approximation algorithm for the argmax operator.
Although SHIELD could have other applications, we focus here on a single setting and seamlessly integrate it into the SPEED collaborative training framework.
arXiv Detail & Related papers (2023-04-06T09:38:01Z)
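SHIELD's own construction is not reproduced here; as a generic stand-in, the classic report-noisy-max mechanism below shows how a randomized argmax can yield differential privacy. This is the textbook mechanism, not the SHIELD algorithm.

```python
import numpy as np

def report_noisy_max(scores, epsilon, sensitivity=1.0, rng=None):
    """Textbook report-noisy-max: add Laplace noise to each score and return
    the index of the largest noisy value. Satisfies epsilon-DP when a single
    individual changes each score by at most `sensitivity`.
    (Generic stand-in for a randomized argmax; not the SHIELD algorithm.)
    """
    rng = rng or np.random.default_rng()
    noisy = np.asarray(scores, dtype=float) + rng.laplace(
        0.0, 2.0 * sensitivity / epsilon, size=len(scores)
    )
    return int(np.argmax(noisy))

votes = [120, 115, 90, 40]   # e.g. per-label teacher votes
print(report_noisy_max(votes, epsilon=1.0))
```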
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Numerous attacks have shown that it is still possible to infer sensitive information, such as membership or properties of participant data, or even to reconstruct that data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
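The claim above, that simple linear models suffice to read client-specific properties out of aggregated updates, can be illustrated with an ordinary logistic regression trained on observed update vectors. This is a toy reconstruction of the attack setting; the feature construction and labels are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setting: aggregated model updates carry a weak linear signal that
# correlates with a binary property of one target client (assumption made
# for illustration; real attacks train on shadow-model updates).
n_rounds, dim = 200, 32
property_bit = rng.integers(0, 2, size=n_rounds)        # target's property
signal = rng.normal(0.3, 0.05, size=(n_rounds, dim))    # property-dependent drift
updates = rng.normal(0.0, 1.0, size=(n_rounds, dim)) + property_bit[:, None] * signal

# A plain linear classifier on the aggregated updates recovers the property.
attack = LogisticRegression(max_iter=1000).fit(updates[:150], property_bit[:150])
print("attack accuracy:", attack.score(updates[150:], property_bit[150:]))
```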
- Privacy-Preserving Wavelet Neural Network with Fully Homomorphic Encryption [5.010425616264462]
Privacy-Preserving Machine Learning (PPML) aims to protect privacy and provide security for the data used in building machine learning models.
We propose a fully homomorphically encrypted wavelet neural network that protects privacy without compromising model efficiency.
arXiv Detail & Related papers (2022-05-26T10:40:31Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- A Privacy-Preserving and Trustable Multi-agent Learning Framework [34.28936739262812]
This paper presents Privacy-preserving and trustable Distributed Learning (PT-DL).
PT-DL is a fully decentralized framework that relies on Differential Privacy to guarantee strong privacy protections of the agents' data.
The paper shows that, with high probability, PT-DL is resilient to collusion attacks involving up to 50% of the agents in a malicious trust model.
arXiv Detail & Related papers (2021-06-02T15:46:27Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.