When approximate design for fast homomorphic computation provides
differential privacy guarantees
- URL: http://arxiv.org/abs/2304.02959v1
- Date: Thu, 6 Apr 2023 09:38:01 GMT
- Title: When approximate design for fast homomorphic computation provides
differential privacy guarantees
- Authors: Arnaud Grivet Sébert, Martin Zuber, Oana Stan, Renaud Sirdey,
Cédric Gouy-Pailler
- Abstract summary: Differential privacy (DP) and cryptographic primitives are popular countermeasures against privacy attacks.
In this paper, we design SHIELD, a probabilistic approximation algorithm for the argmax operator.
Although SHIELD could have other applications, we focus here on one setting and seamlessly integrate it into the SPEED collaborative training framework.
- Score: 0.08399688944263842
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: While machine learning has become pervasive in fields as diverse as
industry, healthcare, and social networks, privacy concerns regarding the
training data have gained critical importance. In settings where several
parties wish to collaboratively train a common model without jeopardizing
their sensitive data, the need for a private training protocol is particularly
stringent and requires protecting the data against both the model's end-users
and the actors of the training phase. Differential privacy (DP) and
cryptographic primitives are popular, complementary countermeasures against
privacy attacks. Among these
cryptographic primitives, fully homomorphic encryption (FHE) offers ciphertext
malleability at the cost of time-consuming operations in the homomorphic
domain. In this paper, we design SHIELD, a probabilistic approximation
algorithm for the argmax operator which is fast when homomorphically executed
and whose inaccuracy is used as a feature to ensure DP guarantees. Although
SHIELD could have other applications, we focus here on one setting and
seamlessly integrate it into the SPEED collaborative training framework from
"SPEED: Secure, PrivatE, and Efficient Deep learning" (Grivet Sébert et al.,
2021) to improve its computational efficiency. After thoroughly describing the
FHE implementation of our algorithm and its DP analysis, we present
experimental results. To the best of our knowledge, this is the first work in
which relaxing the accuracy of a homomorphic calculation is constructively
usable as a degree of freedom to achieve better FHE performance.
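The abstract only sketches the idea that an inexact argmax can, by itself, provide privacy. As a rough illustration (not the SHIELD algorithm, whose randomness comes from the homomorphic approximation itself), the classical report-noisy-max / exponential mechanism shows how randomizing an argmax yields a differentially private selection; the `epsilon` and `sensitivity` parameters below are illustrative placeholders.

```python
import numpy as np

def noisy_argmax(scores, epsilon, sensitivity=1.0, rng=None):
    """Randomized argmax whose inaccuracy yields differential privacy.

    Illustrative analogue of a probabilistic argmax (NOT the SHIELD
    algorithm from the paper): adding Gumbel noise to the scores and
    taking the argmax is equivalent to the exponential mechanism and is
    epsilon-DP when one record can change any score by at most
    `sensitivity`.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    # The Gumbel noise scale sets the privacy/accuracy trade-off.
    noise = rng.gumbel(loc=0.0, scale=2.0 * sensitivity / epsilon, size=scores.shape)
    return int(np.argmax(scores + noise))

# Example: aggregated label votes for 10 classes; smaller epsilon gives
# a noisier, more private argmax.
votes = [3, 41, 5, 0, 2, 38, 1, 0, 4, 6]
label = noisy_argmax(votes, epsilon=0.5)
```

In SHIELD, by contrast, no noise is explicitly added: the approximation error of the fast homomorphic argmax plays the role of the noise in the DP analysis.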
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive temporal regions where DP is not applied, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Practical, Private Assurance of the Value of Collaboration via Fully Homomorphic Encryption [3.929854470352013]
Two parties wish to collaborate on their datasets.
One party is promised an improvement on its prediction model by incorporating data from the other party.
The parties would only wish to collaborate further if the updated model shows an improvement in accuracy.
arXiv Detail & Related papers (2023-10-04T03:47:21Z) - PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind)
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z) - Practical Privacy-Preserving Gaussian Process Regression via Secret
Sharing [23.80837224347696]
This paper proposes a privacy-preserving GPR method based on secret sharing (SS)
We derive a new SS-based exponentiation operation through the idea of 'confusion-correction' and construct an SS-based matrix inversion algorithm based on Cholesky decomposition.
Empirical results show that our proposed method can achieve reasonable accuracy and efficiency under the premise of preserving data privacy.
arXiv Detail & Related papers (2023-06-26T08:17:51Z) - Theoretically Principled Federated Learning for Balancing Privacy and
Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive
Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - THE-X: Privacy-Preserving Transformer Inference with Homomorphic
Encryption [112.02441503951297]
Privacy-preserving inference of transformer models is in demand among cloud service users.
We introduce THE-X, an approximation approach for transformers, which enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z) - Protecting Data from all Parties: Combining FHE and DP in Federated
Learning [0.09176056742068812]
We propose a secure framework addressing an extended threat model with respect to privacy of the training data.
The proposed framework protects the privacy of the training data from all participants, namely the training data owners and an aggregating server.
By means of a novel quantization operator, we prove differential privacy guarantees in a context where the noise is quantified and bounded due to the use of homomorphic encryption.
arXiv Detail & Related papers (2022-05-09T14:33:44Z) - Differentially Private Federated Learning on Heterogeneous Data [10.431137628048356]
Federated Learning (FL) is a paradigm for large-scale distributed learning.
It faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of participating users.
We propose a novel FL approach to tackle these two challenges together by incorporating Differential Privacy (DP) constraints.
arXiv Detail & Related papers (2021-11-17T18:23:49Z) - SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z) - User-Level Privacy-Preserving Federated Learning: Analysis and
Performance Optimization [77.43075255745389]
Federated learning (FL) can keep the private data of mobile terminals (MTs) local while training useful models from that data.
From an information-theoretic viewpoint, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
arXiv Detail & Related papers (2020-02-29T10:13:39Z)
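The UDP entry above describes adding artificial noise to the shared models before upload. A minimal sketch of that generic clip-and-noise step is given below, assuming a flattened model update; the clipping bound and noise multiplier are hypothetical values, not the paper's calibration.

```python
import numpy as np

def privatize_update(model_update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a local model update and add Gaussian noise before upload.

    Minimal sketch of the 'add artificial noise to shared models before
    uploading' idea (user-level DP); the clipping bound and noise scale
    are illustrative placeholders, not the paper's calibration.
    """
    rng = np.random.default_rng() if rng is None else rng
    update = np.asarray(model_update, dtype=float)
    # Bound each user's contribution so the noise scale can be calibrated.
    norm = np.linalg.norm(update)
    update = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Gaussian noise proportional to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return update + noise

# Example: a flattened parameter delta from one mobile terminal.
local_delta = np.random.randn(1000) * 0.01
upload = privatize_update(local_delta, clip_norm=0.5, noise_multiplier=1.2)
```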