Practical Privacy-Preserving Gaussian Process Regression via Secret
Sharing
- URL: http://arxiv.org/abs/2306.14498v1
- Date: Mon, 26 Jun 2023 08:17:51 GMT
- Title: Practical Privacy-Preserving Gaussian Process Regression via Secret
Sharing
- Authors: Jinglong Luo, Yehong Zhang, Jiaqi Zhang, Shuang Qin, Hui Wang, Yue Yu,
Zenglin Xu
- Abstract summary: This paper proposes a privacy-preserving GPR method based on secret sharing (SS).
We derive a new SS-based exponentiation operation through the idea of 'confusion-correction' and construct an SS-based matrix inversion algorithm based on Cholesky decomposition.
Empirical results show that our proposed method achieves reasonable accuracy and efficiency while preserving data privacy.
- Score: 23.80837224347696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian process regression (GPR) is a non-parametric model that has been
used in many real-world applications that involve sensitive personal data
(e.g., healthcare, finance, etc.) from multiple data owners. To fully and
securely exploit the value of different data sources, this paper proposes a
privacy-preserving GPR method based on secret sharing (SS), a secure
multi-party computation (SMPC) technique. In contrast to existing studies that
protect the data privacy of GPR via homomorphic encryption, differential
privacy, or federated learning, our proposed method is more practical and can
be used to preserve the data privacy of both the model inputs and outputs for
various data-sharing scenarios (e.g., horizontally/vertically-partitioned
data). However, it is non-trivial to apply SS directly to the conventional GPR
algorithm, as it involves operations whose accuracy and/or efficiency are not
well supported by existing SMPC protocols. To address this issue, we
derive a new SS-based exponentiation operation through the idea of
'confusion-correction' and construct an SS-based matrix inversion algorithm
based on Cholesky decomposition. More importantly, we theoretically analyze the
communication cost and the security of the proposed SS-based operations.
Empirical results show that our proposed method achieves reasonable accuracy
and efficiency while preserving data privacy.
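To make the two SS-based ingredients above concrete, here is a minimal, real-arithmetic sketch of (i) 2-out-of-2 additive secret sharing, (ii) a blind-reveal-correct pattern consistent with the 'confusion-correction' idea for exponentiation, and (iii) the plaintext Cholesky route to the GPR posterior mean that an SS-based matrix-inversion protocol would emulate. Everything here is illustrative, not the authors' implementation: the paper's protocols run over a finite ring with fixed-point encoding, and the mask scale, dealer-style preprocessing, and all function names below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
MASK = 1e3  # assumed mask scale; real protocols draw uniform masks over a ring

def share(x):
    """2-out-of-2 additive sharing over the reals: x = s0 + s1.
    (Simplified stand-in for fixed-point sharing over a finite ring.)"""
    s0 = rng.uniform(-MASK, MASK, size=np.shape(x))
    return s0, x - s0

def reconstruct(s0, s1):
    """Recombine the two shares; addition of shared values is purely local."""
    return s0 + s1

def exp_confusion_correction(x0, x1):
    """Hypothetical sketch of 'confusion-correction' exponentiation on shares
    of a scalar x: reveal the blinded value d = x + r ('confusion'), take
    exp(d) in public, then each party multiplies its preprocessed share of
    exp(-r) by exp(d) locally ('correction'), since exp(d)*exp(-r) = exp(x)."""
    r = rng.uniform(-5.0, 5.0)        # assumed dealer randomness; a real
                                      # protocol masks x perfectly over a ring
    r0, r1 = share(r)                 # shares of the blinding value r
    m0, m1 = share(np.exp(-r))        # preprocessed shares of exp(-r)
    d = (x0 + r0) + (x1 + r1)         # parties exchange masked values: d = x + r
    return np.exp(d) * m0, np.exp(d) * m1  # additive shares of exp(x)

def gpr_posterior_mean(K, y, k_star, noise=1e-2):
    """Plaintext Cholesky route that an SS matrix-inversion protocol mimics:
    posterior mean = k_*^T (K + noise*I)^{-1} y via two triangular solves."""
    L = np.linalg.cholesky(K + noise * np.eye(len(K)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return k_star @ alpha

if __name__ == "__main__":
    x0, x1 = share(1.25)
    z0, z1 = exp_confusion_correction(x0, x1)
    print(reconstruct(z0, z1), np.exp(1.25))      # both approximately 3.4903

    K = np.array([[1.0, 0.5], [0.5, 1.0]])        # toy kernel matrix
    print(gpr_posterior_mean(K, np.array([0.3, -0.1]), np.array([0.8, 0.2])))
```

Two observations motivate this shape of protocol: additions and multiplications by public values are local in additive SS, so a blind-reveal-correct pattern needs only the single exchange that reveals the masked value; and routing matrix inversion through a Cholesky decomposition reduces it to multiplications, divisions, and square roots, operations for which SS protocols are more readily constructed.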
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- DPGOMI: Differentially Private Data Publishing with Gaussian Optimized Model Inversion [8.204115285718437]
We propose Differentially Private Data Publishing with Gaussian Optimized Model Inversion (DPGOMI) to address this issue.
Our approach involves mapping private data to the latent space using a public generator, followed by a lower-dimensional DP-GAN with better convergence properties.
Our results show that DPGOMI outperforms the standard DP-GAN method in terms of Inception Score, Fréchet Inception Distance, and classification performance.
arXiv Detail & Related papers (2023-10-06T18:46:22Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- When approximate design for fast homomorphic computation provides differential privacy guarantees [0.08399688944263842]
Differential privacy (DP) and cryptographic primitives are popular countermeasures against privacy attacks.
In this paper, we design SHIELD, a probabilistic approximation algorithm for the argmax operator.
Although SHIELD could have other applications, we focus here on one setting and seamlessly integrate it into the SPEED collaborative training framework.
arXiv Detail & Related papers (2023-04-06T09:38:01Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the involved data are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Linear Model with Local Differential Privacy [0.225596179391365]
Privacy-preserving techniques have been widely studied for analyzing distributed data across different agencies.
Secure multiparty computation has been widely studied for privacy protection; it offers a high privacy level but at substantial cost.
A matrix masking technique is applied to encrypt the data so that the secure schemes are resistant to malicious adversaries.
arXiv Detail & Related papers (2022-02-05T01:18:00Z)
- Federated Deep Learning with Bayesian Privacy [28.99404058773532]
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users.
Homomorphic encryption (HE) based methods provide secure privacy protections but suffer from extremely high computational and communication overheads.
Deep learning with differential privacy (DP) has been implemented as a practical learning algorithm at a manageable cost in complexity.
arXiv Detail & Related papers (2021-09-27T12:48:40Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)