Secure PAC Bayesian Regression via Real Shamir Secret Sharing
- URL: http://arxiv.org/abs/2109.11200v3
- Date: Mon, 17 Apr 2023 07:07:14 GMT
- Title: Secure PAC Bayesian Regression via Real Shamir Secret Sharing
- Authors: Jaron Skovsted Gundersen, Bulut Kuskonmaz, Rafael Wisniewski
- Abstract summary: We present a protocol for learning a linear model that relies on a recently described technique called real number secret sharing.
We consider the situation where several parties hold different data instances and are not willing to give up the privacy of their data.
We suggest two methods, a secure inverse method and a secure Gaussian elimination method, and compare them at the end.
- Score: 2.578242050187029
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common approach in system identification and machine learning is to generate a model from training data that predicts the test data instances as accurately as possible. Nonetheless, concerns about data privacy are increasingly raised, but not always addressed. We present a secure protocol for learning a linear model that relies on a recently described technique called real number secret sharing. We take the PAC Bayesian bounds as our starting point and deduce from them a closed form for the model parameters that depends on the data and the prior. Obtaining the model parameters then requires solving a linear system. However, we consider the situation where several parties hold different data instances and are not willing to give up the privacy of their data. Hence, we suggest using real number secret sharing and multiparty computation to share the data and solve the linear regression securely, without violating the privacy of the data. We suggest two methods, a secure inverse method and a secure Gaussian elimination method, and compare them at the end. The benefit of using secret sharing directly on real numbers is reflected in the simplicity of the protocols and in the number of rounds needed. However, this comes with the drawback that a share might leak a small amount of information; in our analysis we argue that this leakage is small.
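As a rough illustration of the real number secret sharing primitive the abstract builds on, below is a minimal Python sketch, not the authors' code. It shares a real value by evaluating a random polynomial, reconstructs it by Lagrange interpolation at zero, and lets several parties aggregate the normal equations of a ridge-like system (X^T X + lambda*I) w = X^T y by adding their shares locally. The function names (share, reconstruct, share_matrix), the Gaussian coefficient distribution, the degree choice, and the ridge-like formulation of the closed form are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def share(secret, eval_points, degree=1, sigma=1e3, rng=None):
    """Shamir-style sharing over the reals: evaluate
    f(x) = secret + a_1 x + ... + a_t x^t at each party's point,
    with the random coefficients a_i drawn from a wide Gaussian."""
    rng = np.random.default_rng() if rng is None else rng
    coeffs = np.concatenate(([secret], rng.normal(0.0, sigma, size=degree)))
    return np.array([np.polyval(coeffs[::-1], x) for x in eval_points])

def reconstruct(shares, eval_points):
    """Recover the secret as f(0) by Lagrange interpolation."""
    secret = 0.0
    for i, (xi, yi) in enumerate(zip(eval_points, shares)):
        li = 1.0
        for j, xj in enumerate(eval_points):
            if j != i:
                li *= -xj / (xi - xj)
        secret += yi * li
    return secret

def share_matrix(M, eval_points, rng=None):
    """Share every entry of M; result[k] is party k's share of M."""
    M = np.atleast_1d(np.asarray(M, dtype=float))
    cols = [share(v, eval_points, rng=rng) for v in M.ravel()]
    return np.stack(cols, axis=-1).reshape((len(eval_points),) + M.shape)

def reconstruct_matrix(shares, eval_points):
    """Reconstruct every entry of a shared matrix."""
    flat = shares.reshape(len(eval_points), -1)
    vals = [reconstruct(flat[:, j], eval_points) for j in range(flat.shape[1])]
    return np.array(vals).reshape(shares.shape[1:])

# Toy demo: three parties, each holding a few rows of (X, y), jointly assemble
# the normal equations (X^T X + lam*I) w = X^T y by adding shares locally.
rng = np.random.default_rng(0)
pts = [1.0, 2.0, 3.0]                      # one evaluation point per party
w_true, lam = np.array([1.5, -0.7]), 0.1
A_shares, b_shares = 0.0, 0.0
for _ in range(3):
    Xp = rng.normal(size=(4, 2))           # this party's private rows
    yp = Xp @ w_true + 0.01 * rng.normal(size=4)
    A_shares = A_shares + share_matrix(Xp.T @ Xp, pts, rng=rng)
    b_shares = b_shares + share_matrix(Xp.T @ yp, pts, rng=rng)

A = reconstruct_matrix(A_shares, pts) + lam * np.eye(2)
b = reconstruct_matrix(b_shares, pts)
print(np.linalg.solve(A, b))               # approximately w_true
```

The final solve is done in the clear here purely for the demo; the paper's secure inverse and secure Gaussian elimination methods instead operate on the shares themselves, which requires interactive multiplication of shared values. Widening the coefficient distribution (larger sigma) reduces what a single share reveals about a secret at the cost of floating-point precision, which mirrors the leakage trade-off the abstract mentions.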
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves an over 20% improvement in forgetting error compared to the state of the art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning [47.042811490685324]
Mitigating the risk of this information leakage with state-of-the-art differentially private algorithms also does not come for free.
In this paper, we consider a representation learning objective that various parties collaboratively refine on a federated model, with differential privacy guarantees.
We observe a significant performance improvement over the prior work under the same small privacy budget.
arXiv Detail & Related papers (2023-09-11T14:46:55Z)
- Differentially Private Linear Regression with Linked Data [3.9325957466009203]
Differential privacy, a mathematical notion from computer science, is a rising tool offering robust privacy guarantees.
Recent work focuses on developing differentially private versions of individual statistical and machine learning tasks.
We present two differentially private algorithms for linear regression with linked data.
arXiv Detail & Related papers (2023-08-01T21:00:19Z)
- Practical Privacy-Preserving Gaussian Process Regression via Secret Sharing [23.80837224347696]
This paper proposes a privacy-preserving GPR method based on secret sharing (SS).
We derive a new SS-based exponentiation operation through the idea of 'confusion-correction' and construct an SS-based matrix inversion algorithm based on Cholesky decomposition.
Empirical results show that our proposed method can achieve reasonable accuracy and efficiency under the premise of preserving data privacy.
arXiv Detail & Related papers (2023-06-26T08:17:51Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
arXiv Detail & Related papers (2023-02-24T11:27:39Z)
- BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning [0.0]
We present BEAS, the first blockchain-based framework for N-party Federated Learning.
It provides strict privacy guarantees of training data using gradient pruning.
Anomaly detection protocols are used to minimize the risk of data-poisoning attacks.
We also define a novel protocol to prevent premature convergence in heterogeneous learning environments.
arXiv Detail & Related papers (2022-02-06T17:11:14Z)
- Secure Data Sharing With Flow Model [13.773737296144366]
In the classical multi-party computation setting, multiple parties jointly compute a function without revealing their own input data.
We consider a variant of this problem in which the input data can be shared for machine learning training purposes, but the data must also be protected so that they cannot be recovered by other parties.
We present a rotation-based method using a flow model and theoretically justify its security.
arXiv Detail & Related papers (2020-09-24T15:40:14Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
- Privacy-Preserving Gaussian Process Regression -- A Modular Approach to the Application of Homomorphic Encryption [4.1499725848998965]
Fully homomorphic encryption (FHE) allows data to be computed on whilst encrypted.
Some commonly used machine learning algorithms, such as Gaussian process regression, are poorly suited to FHE.
We show that a modular approach, which applies FHE to only the sensitive steps of a workflow that need protection, allows one party to make predictions on their data.
arXiv Detail & Related papers (2020-01-28T11:50:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.