Online Efficient Secure Logistic Regression based on Function Secret Sharing
- URL: http://arxiv.org/abs/2309.09486v1
- Date: Mon, 18 Sep 2023 04:50:54 GMT
- Title: Online Efficient Secure Logistic Regression based on Function Secret Sharing
- Authors: Jing Liu, Jamie Cui, Cen Chen
- Abstract summary: We propose an online efficient protocol for privacy-preserving logistic regression based on Function Secret Sharing (FSS)
Our protocols are designed in the two non-colluding servers setting and assume the existence of a third-party dealer.
We propose accurate and MPC-friendly alternatives to the sigmoid function and encapsulate the logistic regression training process into a function secret sharing gate.
- Score: 15.764294489590041
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Logistic regression is an algorithm widely used for binary classification in various real-world applications such as fraud detection, medical diagnosis, and recommendation systems. However, training a logistic regression model with data from different parties raises privacy concerns. Secure Multi-Party Computation (MPC) is a cryptographic tool that allows multiple parties to train a logistic regression model jointly without compromising privacy. The efficiency of the online training phase becomes crucial when dealing with large-scale data in practice. In this paper, we propose an online efficient protocol for privacy-preserving logistic regression based on Function Secret Sharing (FSS). Our protocols are designed in the two non-colluding servers setting and assume the existence of a third-party dealer who only provides correlated randomness to the computing parties. During the online phase, two servers jointly train a logistic regression model on their private data by utilizing pre-generated correlated randomness. Furthermore, we propose accurate and MPC-friendly alternatives to the sigmoid function and encapsulate the logistic regression training process into a function secret sharing gate. The online communication overhead significantly decreases compared with the traditional secure logistic regression training based on secret sharing. We provide both theoretical and experimental analyses to demonstrate the efficiency and effectiveness of our method.
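As a hedged illustration of this setting (not the paper's actual FSS construction), the sketch below shows two-server additive secret sharing over a ring, together with a simple piecewise-linear sigmoid surrogate of the MPC-friendly kind the abstract refers to; the ring size and surrogate breakpoints are assumptions chosen for illustration:

```python
import numpy as np

RING = 2**32  # assumed share ring Z_{2^32}; the paper's actual ring size may differ

def share(x):
    """Split an integer-encoded secret into two additive shares mod RING,
    one per non-colluding server."""
    x = np.asarray(x, dtype=np.uint64)
    r = np.random.randint(0, RING, size=x.shape, dtype=np.uint64)
    return r, (x - r) % np.uint64(RING)

def reconstruct(s0, s1):
    """Recombine the two servers' shares to recover the secret."""
    return (s0 + s1) % np.uint64(RING)

def mpc_friendly_sigmoid(z):
    """Illustrative piecewise-linear sigmoid surrogate: 0 below z = -2,
    1 above z = 2, and 0.25*z + 0.5 in between. The breakpoints are an
    assumption, not the paper's gate."""
    return np.clip(0.25 * np.asarray(z, dtype=float) + 0.5, 0.0, 1.0)
```

Piecewise surrogates of this shape are popular in MPC because linear segments need only additions and comparisons, which are cheap under secret sharing, whereas the exact exponential in the sigmoid is not.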
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - LFFR: Logistic Function For (multi-output) Regression [0.0]
We build upon previous work on privacy-preserving regression to address multi-output regression problems.
We adapt our novel LFFR algorithm, initially designed for single-output logistic regression, to handle multiple outputs.
Evaluations on multiple real-world datasets demonstrate the effectiveness of our multi-output LFFR algorithm.
arXiv Detail & Related papers (2024-07-30T20:52:38Z) - Hawk: Accurate and Fast Privacy-Preserving Machine Learning Using Secure Lookup Table Computation [11.265356632908846]
Training machine learning models on data from multiple entities without direct data sharing can unlock applications otherwise hindered by business, legal, or ethical constraints.
We design and implement new privacy-preserving machine learning protocols for logistic regression and neural network models.
Our evaluations show that our logistic regression protocol is up to 9x faster, and the neural network training is up to 688x faster than SecureML.
arXiv Detail & Related papers (2024-03-26T00:51:12Z) - Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Federated Coordinate Descent for Privacy-Preserving Multiparty Linear Regression [0.5049057348282932]
We present Federated Coordinate Descent, a new distributed scheme called FCD, to address this issue securely under multiparty scenarios.
Specifically, through secure aggregation and added perturbations, our scheme guarantees that: (1) no local information is leaked to other parties, and (2) global model parameters are not exposed to cloud servers.
We show that the FCD scheme fills the gap of multiparty secure Coordinate Descent methods and is applicable for general linear regressions, including linear, ridge and lasso regressions.
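One common way to realize the secure-aggregation step described above is pairwise cancelling masks; the sketch below is a generic illustration under that assumption, not FCD's exact construction:

```python
import numpy as np

def masked_updates(updates, rng):
    """Each unordered client pair (i, j), i < j, shares one random mask:
    client i adds it and client j subtracts it. Each individual masked
    update looks random on its own, yet the masks cancel in the sum, so
    the server learns only the aggregate."""
    masked = [np.asarray(u, dtype=float).copy() for u in updates]
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            m = rng.standard_normal(masked[i].shape)
            masked[i] += m
            masked[j] -= m
    return masked
```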
arXiv Detail & Related papers (2022-09-16T03:53:46Z) - The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift [127.21287240963859]
We investigate a transfer learning approach with pretraining on the source data and finetuning based on the target data.
For a large class of linear regression instances, transfer learning with $O(N^2)$ source data is as effective as supervised learning with $N$ target data.
arXiv Detail & Related papers (2022-08-03T05:59:49Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Privacy-preserving Logistic Regression with Secret Sharing [0.0]
We propose secret sharing-based privacy-preserving logistic regression protocols using the Newton-Raphson method.
Our implementation results show that our improved method can handle large datasets used in securely training a logistic regression from multiple sources.
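For context, the plaintext Newton-Raphson iteration that such protocols compute securely can be sketched as follows; this is a minimal illustration with an assumed ridge term for numerical stability, not the paper's secure protocol:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_logreg(X, y, iters=10, ridge=1e-3):
    """Plaintext Newton-Raphson for logistic regression. Each step solves
    H d = g with gradient g = X^T (y - p) and Hessian H = X^T S X, where
    S = diag(p * (1 - p)); the ridge term (an added assumption here) keeps
    H invertible once predictions saturate."""
    d = X.shape[1]
    w = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ w)               # current predicted probabilities
        g = X.T @ (y - p)                # gradient of the log-likelihood
        s = p * (1 - p)                  # per-sample Hessian weights
        H = X.T @ (X * s[:, None]) + ridge * np.eye(d)
        w = w + np.linalg.solve(H, g)    # Newton step
    return w
```

Each Newton step requires a matrix inverse (or linear solve) and the sigmoid, which is why secure variants replace these with MPC-friendly approximations.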
arXiv Detail & Related papers (2021-05-14T14:53:50Z) - High Performance Logistic Regression for Privacy-Preserving Genome Analysis [15.078027648304117]
We present a secure logistic regression training protocol and its implementation, with a new subprotocol to securely compute the activation function.
We present the fastest existing secure Multi-Party Computation implementation for training logistic regression models on high dimensional genome data distributed across a local area network.
arXiv Detail & Related papers (2020-02-13T07:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.