Towards Secure and Practical Machine Learning via Secret Sharing and
Random Permutation
- URL: http://arxiv.org/abs/2108.07463v2
- Date: Wed, 18 Aug 2021 08:05:11 GMT
- Title: Towards Secure and Practical Machine Learning via Secret Sharing and
Random Permutation
- Authors: Fei Zheng, Chaochao Chen, Xiaolin Zheng
- Abstract summary: We build a privacy-preserving machine learning framework by combining random permutation and arithmetic secret sharing.
Our method is up to 6x faster and reduces network traffic by up to 85% compared with state-of-the-art cryptographic methods.
- Score: 12.181314740980241
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing demands for privacy protection, privacy-preserving
machine learning has been drawing much attention in both academia and industry.
However, most existing methods have their limitations in practical
applications. On the one hand, although most cryptographic methods are provably
secure, they incur heavy computation and communication costs. On the other hand,
the security of many relatively efficient private methods (e.g., federated
learning and split learning) has been questioned, since they are not provably secure.
Inspired by previous work on privacy-preserving machine learning, we build a
privacy-preserving machine learning framework by combining random permutation
and arithmetic secret sharing via our compute-after-permutation technique.
Since our method reduces the cost for element-wise function computation, it is
more efficient than existing cryptographic methods. Moreover, by adopting
distance correlation as a metric for privacy leakage, we demonstrate that our
method is more secure than previous non-provably-secure methods. Overall, our
proposal achieves a good balance between security and efficiency. Experimental
results show that our method is up to 6x faster and reduces network traffic by
up to 85% compared with state-of-the-art cryptographic methods, while also
leaking less privacy during training than non-provably-secure methods.
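The compute-after-permutation idea can be illustrated with a toy two-party sketch. Note that the party roles, the re-sharing flow, and the choice of `np.tanh` as the element-wise function are illustrative assumptions, not the paper's exact protocol: the point is that additively shared values are shuffled under a secret permutation before an element-wise function is applied, so the function is computed cheaply in the clear while the evaluator sees values only in permuted order.

```python
import numpy as np

rng = np.random.default_rng(0)

def share(x):
    """Additively secret-share a real vector between two parties."""
    r = rng.standard_normal(x.shape)
    return x - r, r  # party 0 holds x - r, party 1 holds r

# Secret input vector; neither compute party sees it in the clear.
x = rng.standard_normal(8)
s0, s1 = share(x)

# A secret random permutation hides which value sits at which position.
pi = rng.permutation(len(x))
f = np.tanh  # illustrative element-wise function

# The evaluator reconstructs only the permuted vector x[pi], applies f,
# and re-shares the result.
permuted_plain = (s0 + s1)[pi]
y_permuted = f(permuted_plain)
t0, t1 = share(y_permuted)

# The parties invert the permutation on their fresh shares.
inv = np.argsort(pi)
y0, y1 = t0[inv], t1[inv]

assert np.allclose(y0 + y1, f(x))
```

Because `f` is applied to plaintext values (in shuffled order), this avoids the expensive cryptographic protocols normally needed for element-wise nonlinearities, which is the efficiency gain the abstract claims.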
Related papers
- HETAL: Efficient Privacy-preserving Transfer Learning with Homomorphic Encryption [4.164336621664897]
HETAL is an efficient Homomorphic Encryption based Transfer Learning algorithm.
We propose an encrypted matrix multiplication algorithm, which is 1.8 to 323 times faster than prior methods.
Experiments show total training times of 567-3442 seconds, which is less than an hour.
arXiv Detail & Related papers (2024-03-21T03:47:26Z)
- ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated Learning Based on Coded Computing and Vector Commitment [90.60126724503662]
ByzSecAgg is an efficient secure aggregation scheme for federated learning.
ByzSecAgg is protected against Byzantine attacks and privacy leakages.
arXiv Detail & Related papers (2023-02-20T11:15:18Z)
- Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning [63.45532264721498]
Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data.
We perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
arXiv Detail & Related papers (2022-12-06T21:35:35Z)
- Perfectly Secure Steganography Using Minimum Entropy Coupling [60.154855689780796]
We show that a steganography procedure is perfectly secure under Cachin's (1998) information-theoretic model of steganography.
We also show that, among perfectly secure procedures, a procedure maximizes information throughput if and only if it is induced by a minimum entropy coupling.
arXiv Detail & Related papers (2022-10-24T17:40:07Z)
- A Pixel-based Encryption Method for Privacy-Preserving Deep Learning Models [5.749044590090683]
We propose an efficient pixel-based perceptual encryption method.
The method provides the necessary level of security while preserving the intrinsic properties of the original image, thereby enabling deep learning (DL) applications in the encryption domain.
arXiv Detail & Related papers (2022-03-31T03:42:11Z)
- Privacy-preserving Decentralized Aggregation for Federated Learning [3.9323226496740733]
Federated learning is a promising framework for learning over decentralized data spanning multiple regions.
We develop a privacy-preserving decentralized aggregation protocol for federated learning.
We evaluate our algorithm on image classification and next-word prediction applications over benchmark datasets with 9 and 15 distributed sites.
arXiv Detail & Related papers (2020-12-13T23:45:42Z)
- Efficient Sparse Secure Aggregation for Federated Learning [0.20052993723676896]
We adapt compression-based federated techniques to additive secret sharing, leading to an efficient secure aggregation protocol.
We prove its privacy against malicious adversaries and its correctness in the semi-honest setting.
Compared to prior works on secure aggregation, our protocol has lower and adaptable communication costs for similar accuracy.
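The additive secret sharing used for secure aggregation above can be sketched as follows. The three-server topology, the modulus `Q`, and the quantized integer updates are illustrative assumptions rather than the paper's protocol: each client splits its update into random shares, each server only ever sees one meaningless share per client, and only the aggregate over all clients is reconstructed.

```python
import numpy as np

rng = np.random.default_rng(1)
Q = 2**31 - 1  # public modulus for arithmetic shares (illustrative)

def share(update, n_servers=3):
    """Split an integer vector into n additive shares mod Q."""
    shares = [rng.integers(0, Q, size=update.shape) for _ in range(n_servers - 1)]
    last = (update - sum(shares)) % Q
    return shares + [last]

# Three clients with quantized model updates.
updates = [rng.integers(0, 100, size=5) for _ in range(3)]

# Each server receives one share from every client and sums them locally.
n_servers = 3
client_shares = [share(u, n_servers) for u in updates]
server_sums = [sum(cs[j] for cs in client_shares) % Q for j in range(n_servers)]

# Combining the per-server sums reveals only the aggregate update,
# never any individual client's contribution.
aggregate = sum(server_sums) % Q
assert np.array_equal(aggregate, sum(updates) % Q)
```

Combining this sharing with update compression, as the paper does, reduces the size of each share and hence the communication cost.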
arXiv Detail & Related papers (2020-07-29T14:28:30Z)
- Secure Byzantine-Robust Machine Learning [61.03711813598128]
We propose a secure two-server protocol that offers both input privacy and Byzantine-robustness.
In addition, this protocol is communication-efficient, fault-tolerant and enjoys local differential privacy.
arXiv Detail & Related papers (2020-06-08T16:55:15Z)
- User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization [77.43075255745389]
Federated learning (FL) is capable of preserving private data from mobile terminals (MTs) while training the data into useful models.
From a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
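Adding artificial noise before upload typically follows a clip-then-noise pattern; a minimal sketch, assuming illustrative `clip_norm` and `sigma` values rather than the paper's calibrated UDP noise scale:

```python
import numpy as np

rng = np.random.default_rng(2)

def clip_and_noise(update, clip_norm=1.0, sigma=0.5):
    """Clip a local model update's L2 norm, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

# A client perturbs its local update before sending it to the server.
local_update = rng.standard_normal(10)
noisy = clip_and_noise(local_update)
```

Clipping bounds each user's influence on the shared model, so calibrated noise can mask any single user's contribution from a curious server.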
arXiv Detail & Related papers (2020-02-29T10:13:39Z) - An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks
in Federated Learning [82.80836918594231]
Federated learning improves privacy of training data by exchanging local gradients or parameters rather than raw data.
An adversary can leverage local gradients and parameters to obtain local training data by launching reconstruction and membership inference attacks.
To defend against such privacy attacks, many noise perturbation methods have been designed.
arXiv Detail & Related papers (2020-02-23T06:50:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.