BLAZE: Blazing Fast Privacy-Preserving Machine Learning
- URL: http://arxiv.org/abs/2005.09042v1
- Date: Mon, 18 May 2020 19:18:22 GMT
- Title: BLAZE: Blazing Fast Privacy-Preserving Machine Learning
- Authors: Arpita Patra and Ajith Suresh
- Abstract summary: Privacy-preserving Machine Learning (PPML) is the area where the privacy of the data used in machine learning is guaranteed.
In the SOC setting, the computation is outsourced to a set of specialized and powerful cloud servers and the service is availed on a pay-per-use basis.
- Score: 18.8081326324821
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning tools have illustrated their potential in many significant
sectors such as healthcare and finance, to aid in deriving useful inferences.
The sensitive and confidential nature of the data in such sectors raises
natural concerns for the privacy of data. This motivated the area of
Privacy-preserving Machine Learning (PPML) where privacy of the data is
guaranteed. Typically, ML techniques require large computing power, which leads
clients with limited infrastructure to rely on the method of Secure Outsourced
Computation (SOC). In the SOC setting, the computation is outsourced to a set of
specialized and powerful cloud servers and the service is availed on a
pay-per-use basis. In this work, we explore PPML techniques in the SOC setting
for widely used ML algorithms-- Linear Regression, Logistic Regression, and
Neural Networks.
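As a toy illustration of outsourcing via secret sharing (an editorial sketch, not BLAZE's actual masked/replicated sharing scheme), the Python snippet below splits a client's input into three additive shares over a 64-bit ring, one per server; the names `share`, `reconstruct`, and `RING` are assumptions for this example only.

```python
# Toy 3-out-of-3 additive secret sharing over the ring Z_{2^64}.
# A simplification for intuition: BLAZE itself uses a masked,
# replicated-style sharing that tolerates one malicious server.
import secrets

RING = 1 << 64  # the 64-bit ring Z_{2^64}

def share(x: int) -> tuple[int, int, int]:
    """Split x into three random additive shares; any two shares
    together reveal nothing about x."""
    s0 = secrets.randbelow(RING)
    s1 = secrets.randbelow(RING)
    s2 = (x - s0 - s1) % RING
    return s0, s1, s2

def reconstruct(s0: int, s1: int, s2: int) -> int:
    """All three shares are needed to recover the secret."""
    return (s0 + s1 + s2) % RING

assert reconstruct(*share(42)) == 42
```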
We propose BLAZE, a blazing fast PPML framework in the three server setting
tolerating one malicious corruption over the ring $\mathbb{Z}_{2^{\ell}}$. BLAZE achieves the
stronger security guarantee of fairness (all honest servers get the output
whenever the corrupt server obtains the same). Leveraging an input-independent
preprocessing phase, BLAZE has a fast input-dependent online phase relying on
efficient PPML primitives such as: (i) A dot product protocol for which the
communication in the online phase is independent of the vector size, the first
of its kind in the three server setting; (ii) A method for truncation that
shuns evaluating an expensive circuit for Ripple Carry Adders (RCA) and achieves a
constant round complexity. This improves over the truncation method of ABY3
(Mohassel et al., CCS 2018), which uses an RCA and incurs a round complexity
of the order of the depth of the RCA.
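To make the two primitives concrete, the following clear-text Python simulation sketches (i) a Beaver-style inner-product correlation, which shows why an online phase can get away with opening a single ring element regardless of the vector length, and (ii) truncation by a preprocessed pair (r, r >> d) with no RCA circuit. The trusted-dealer preprocessing and all function names here are editorial assumptions, not BLAZE's actual protocol messages.

```python
# Clear-text simulation of the two online primitives, with the
# preprocessing done by an imaginary trusted dealer. Illustrative
# only: BLAZE realises these with three-party protocols over shares.
import secrets

ELL, D = 64, 13                   # ring Z_{2^ELL}; D fractional bits
RING = 1 << ELL

def rand_vec(n: int) -> list[int]:
    return [secrets.randbelow(RING) for _ in range(n)]

def dot_product(x: list[int], y: list[int]) -> int:
    """(i) Dot product whose online communication is one ring element
    per party, independent of len(x). Preprocessing samples masks
    a, b and c = <a, b>; online, only local sums over the public
    masked vectors mx, my are needed before a single opening."""
    a, b = rand_vec(len(x)), rand_vec(len(y))
    c = sum(ai * bi for ai, bi in zip(a, b)) % RING
    mx = [(xi + ai) % RING for xi, ai in zip(x, a)]
    my = [(yi + bi) % RING for yi, bi in zip(y, b)]
    # <x,y> = <mx,my> - <mx,b> - <a,my> + c, every term computable
    # locally from public masked values and preprocessed shares.
    res = sum(u * v for u, v in zip(mx, my))
    res -= sum(u * v for u, v in zip(mx, b))
    res -= sum(u * v for u, v in zip(a, my))
    return (res + c) % RING

def truncate(v: int) -> int:
    """(ii) Constant-round truncation by D bits using a preprocessed
    pair (r, r >> D); no bit-level RCA circuit is evaluated."""
    r = secrets.randbelow(RING)   # sampled in preprocessing
    r_d = r >> D                  # its truncation, also preprocessed
    c = (v + r) % RING            # opened publicly in the protocol
    return ((c >> D) - r_d) % RING

# A product of two D-bit fixed-point values carries scale 2^(2D);
# truncation restores scale 2^D (up to a 1-ulp carry error, and
# assuming v + r does not wrap mod 2^ELL, which real protocols handle).
x, y = [5 << D], [3 << D]         # 5.0 and 3.0 in fixed point
assert abs(truncate(dot_product(x, y)) - (15 << D)) <= 1
```

In the real protocol, the masks a, b, and r exist only as shares produced in the input-independent phase, so everything above apart from the single opening is local computation.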
An extensive benchmarking of BLAZE for the aforementioned ML algorithms over
a 64-bit ring in both WAN and LAN settings shows massive improvements over
ABY3.
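For context on what "over a 64-bit ring" means for real-valued ML data, here is a common fixed-point encoding into Z_{2^64}; the 13-bit fractional precision (`FRAC`) is an illustrative assumption rather than a parameter taken from the paper.

```python
# Fixed-point encoding of reals into the 64-bit ring Z_{2^64}, in the
# style commonly used by PPML frameworks. The 13-bit fractional
# precision is an illustrative assumption, not a documented parameter.
FRAC = 13
RING = 1 << 64

def encode(v: float) -> int:
    """Scale by 2^FRAC and reduce mod 2^64 (two's-complement style)."""
    return int(round(v * (1 << FRAC))) % RING

def decode(x: int, frac: int = FRAC) -> float:
    """Map the top half of the ring back to negative numbers."""
    if x >= RING // 2:
        x -= RING
    return x / (1 << frac)

# A product of two encodings carries scale 2^(2*FRAC) until truncated.
a, b = encode(1.5), encode(-2.25)
prod = (a * b) % RING
assert abs(decode(prod, 2 * FRAC) - (1.5 * -2.25)) < 1e-3
```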
Related papers
- The Communication-Friendly Privacy-Preserving Machine Learning against Malicious Adversaries [14.232901861974819]
Privacy-preserving machine learning (PPML) is an innovative approach that allows for secure data analysis while safeguarding sensitive information.
We introduce an efficient protocol for secure linear function evaluation.
We extend the protocol to handle linear and non-linear layers, ensuring compatibility with a wide range of machine-learning models.
(arXiv 2024-11-14)
- CURE: Privacy-Preserving Split Learning Done Right [1.388112207221632]
Homomorphic encryption (HE)-based solutions exist for this scenario but often impose prohibitive computational burdens.
CURE is a novel system that encrypts only the server side of the model and the data.
We demonstrate that CURE can achieve accuracy similar to plaintext SL while being 16x more efficient in terms of runtime.
(arXiv 2024-07-12)
- Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders [52.66195794216989]
We propose Point Feature Enhancement Masked Autoencoders (Point-FEMAE) to learn compact 3D representations.
Point-FEMAE consists of a global branch and a local branch to capture latent semantic features.
Our method significantly improves the pre-training efficiency compared to cross-modal alternatives.
(arXiv 2023-12-17)
- ezDPS: An Efficient and Zero-Knowledge Machine Learning Inference Pipeline [2.0813318162800707]
We propose ezDPS, a new efficient and zero-knowledge Machine Learning inference scheme.
ezDPS is a zkML pipeline in which the data is processed in multiple stages for high accuracy.
We show that ezDPS is one to three orders of magnitude more efficient than the generic circuit-based approach in all metrics.
(arXiv 2022-12-11)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
(arXiv 2022-07-19)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete (SAC-d), which generates the exit point, partition point, and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a framework can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
(arXiv 2022-01-09)
- Adam in Private: Secure and Fast Training of Deep Neural Networks with Adaptive Moment Estimation [6.342794803074475]
We propose a framework that allows efficient evaluation of full-fledged state-of-the-art machine learning algorithms.
This is in contrast to most prior works, which substitute ML algorithms with approximated "MPC-friendly" variants.
We obtain secure training that outperforms state-of-the-art three-party systems.
(arXiv 2021-06-04)
- Lossless Compression of Efficient Private Local Randomizers [55.657133416044104]
Locally Differentially Private (LDP) Reports are commonly used for collection of statistics and machine learning in the federated setting.
In many cases the best known LDP algorithms require sending prohibitively large messages from the client device to the server.
This has led to significant efforts on reducing the communication cost of LDP algorithms.
(arXiv 2021-02-24)
- Faster Secure Data Mining via Distributed Homomorphic Encryption [108.77460689459247]
Homomorphic Encryption (HE) is receiving more and more attention recently for its capability to do computations over the encrypted field.
We propose a novel general distributed HE-based data mining framework towards one step of solving the scaling problem.
We verify the efficiency and effectiveness of our new framework by testing over various data mining algorithms and benchmark data-sets.
(arXiv 2020-06-17)
- SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning [16.17280000789628]
We propose SWIFT, a robust framework for a range of ML algorithms in the SOC setting.
SWIFT guarantees output delivery to the users irrespective of any adversarial behaviour.
We demonstrate our framework's practical relevance by benchmarking popular ML algorithms.
(arXiv 2020-05-20)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
(arXiv 2020-02-03)