SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
- URL: http://arxiv.org/abs/2005.10296v3
- Date: Wed, 17 Feb 2021 08:47:28 GMT
- Title: SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
- Authors: Nishat Koti, Mahak Pancholi, Arpita Patra, Ajith Suresh
- Abstract summary: We propose SWIFT, a robust PPML framework for a range of ML algorithms in the SOC setting.
SWIFT guarantees output delivery to the users irrespective of any adversarial behaviour.
We demonstrate our framework's practical relevance by benchmarking popular ML algorithms.
- Score: 16.17280000789628
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Performing machine learning (ML) computation on private data while
maintaining data privacy, aka Privacy-preserving Machine Learning (PPML), is an
emergent field of research. Recently, PPML has seen a visible shift towards the
adoption of the Secure Outsourced Computation (SOC) paradigm due to the heavy
computation that it entails. In the SOC paradigm, computation is outsourced to
a set of powerful and specially equipped servers that provide service on a
pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for
a range of ML algorithms in the SOC setting that guarantees output delivery
to the users irrespective of any adversarial behaviour. Robustness, a highly
desirable feature, encourages user participation without the fear of denial of
service.
At the heart of our framework lies a highly-efficient, maliciously-secure,
three-party computation (3PC) over rings that provides guaranteed output
delivery (GOD) in the honest-majority setting. To the best of our knowledge,
SWIFT is the first robust and efficient PPML framework in the 3PC setting.
SWIFT is as fast as (and in some cases strictly better than) the best-known
3PC framework BLAZE (Patra et al., NDSS'20), which achieves only fairness. We
extend our 3PC framework to four parties (4PC). In this regime, SWIFT is as
fast as the best-known fair 4PC framework Trident (Chaudhari et al., NDSS'20)
and twice as fast as the best-known robust 4PC framework FLASH (Byali et al.,
PETS'20).
We demonstrate our framework's practical relevance by benchmarking popular ML
algorithms such as Logistic Regression and deep Neural Networks such as VGG16
and LeNet, both over a 64-bit ring in a WAN setting. For deep NNs, our results
support our claim that we provide an improved security guarantee while
incurring no additional overhead for 3PC and obtaining a 2x improvement for
4PC.
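To make the 3PC-over-rings setting concrete, here is a minimal Python sketch of plain replicated secret sharing over the 64-bit ring Z_{2^64}, a standard honest-majority 3PC building block. It illustrates only the sharing semantics (sharing, local addition, two-party reconstruction); SWIFT's actual protocol layers masked evaluation and verification machinery on top and is not reproduced here.

```python
import secrets

MOD = 1 << 64  # all arithmetic is over the 64-bit ring Z_{2^64}

def share(x):
    """Replicated secret sharing: x = x0 + x1 + x2 (mod 2^64).
    Party i holds the pair (x_i, x_{(i+1) % 3}); any single party's
    view is uniformly random, so one corruption learns nothing."""
    x0, x1 = secrets.randbits(64), secrets.randbits(64)
    x2 = (x - x0 - x1) % MOD
    s = [x0, x1, x2]
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def add_local(p, q):
    """Addition of shared values is non-interactive: add share-wise."""
    return [((a0 + b0) % MOD, (a1 + b1) % MOD)
            for (a0, a1), (b0, b1) in zip(p, q)]

def reconstruct(party0, party1):
    """Any two honest parties hold all three additive shares between them."""
    x0, x1 = party0   # party 0 holds (x0, x1)
    _, x2 = party1    # party 1 holds (x1, x2)
    return (x0 + x1 + x2) % MOD

a, b = share(20), share(22)
c = add_local(a, b)
assert reconstruct(c[0], c[1]) == 42
```

Multiplication, by contrast, requires interaction, and making it cheat-proof without giving up liveness is where GOD protocols such as SWIFT do their real work.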
Related papers
- Bypass Back-propagation: Optimization-based Structural Pruning for Large Language Models via Policy Gradient [57.9629676017527]
We propose an optimization-based structural pruning method for Large Language Models.
We learn the pruning masks in a probabilistic space directly by optimizing the loss of the pruned model.
Our method runs in 2.7 hours with around 35GB of memory for 13B models on a single A100 GPU.
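The mask-learning idea can be sketched compactly: sample binary keep-masks from per-structure Bernoulli distributions and update the distribution parameters with a REINFORCE-style policy gradient, so no back-propagation through the pruned model is needed. The toy `loss_fn` and dimensions below are illustrative stand-ins, not the paper's actual objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one logit per prunable structure (e.g. a head or channel group); the toy
# loss pretends structures 0-3 are useful and every kept structure has a cost
theta = np.zeros(8)

def loss_fn(mask):
    return 1.0 - 0.25 * mask[:4].sum() + 0.05 * mask.sum()

lr = 0.5
for _ in range(200):
    p = sigmoid(theta)
    mask = (rng.random(p.shape) < p).astype(float)  # sample a binary mask
    loss = loss_fn(mask)
    grad_logp = mask - p  # d/dtheta log Bernoulli(mask; sigmoid(theta))
    theta -= lr * loss * grad_logp                  # descend E[loss]

print(np.round(sigmoid(theta), 2))  # keep-probabilities drift toward 1,1,1,1,0,...
```

In practice a baseline is subtracted from the loss to reduce the variance of this estimator.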
arXiv Detail & Related papers (2024-06-15T09:31:03Z)
- QUIK: Towards End-to-End 4-Bit Inference on Generative Large Language Models [57.04178959678024]
We show that the majority of inference computations for large generative models can be performed with both weights and activations being cast to 4 bits.
We achieve this via a hybrid quantization strategy called QUIK, which compresses most of the weights and activations to 4-bit precision.
We provide GPU kernels matching the QUIK format with highly-efficient layer-wise runtimes, which lead to practical end-to-end throughput improvements of up to 3.4x.
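As a rough illustration of the 4-bit casting involved, here is generic symmetric per-row 4-bit quantization in Python. This is not QUIK itself: QUIK is a hybrid scheme that, among other things, keeps a small set of outlier values in higher precision and pairs the format with dedicated GPU kernels.

```python
import numpy as np

def quantize_int4(x, axis=-1):
    """Symmetric 4-bit quantization: q in [-8, 7], with x ~ scale * q."""
    absmax = np.maximum(np.abs(x).max(axis=axis, keepdims=True), 1e-8)
    scale = absmax / 7.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 16).astype(np.float32)
q, s = quantize_int4(w)
print("max abs rounding error:", float(np.abs(w - dequantize(q, s)).max()))
```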
arXiv Detail & Related papers (2023-10-13T17:15:05Z)
- Efficient Privacy-Preserving Machine Learning with Lightweight Trusted Hardware [20.21755520998494]
This paper proposes a new secure machine learning inference platform assisted by a small dedicated security processor.
We achieve significant performance improvements compared to state-of-the-art distributed Privacy-Preserving Machine Learning (PPML) protocols.
Our technique is not limited by the size of secure memory in a TEE and can support high-capacity modern neural networks like ResNet18 and Transformer.
arXiv Detail & Related papers (2022-10-18T20:06:06Z)
- Lightweight and Progressively-Scalable Networks for Semantic Segmentation [100.63114424262234]
Multi-scale learning frameworks have been regarded as a capable class of models to boost semantic segmentation.
In this paper, we thoroughly analyze the design of convolutional blocks and the ways of interactions across multiple scales.
We devise Lightweight and Progressively-Scalable Networks (LPS-Net), which expand the network complexity in a greedy manner.
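A hedged sketch of what "expanding network complexity in a greedy manner" can look like: from a tiny seed configuration, repeatedly apply whichever single expansion buys the most accuracy per unit of added latency, stopping at a latency budget. The `evaluate` function and expansion options are toy stand-ins, not LPS-Net's actual search space.

```python
def greedy_expand(cfg, options, evaluate, max_latency):
    """Greedily grow cfg one expansion at a time, maximising accuracy
    gained per millisecond of latency added."""
    acc, lat = evaluate(cfg)
    while True:
        best = None
        for opt in options:
            cand = opt(cfg)
            c_acc, c_lat = evaluate(cand)
            if c_lat > max_latency or c_lat <= lat or c_acc <= acc:
                continue
            gain = (c_acc - acc) / (c_lat - lat)
            if best is None or gain > best[0]:
                best = (gain, cand, c_acc, c_lat)
        if best is None:
            return cfg, acc, lat
        _, cfg, acc, lat = best

# toy stand-ins: a config is (width, resolution); accuracy saturates while
# latency grows linearly, so expansion stops at the latency budget
def evaluate(cfg):
    width, res = cfg
    return 1.0 - 0.5 / width - 0.3 / res, 2.0 * width + 0.5 * res

options = [lambda c: (c[0] + 1, c[1]), lambda c: (c[0], c[1] + 1)]
print(greedy_expand((1, 1), options, evaluate, max_latency=20.0))
```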
arXiv Detail & Related papers (2022-07-27T16:00:28Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) approach, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on a latency- and accuracy-aware reward design, such a computation can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
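For intuition, a reward in this spirit can be as simple as accuracy minus a penalty for exceeding a latency budget; the agent's discrete action then fixes the exit point and compression level. The names and weights below are illustrative assumptions, not the paper's reward.

```python
def reward(accuracy, latency_ms, budget_ms=100.0, alpha=1.0, beta=0.5):
    """Reward accuracy; penalise only the latency that exceeds the budget."""
    over_budget = max(0.0, latency_ms - budget_ms) / budget_ms
    return alpha * accuracy - beta * over_budget

print(reward(accuracy=0.87, latency_ms=60.0))   # early exit: fast, no penalty
print(reward(accuracy=0.95, latency_ms=180.0))  # deep exit: penalised
```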
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- MPCLeague: Robust MPC Platform for Privacy-Preserving Machine Learning [5.203329540700177]
This thesis focuses on designing efficient MPC frameworks for 2, 3, and 4 parties, tolerating at most one corruption and supporting ring structures.
We propose two variants for each of our frameworks, one aiming to minimise execution time and the other the monetary cost.
arXiv Detail & Related papers (2021-12-26T09:25:32Z)
- Tetrad: Actively Secure 4PC for Secure Training and Inference [14.318471874603212]
Tetrad is a mixed-protocol framework for privacy-preserving machine learning.
Its fair multiplication protocol requires communicating only 5 ring elements, improving over the state-of-the-art protocol of Trident.
The fair framework is tested with benchmarks for deep neural networks such as LeNet and VGG16.
arXiv Detail & Related papers (2021-06-05T09:34:43Z)
- Adam in Private: Secure and Fast Training of Deep Neural Networks with Adaptive Moment Estimation [6.342794803074475]
We propose a framework that allows efficient evaluation of full-fledged state-of-the-art machine learning algorithms.
This is in contrast to most prior works, which substitute ML algorithms with approximated "MPC-friendly" variants.
We obtain secure training that outperforms state-of-the-art three-party systems.
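For reference, this is the standard Adam update (Kingma & Ba) that such a framework must evaluate under MPC; the division and square root are precisely the non-linearities that most prior works replaced with "MPC-friendly" approximations. A plaintext textbook sketch, not the paper's secure protocol.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One plaintext Adam step for parameters w with gradient g at step t."""
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (uncentred) estimate
    m_hat = m / (1 - b1 ** t)          # bias corrections
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```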
arXiv Detail & Related papers (2021-06-04T01:40:09Z)
- Towards Differentially Private Text Representations [52.64048365919954]
We develop a new deep learning framework under an untrusted server setting.
For the randomization module, we propose a novel locally differentially private (LDP) protocol to reduce the impact of the privacy parameter $\epsilon$ on accuracy.
Analysis and experiments show that our framework delivers comparable or even better performance than the non-private framework and existing LDP protocols.
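As a concrete reference point for the $\epsilon$/accuracy tension, here is the classic binary randomized-response mechanism, a generic LDP primitive rather than the paper's protocol: smaller $\epsilon$ flips more reports, and the server must de-bias harder, costing accuracy.

```python
import math, random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1);
    this satisfies epsilon-local differential privacy."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def debiased_mean(reports, epsilon):
    """Server side: invert the known flipping probability."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)

bits = [1] * 700 + [0] * 300          # true mean is 0.7
for eps in (0.5, 2.0, 8.0):
    reports = [randomized_response(b, eps) for b in bits]
    print(eps, round(debiased_mean(reports, eps), 3))
```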
arXiv Detail & Related papers (2020-06-25T04:42:18Z)
- BLAZE: Blazing Fast Privacy-Preserving Machine Learning [18.8081326324821]
Performing ML computations on private data motivated the area of Privacy-preserving Machine Learning (PPML), where the privacy of the data is guaranteed.
In the SOC setting, the computation is outsourced to a set of specialized and powerful cloud servers and the service is availed on a pay-per-use basis.
arXiv Detail & Related papers (2020-05-18T19:18:22Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
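To fix ideas about what is being evaluated securely, below is a minimal plaintext sum-product network evaluated bottom-up: leaves are univariate distributions, product nodes combine children over disjoint variables, and sum nodes take weighted mixtures. The tiny two-variable SPN is an assumed example; CryptoSPN's contribution is evaluating such structures under MPC, which this sketch does not do.

```python
def leaf(var, table):
    return ("leaf", var, table)

def product(*children):
    return ("prod", children)

def weighted_sum(weights, children):
    return ("sum", weights, children)

def evaluate(node, x):
    """Bottom-up likelihood evaluation of an SPN at assignment x."""
    if node[0] == "leaf":
        _, var, table = node
        return table[x[var]]
    if node[0] == "prod":
        out = 1.0
        for child in node[1]:
            out *= evaluate(child, x)
        return out
    _, weights, children = node
    return sum(w * evaluate(c, x) for w, c in zip(weights, children))

# P(X0, X1) as a two-component mixture of fully factorised distributions
spn = weighted_sum(
    [0.3, 0.7],
    [product(leaf(0, {0: 0.8, 1: 0.2}), leaf(1, {0: 0.6, 1: 0.4})),
     product(leaf(0, {0: 0.1, 1: 0.9}), leaf(1, {0: 0.5, 1: 0.5}))],
)
print(evaluate(spn, {0: 1, 1: 0}))  # 0.3*0.2*0.6 + 0.7*0.9*0.5 = 0.351
```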
arXiv Detail & Related papers (2020-02-03T14:49:18Z)