Tetrad: Actively Secure 4PC for Secure Training and Inference
- URL: http://arxiv.org/abs/2106.02850v2
- Date: Tue, 8 Jun 2021 07:48:39 GMT
- Title: Tetrad: Actively Secure 4PC for Secure Training and Inference
- Authors: Nishat Koti, Arpita Patra, Rahul Rachuri, Ajith Suresh
- Abstract summary: Tetrad is a mixed-protocol framework for privacy-preserving machine learning.
The fair multiplication protocol requires communicating only 5 ring elements, improving over the state-of-the-art protocol of Trident.
The fair framework is tested with benchmarks for deep neural networks such as LeNet and VGG16.
- Score: 14.318471874603212
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we design an efficient mixed-protocol framework, Tetrad, with
applications to privacy-preserving machine learning. It is designed for the
four-party setting with at most one active corruption and supports rings.
Our fair multiplication protocol requires communicating only 5 ring elements,
improving over the state-of-the-art protocol of Trident (Chaudhari et al.,
NDSS'20). The technical highlights of Tetrad include efficient (a) truncation
without any overhead, (b) multi-input multiplication protocols for the arithmetic
and boolean worlds, (c) a garbled world tailor-made for the mixed-protocol
framework, and (d) conversion mechanisms to switch between the computation
styles. The fair framework is also extended to provide robustness without
inflating the costs.
The performance of Tetrad is evaluated with benchmarks for deep neural networks
such as LeNet and VGG16, and for support vector machines. One variant of our
framework aims at minimizing the execution time, while the other focuses on the
monetary cost. We observe improvements of up to 6x over Trident across these
parameters.
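Tetrad computes over rings with fixed-point arithmetic, where every multiplication doubles the fractional scale and therefore needs a truncation step. The following plaintext sketch shows the fixed-point encoding, 4-way additive sharing, and post-multiplication truncation such frameworks build on; the 13-bit fractional precision and the plain (non-replicated) additive sharing are illustrative assumptions, not Tetrad's actual sharing scheme.

```python
import random

MOD = 1 << 64        # ring Z_{2^64}
FRAC_BITS = 13       # illustrative fixed-point precision

def encode(x: float) -> int:
    """Encode a real number as a fixed-point ring element."""
    return round(x * (1 << FRAC_BITS)) % MOD

def decode(v: int) -> float:
    """Decode, treating values above MOD/2 as negative."""
    if v >= MOD // 2:
        v -= MOD
    return v / (1 << FRAC_BITS)

def share(v: int, n: int = 4):
    """Split v into n additive shares over Z_{2^64}."""
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((v - sum(shares)) % MOD)
    return shares

def reconstruct(shares) -> int:
    return sum(shares) % MOD

# Sharing is lossless for representable values.
assert decode(reconstruct(share(encode(3.25)))) == 3.25

# Multiplying two encodings doubles the scale; truncation restores it.
prod = (encode(1.5) * encode(2.0)) % MOD   # scale is now 2 * FRAC_BITS
assert decode(prod >> FRAC_BITS) == 3.0
```

In an actual MPC protocol the truncation happens on shares, not on the reconstructed value; doing it "without any overhead", as Tetrad claims, is the nontrivial part.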
Related papers
- COMET: Towards Practical W4A4KV4 LLMs Serving [37.30529940231099]
Quantization is a compression technology to reduce the overhead of serving large language models (LLMs) on terminal devices and in cloud data centers.
We propose a novel mixed-precision quantization algorithm (FMPQ) that compresses most activations into 4-bit with negligible accuracy loss.
We integrate the optimized W4Ax kernel into our inference framework, COMET, and provide efficient management to support popular LLMs.
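The summary above does not spell out COMET's FMPQ algorithm; the sketch below shows only the basic symmetric per-tensor 4-bit quantization that such mixed-precision schemes start from. The clipping range [-8, 7] is the signed 4-bit integer range; everything else is an illustrative assumption.

```python
import numpy as np

def quantize_4bit(x: np.ndarray):
    """Symmetric per-tensor quantization to signed 4-bit integers in [-8, 7]."""
    scale = np.abs(x).max() / 7.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map the 4-bit codes back to approximate real values."""
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.2, 3.1, -3.1], dtype=np.float32)
q, s = quantize_4bit(x)
x_hat = dequantize(q, s)
# Rounding error is bounded by half a quantization step.
assert np.all(np.abs(x - x_hat) <= s / 2 + 1e-6)
```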
arXiv Detail & Related papers (2024-10-16T02:16:53Z) - 4D ASR: Joint Beam Search Integrating CTC, Attention, Transducer, and Mask Predict Decoders [53.297697898510194]
We propose a joint modeling scheme where four decoders share the same encoder -- we refer to this as 4D modeling.
To efficiently train the 4D model, we introduce a two-stage training strategy that stabilizes multitask learning.
In addition, we propose three novel one-pass beam search algorithms by combining three decoders.
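The paper's one-pass beam search algorithms are more involved than this, but joint decoding typically rests on a log-linear fusion of per-decoder scores. Below is a minimal hedged sketch of that fusion step; the decoder names and weights are illustrative, not the paper's configuration.

```python
def joint_score(scores: dict, weights: dict) -> float:
    """Weighted log-linear combination of per-decoder log-probabilities."""
    return sum(weights[name] * scores[name] for name in weights)

def rank(hypotheses, weights):
    """Order beam hypotheses by their fused score, best first."""
    return sorted(hypotheses,
                  key=lambda h: joint_score(h["scores"], weights),
                  reverse=True)

hyps = [
    {"text": "hello world", "scores": {"ctc": -2.0, "attn": -1.0, "rnnt": -1.5}},
    {"text": "hallo world", "scores": {"ctc": -1.0, "attn": -3.0, "rnnt": -2.5}},
]
weights = {"ctc": 0.3, "attn": 0.4, "rnnt": 0.3}
best = rank(hyps, weights)[0]
assert best["text"] == "hello world"
```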
arXiv Detail & Related papers (2024-06-05T05:18:20Z) - HEQuant: Marrying Homomorphic Encryption and Quantization for
Communication-Efficient Private Inference [2.498379184732383]
We propose HEQuant, which features low-precision-quantization-aware optimization for the HE-based protocols.
Compared with prior-art HE-based protocols, e.g., CrypTFlow2, Cheetah, and Iron, HEQuant achieves $3.5\sim 23.4\times$ communication reduction.
arXiv Detail & Related papers (2024-01-29T08:59:05Z) - QUIK: Towards End-to-End 4-Bit Inference on Generative Large Language
Models [57.04178959678024]
We show that the majority of inference computations for large generative models can be performed with both weights and activations being cast to 4 bits.
We achieve this via a hybrid quantization strategy called QUIK, which compresses most of the weights and activations to 4-bit.
We provide GPU kernels matching the QUIK format with highly-efficient layer-wise runtimes, which lead to practical end-to-end throughput improvements of up to 3.4x.
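A common ingredient of hybrid schemes like QUIK is separating a small set of outlier weight columns, kept in full precision, from the bulk that gets quantized to 4 bits. The selection criterion below (largest column L2 norm) is a hypothetical stand-in for QUIK's actual outlier detection, purely to illustrate the split.

```python
import numpy as np

def split_outlier_columns(W: np.ndarray, n_outlier: int):
    """Keep the n_outlier columns with the largest L2 norm in full precision;
    the remaining columns are the candidates for 4-bit quantization."""
    norms = np.linalg.norm(W, axis=0)
    mask = np.zeros(W.shape[1], dtype=bool)
    mask[np.argsort(norms)[-n_outlier:]] = True
    return W[:, mask], W[:, ~mask], mask

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16)).astype(np.float32)
W[:, 3] = 50.0                           # plant an outlier column
fp_cols, q_cols, mask = split_outlier_columns(W, n_outlier=2)
assert mask[3]                           # the planted outlier stays full precision
assert fp_cols.shape == (8, 2) and q_cols.shape == (8, 14)
```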
arXiv Detail & Related papers (2023-10-13T17:15:05Z) - UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation [93.88170217725805]
We propose a 3D medical image segmentation approach, named UNETR++, that offers both high-quality segmentation masks as well as efficiency in terms of parameters, compute cost, and inference speed.
The core of our design is the introduction of a novel efficient paired attention (EPA) block that efficiently learns spatial and channel-wise discriminative features.
Our evaluations on five benchmarks, Synapse, BTCV, ACDC, BRaTs, and Decathlon-Lung, reveal the effectiveness of our contributions in terms of both efficiency and accuracy.
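The defining idea of the EPA block is pairing a spatial attention branch (over tokens) with a channel attention branch (over features) that share query and key projections. The NumPy sketch below illustrates only that sharing; UNETR++'s real block adds projections to keep the spatial attention linear in the token count, which is omitted here, and all weight matrices are random placeholders.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def paired_attention(x, Wq, Wk, Wvs, Wvc):
    """Sketch of paired attention: a spatial branch (N x N map over tokens)
    and a channel branch (C x C map over features) sharing Q and K."""
    Q, K = x @ Wq, x @ Wk
    spatial = softmax(Q @ K.T / np.sqrt(Q.shape[1])) @ (x @ Wvs)
    channel = (x @ Wvc) @ softmax(Q.T @ K / np.sqrt(Q.shape[0]))
    return spatial + channel

rng = np.random.default_rng(0)
N, C = 6, 4                          # 6 tokens, 4 channels (toy sizes)
x = rng.standard_normal((N, C))
Wq, Wk, Wvs, Wvc = (rng.standard_normal((C, C)) for _ in range(4))
out = paired_attention(x, Wq, Wk, Wvs, Wvc)
assert out.shape == (N, C)
```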
arXiv Detail & Related papers (2022-12-08T18:59:57Z) - High-Throughput Secure Multiparty Computation with an Honest Majority in Various Network Settings [16.242352823823218]
We present novel protocols over rings for secure three-party computation (3PC) and malicious four-party computation (4PC) with one corruption.
Our protocols tolerate multiple arbitrarily weak network links between parties without any substantial decrease in performance.
They significantly reduce computational complexity by requiring up to half the number of basic instructions per gate compared to related work.
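As a point of reference for honest-majority protocols over rings, here is a minimal sketch of the standard replicated (2,3) secret sharing on which 3PC constructions of this kind are commonly built. It is not the paper's protocol; the 32-bit ring is an arbitrary choice, and only the communication-free operations (sharing, local addition, reconstruction by two parties) are shown.

```python
import random

MOD = 1 << 32  # ring Z_{2^32}

def rss_share(v: int):
    """Replicated (2,3)-sharing: v = x0 + x1 + x2 mod 2^32,
    and party i holds the pair (x_i, x_{i+1 mod 3})."""
    x = [random.randrange(MOD) for _ in range(2)]
    x.append((v - sum(x)) % MOD)
    return [(x[0], x[1]), (x[1], x[2]), (x[2], x[0])]

def rss_add(sh_a, sh_b):
    """Addition is local: each party adds its pair component-wise."""
    return [((a0 + b0) % MOD, (a1 + b1) % MOD)
            for (a0, a1), (b0, b1) in zip(sh_a, sh_b)]

def rss_reconstruct(p0, p1):
    """Two adjacent parties together hold all three summands."""
    return (p0[0] + p0[1] + p1[1]) % MOD

a, b = 12345, 67890
sh = rss_add(rss_share(a), rss_share(b))
assert rss_reconstruct(sh[0], sh[1]) == (a + b) % MOD
```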
arXiv Detail & Related papers (2022-06-08T09:46:37Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially on Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
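The summary does not spell out the reward function itself; a minimal hypothetical latency- and accuracy-aware reward of the kind described might look like the following, where `alpha`, `beta`, and the budget are illustrative parameters.

```python
def reward(accuracy: float, latency_ms: float, budget_ms: float,
           alpha: float = 1.0, beta: float = 0.01) -> float:
    """Hypothetical reward: credit inference accuracy, and penalize only
    the portion of latency that exceeds the budget."""
    return alpha * accuracy - beta * max(0.0, latency_ms - budget_ms)

assert reward(0.9, 50.0, 100.0) == 0.9            # within budget: no penalty
assert abs(reward(0.9, 150.0, 100.0) - 0.4) < 1e-9  # 50 ms over budget
```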
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - HAWQV3: Dyadic Neural Network Quantization [73.11579145354801]
Current low-precision quantization algorithms often have the hidden cost of conversion back and forth from floating point to quantized integer values.
We present HAWQV3, a novel mixed-precision integer-only quantization framework.
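"Dyadic" here means every rescaling factor is approximated by m / 2^k, so requantizing an integer accumulator needs only an integer multiply and a right shift, with no floating point at inference time. A minimal sketch of that mechanism (k = 16 is an illustrative choice, not HAWQV3's setting):

```python
def dyadic_approx(scale: float, k: int = 16):
    """Approximate a real rescaling factor by a dyadic number m / 2^k."""
    return round(scale * (1 << k)), k

def requantize(acc: int, m: int, k: int) -> int:
    """Integer-only rescale of an accumulator: multiply, then shift."""
    return (acc * m) >> k

m, k = dyadic_approx(0.0123)
# The integer-only path tracks the floating-point rescale to within one unit.
assert abs(requantize(10_000, m, k) - round(10_000 * 0.0123)) <= 1
```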
arXiv Detail & Related papers (2020-11-20T23:51:43Z) - SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning [16.17280000789628]
We propose SWIFT, a robust framework for a range of ML algorithms in the secure outsourced computation (SOC) setting.
SWIFT guarantees output delivery to the users irrespective of any adversarial behaviour.
We demonstrate our framework's practical relevance by benchmarking popular ML algorithms.
arXiv Detail & Related papers (2020-05-20T18:20:23Z) - XSepConv: Extremely Separated Convolution [60.90871656244126]
We propose a novel extremely separated convolutional block (XSepConv).
It fuses spatially separable convolutions into depthwise convolution to reduce both the computational cost and parameter size of large kernels.
XSepConv is designed to be an efficient alternative to vanilla depthwise convolution with large kernel sizes.
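The parameter saving from splitting a large depthwise kernel spatially is easy to quantify. The counts below cover only the k x 1 / 1 x k split; the full XSepConv block additionally fuses a small depthwise convolution, which is omitted here.

```python
def depthwise_params(channels: int, k: int) -> int:
    """Weights in a k x k depthwise convolution (one filter per channel)."""
    return channels * k * k

def separated_depthwise_params(channels: int, k: int) -> int:
    """Weights when the k x k kernel is split into a k x 1 followed by a
    1 x k depthwise pair, as in spatially separable convolution."""
    return channels * 2 * k

c, k = 64, 7
assert depthwise_params(c, k) == 3136            # 64 * 49
assert separated_depthwise_params(c, k) == 896   # 64 * 14
```

The saving grows linearly with kernel size: the ratio k*k versus 2*k is 3.5x at k = 7 and 4.5x at k = 9.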
arXiv Detail & Related papers (2020-02-27T11:46:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.