Secure Transformer Inference Protocol
- URL: http://arxiv.org/abs/2312.00025v2
- Date: Wed, 8 May 2024 02:20:09 GMT
- Title: Secure Transformer Inference Protocol
- Authors: Mu Yuan, Lan Zhang, Xiang-Yang Li
- Abstract summary: Security of model parameters and user data is critical for Transformer-based services, such as ChatGPT.
Recent strides in secure two-party protocols have successfully addressed security concerns in serving Transformer models, but their adoption is practically infeasible due to the prohibitive cryptographic overheads involved.
We present STIP, the first secure Transformer inference protocol without any inference accuracy loss.
- Score: 15.610303095235372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Security of model parameters and user data is critical for Transformer-based services, such as ChatGPT. While recent strides in secure two-party protocols have successfully addressed security concerns in serving Transformer models, their adoption is practically infeasible due to the prohibitive cryptographic overheads involved. Drawing insights from our hands-on experience in developing two real-world Transformer-based services, we identify the inherent efficiency bottleneck in the two-party assumption. To overcome this limitation, we propose a novel three-party threat model. Within this framework, we design a semi-symmetric permutation-based protection scheme and present STIP, the first secure Transformer inference protocol without any inference accuracy loss. Experiments on representative Transformer models in real systems show that STIP has practical security and outperforms state-of-the-art secure two-party protocols in efficiency by millions of times.
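As a concrete illustration of why a permutation-based scheme can be lossless, consider a single linear layer. The sketch below is not from the paper (variable names, shapes, and the single-layer setting are illustrative assumptions); it only shows that a secret feature-dimension permutation, applied consistently to the user's activations and the developer's weights, cancels exactly inside the matrix product, so an untrusted serving party computes the correct result without ever seeing the raw tensors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 3, 8, 4  # hypothetical toy dimensions

# Private tensors: the developer's weight and the user's activations.
W = rng.normal(size=(d_in, d_out))
X = rng.normal(size=(n, d_in))

# Secret shared by developer and user: a random permutation of the
# feature dimension, written as a permutation matrix P for clarity.
perm = rng.permutation(d_in)
P = np.eye(d_in)[:, perm]

# What the untrusted serving party receives:
W_prot = P.T @ W   # weight rows shuffled by the secret permutation
X_prot = X @ P     # activation columns shuffled the same way

# The server computes on protected tensors only.
Y_server = X_prot @ W_prot

# P is orthogonal (P @ P.T is the identity), so the permutations cancel
# exactly: no numerical error, hence no accuracy loss.
assert np.allclose(Y_server, X @ W)
```

In the full protocol, intermediate results stay permuted across layers (feature-wise permutations commute with element-wise activations and with softmax over the sequence dimension), and STIP's semi-symmetric scheme additionally governs how the three parties exchange transformed parameters; the sketch covers only the losslessness argument for a single layer.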
Related papers
- Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
We propose Authenticated Cyclic Redundancy Integrity Check (ACRIC)
ACRIC preserves backward compatibility without requiring additional hardware and is protocol agnostic.
We show that ACRIC offers robust security with minimal transmission overhead (< 1 ms).
arXiv Detail & Related papers (2024-11-21T18:26:05Z) - A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus lacks built-in security mechanisms, leaving in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z) - LogShield: A Transformer-based APT Detection System Leveraging Self-Attention [2.1256044139613772]
This paper proposes LogShield, a framework designed to detect APT attack patterns leveraging the power of self-attention in transformers.
We incorporate customized embedding layers to effectively capture the context of event sequences derived from provenance graphs.
Our framework achieved superior F1 scores of 98% and 95% on the two datasets respectively, surpassing the F1 scores of 96% and 94% obtained by LSTM models.
arXiv Detail & Related papers (2023-11-09T20:43:15Z) - The Efficacy of Transformer-based Adversarial Attacks in Security Domains [0.7156877824959499]
We evaluate the robustness of transformers to adversarial samples (relevant to system defenders) and their strength in generating adversarial samples (relevant to system attackers).
Our work emphasizes the importance of studying transformer architectures for attacking and defending models in security domains.
arXiv Detail & Related papers (2023-10-17T21:45:23Z) - Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers [56.00476706550681]
Table Transformer (TabTransformer) is a state-of-the-art neural network model for tabular data, and Differential Privacy (DP) is an essential component for ensuring data privacy.
In this paper, we explore the benefits of combining these two aspects together in the scenario of transfer learning.
arXiv Detail & Related papers (2023-09-12T19:08:26Z) - East: Efficient and Accurate Secure Transformer Framework for Inference [7.887332345182056]
We propose East, a framework that enables efficient and accurate secure Transformer inference.
Compared to Iron, East achieves about 1.8× lower communication and 1.2× lower runtime.
arXiv Detail & Related papers (2023-08-19T06:26:14Z) - ScionFL: Efficient and Robust Secure Quantized Aggregation [36.668162197302365]
We introduce ScionFL, the first secure aggregation framework for federated learning that operates efficiently on quantized inputs while simultaneously providing robustness against malicious clients.
We show that with no overhead for clients and moderate overhead for the server, we obtain comparable accuracy for standard FL benchmarks.
arXiv Detail & Related papers (2022-10-13T21:46:55Z) - Safe Self-Refinement for Transformer-based Domain Adaptation [73.8480218879]
Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain.
It is a challenging problem especially when a large domain gap lies between the source and target domains.
We propose a novel solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation), which brings improvements in two respects.
arXiv Detail & Related papers (2022-04-16T00:15:46Z) - Diverse Part Discovery: Occluded Person Re-identification with Part-Aware Transformer [95.02123369512384]
Occluded person re-identification (Re-ID) is a challenging task as persons are frequently occluded by various obstacles or other persons.
We propose a novel end-to-end Part-Aware Transformer (PAT) for occluded person Re-ID through diverse part discovery.
arXiv Detail & Related papers (2021-06-08T04:29:07Z) - TSS: Transformation-Specific Smoothing for Robustness Certification [37.87602431929278]
Motivated adversaries can mislead machine learning systems by perturbing test data using semantic transformations.
We provide TSS, a unified framework for certifying ML robustness against general adversarial semantic transformations.
We show TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2020-02-27T19:19:32Z) - Robustness Verification for Transformers [165.25112192811764]
We develop the first robustness verification algorithm for Transformers.
The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation.
These bounds also shed light on interpreting Transformers as they consistently reflect the importance of different words in sentiment analysis.
arXiv Detail & Related papers (2020-02-16T17:16:31Z)