THE-X: Privacy-Preserving Transformer Inference with Homomorphic
Encryption
- URL: http://arxiv.org/abs/2206.00216v2
- Date: Thu, 2 Jun 2022 01:06:12 GMT
- Title: THE-X: Privacy-Preserving Transformer Inference with Homomorphic
Encryption
- Authors: Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin
Jiang, Haoyi Zhou, Jianxin Li, Furu Wei
- Abstract summary: Privacy-preserving inference of transformer models is in demand among cloud service users.
We introduce $\textit{THE-X}$, an approximation approach for transformers that enables privacy-preserving inference of pre-trained models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As more and more pre-trained language models adopt on-cloud deployment,
privacy concerns grow quickly, mainly due to the exposure of plain-text user data
(e.g., search history, medical records, bank accounts). Privacy-preserving
inference of transformer models is in demand among cloud service users. To
protect privacy, an attractive choice is to compute only on ciphertext under
homomorphic encryption (HE). However, enabling inference of pre-trained models
on ciphertext data is difficult due to the complex computations in transformer
blocks, which are not yet supported by current HE tools. In this work, we
introduce $\textit{THE-X}$, an approximation approach for transformers that
enables privacy-preserving inference of pre-trained models developed with popular
frameworks. $\textit{THE-X}$ proposes a workflow to handle the complex
computations in transformer networks, including all the non-polynomial functions
such as GELU, softmax, and LayerNorm. Experiments show that $\textit{THE-X}$
enables transformer inference on encrypted data across different downstream
tasks, with negligible performance drop and a theory-guaranteed privacy
advantage.
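Since HE schemes natively evaluate only additions and multiplications on ciphertexts, the workflow the abstract describes amounts to replacing each non-polynomial operation with an arithmetic-only surrogate. Below is a minimal NumPy sketch of that general idea for the three functions named above; the coefficients, series degrees, and iteration counts are illustrative assumptions for this sketch, not the exact approximations used by THE-X.

```python
import numpy as np

# Arithmetic-only surrogates for the non-polynomial ops named in the abstract
# (GELU, softmax, LayerNorm). All constants, polynomial degrees, and iteration
# counts below are assumptions for illustration, NOT THE-X's own formulas.

def gelu_quadratic(x):
    # Hypothetical quadratic stand-in for GELU, a rough fit on roughly
    # [-2, 2]; a real system would fit coefficients to the activation range.
    return 0.25 * x ** 2 + 0.5 * x

def exp_taylor(x, terms=5):
    # Truncated Taylor series for exp(x): accurate only near zero, but built
    # purely from + and * (division by the plaintext constant k is just a
    # multiplication), which is what HE schemes evaluate natively.
    out = np.ones_like(x)
    term = np.ones_like(x)
    for k in range(1, terms):
        term = term * x / k
        out = out + term
    return out

def softmax_he_friendly(x, axis=-1):
    # Softmax with exp replaced by the polynomial above. The max-shift and
    # the final division are plaintext conveniences here; a ciphertext-only
    # pipeline would approximate the reciprocal too (e.g., by iteration).
    x = x - x.max(axis=axis, keepdims=True)
    e = exp_taylor(x)
    return e / e.sum(axis=axis, keepdims=True)

def inv_sqrt_newton(v, y0, iters=3):
    # Newton's iteration y <- y * (3 - v * y^2) / 2 converges to 1/sqrt(v)
    # using only + and *, given a reasonable plaintext initial guess y0.
    y = y0
    for _ in range(iters):
        y = y * (3.0 - v * y * y) * 0.5
    return y

def layernorm_he_friendly(x, gamma, beta, eps=1e-5):
    # LayerNorm with the non-polynomial 1/sqrt replaced by Newton iterations.
    mu = x.mean(axis=-1, keepdims=True)
    var = ((x - mu) ** 2).mean(axis=-1, keepdims=True)
    inv_std = inv_sqrt_newton(var + eps, y0=np.ones_like(var))
    return gamma * (x - mu) * inv_std + beta

if __name__ == "__main__":
    h = np.random.randn(2, 4, 8) * 0.5        # small activations keep the
    g, b = np.ones(8), np.zeros(8)            # approximations in range
    print(softmax_he_friendly(h).shape, layernorm_he_friendly(h, g, b).shape)
```

Under an actual HE backend (e.g., a CKKS implementation), each of these surrogates would be evaluated on ciphertexts, with the multiplicative depth of the polynomials bounded by the chosen encryption parameters.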
Related papers
- Encryption-Friendly LLM Architecture
Homomorphic encryption (HE) is a cryptographic protocol supporting arithmetic computations in encrypted states.
We propose a modified HE-friendly transformer architecture with an emphasis on inference following personalized (private) fine-tuning.
arXiv Detail & Related papers (2024-10-03T13:48:35Z)
- CipherFormer: Efficient Transformer Private Inference with Low Round Complexity
We propose CipherFormer, a novel transformer private inference scheme using homomorphic encryption and garbled circuits.
Compared with an advanced homomorphic encryption scheme on text classification tasks, our model improves accuracy by 3% to 11% while performing private inference with a 7.7x-11.9x speedup.
arXiv Detail & Related papers (2024-03-25T15:24:57Z)
- I can't see it but I can Fine-tune it: On Encrypted Fine-tuning of Transformers using Fully Homomorphic Encryption
We introduce BlindTuner, a privacy-preserving fine-tuning system that enables transformer training exclusively on homomorphically encrypted data for image classification.
Our findings highlight a substantial speed improvement of 1.5x to 600x over previous work in this domain.
arXiv Detail & Related papers (2024-02-14T10:15:43Z)
- InferDPT: Privacy-Preserving Inference for Black-box Large Language Model
InferDPT is the first practical framework for privacy-preserving inference of black-box LLMs.
RANTEXT is a novel differential privacy mechanism integrated into the perturbation module of InferDPT.
arXiv Detail & Related papers (2023-10-18T18:00:11Z)
- Learning in the Dark: Privacy-Preserving Machine Learning using Function Approximation
Learning in the Dark is a privacy-preserving machine learning model that classifies encrypted images with high accuracy.
It makes high-accuracy predictions by performing computations directly on encrypted data.
arXiv Detail & Related papers (2023-09-15T06:45:58Z)
- When approximate design for fast homomorphic computation provides differential privacy guarantees
Differential privacy (DP) and cryptographic primitives are popular countermeasures against privacy attacks.
In this paper, we design SHIELD, a probabilistic approximation algorithm for the argmax operator.
Although SHIELD could have other applications, we focus here on one setting and seamlessly integrate it into the SPEED collaborative training framework.
arXiv Detail & Related papers (2023-04-06T09:38:01Z)
- Federated Nearest Neighbor Machine Translation
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
arXiv Detail & Related papers (2023-02-23T18:04:07Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- Transformers Solve the Limited Receptive Field for Monocular Depth Prediction
We propose TransDepth, an architecture that benefits from both convolutional neural networks and transformers.
This is the first paper to apply transformers to pixel-wise prediction problems involving continuous labels.
arXiv Detail & Related papers (2021-03-22T18:00:13Z)
- Faster Secure Data Mining via Distributed Homomorphic Encryption
Homomorphic encryption (HE) has been receiving increasing attention for its ability to perform computations over encrypted data.
We propose a novel general distributed HE-based data mining framework as one step toward solving the scaling problem.
We verify the efficiency and effectiveness of our new framework by testing it on various data mining algorithms and benchmark datasets.
arXiv Detail & Related papers (2020-06-17T18:14:30Z)