PermLLM: Private Inference of Large Language Models within 3 Seconds under WAN
- URL: http://arxiv.org/abs/2405.18744v1
- Date: Wed, 29 May 2024 04:06:50 GMT
- Title: PermLLM: Private Inference of Large Language Models within 3 Seconds under WAN
- Authors: Fei Zheng, Chaochao Chen, Zhongxuan Han, Xiaolin Zheng
- Abstract summary: ChatGPT marks the arrival of the large language model (LLM) era.
PermLLM achieves two-party private inference of the ChatGLM-6B model at the speed of around 3s/token.
- Score: 19.014325509263536
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emergence of ChatGPT marks the arrival of the large language model (LLM) era. While LLMs demonstrate their power in a variety of fields, they also raise serious privacy concerns, as users' queries are sent to the model provider. On the other hand, deploying the LLM on the user's device would leak all the model data. Existing methods based on secure multiparty computation (MPC) manage to protect both the privacy of the model parameters and the user's queries. However, they require gigabytes of data transfer and several minutes to generate just one token, making them impractical for most real-world applications. To improve the efficiency of private LLM inference, we propose PermLLM, which accelerates the evaluation of non-linear functions using secure random permutation. Along with optimized secret sharing protocols and homomorphic encryption, PermLLM achieves two-party private inference of the ChatGLM-6B model at a speed of around 3s/token under a realistic network setting (10ms RTT and 1Gbps bandwidth), which is orders of magnitude faster than existing MPC solutions.
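The core trick — evaluating an elementwise non-linearity on secret-shared data by revealing only a secretly permuted copy of the values to one party — can be illustrated with a toy, single-process sketch. All names are illustrative, and the oblivious permute/un-permute steps (which PermLLM realizes with optimized secret sharing and homomorphic encryption) are simulated in the clear here:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, a typical transformer non-linearity
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def additive_share(x, rng):
    # Split x into two additive shares: x0 + x1 == x
    r = rng.standard_normal(x.shape)
    return r, x - r

def permuted_nonlinear_eval(x0, x1, f, rng):
    """Toy two-party evaluation of an elementwise non-linearity f.

    Party A samples a secret permutation pi; party B ends up seeing only
    pi(x) -- the multiset of values with their positions hidden. In the
    real protocol the combine and un-permute steps are done obliviously;
    here they are simulated in one process for clarity.
    """
    n = x0.shape[0]
    pi = rng.permutation(n)                          # known only to party A
    permuted_plain = (x0 + x1)[pi]                   # what party B learns: pi(x)
    y0, y1 = additive_share(f(permuted_plain), rng)  # B evaluates f and re-shares
    inv = np.empty(n, dtype=int)
    inv[pi] = np.arange(n)                           # A's inverse permutation
    return y0[inv], y1[inv]                          # shares of f(x), order restored

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
x0, x1 = additive_share(x, rng)
y0, y1 = permuted_nonlinear_eval(x0, x1, gelu, rng)
assert np.allclose(y0 + y1, gelu(x))
```

Because party B evaluates f on plaintext values, no expensive MPC circuit for the non-linearity is needed; the cost moves into the oblivious permutation, which is cheap relative to garbled-circuit evaluation of functions like GELU or softmax.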
Related papers
- Beyond the Turn-Based Game: Enabling Real-Time Conversations with Duplex Models [66.24055500785657]
Traditional turn-based chat systems prevent users from verbally interacting with the system while it is generating responses.
To overcome these limitations, we adapt existing LLMs to listen to users while generating output and to provide them with instant feedback.
We build a dataset consisting of alternating time slices of queries and responses as well as covering typical feedback types in instantaneous interactions.
arXiv Detail & Related papers (2024-06-22T03:20:10Z) - ConfusionPrompt: Practical Private Inference for Online Large Language Models [11.26620418652188]
Large language models (LLMs) are commonly deployed as online services, necessitating users to transmit informative prompts to cloud servers.
We present ConfusionPrompt, a novel private LLM inference framework designed to obfuscate the server by decomposing the prompt into sub-prompts.
We develop a $(\lambda, \mu, \rho)$-privacy model to formulate the requirement for a privacy-preserving group of prompts.
arXiv Detail & Related papers (2023-12-30T01:26:42Z) - Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes [53.4856038354195]
Pre-trained large language models (LLMs) need fine-tuning to improve their responsiveness to natural language instructions.
FedKSeed employs zeroth-order optimization with a finite set of random seeds.
It significantly reduces transmission requirements between the server and clients to just a few random seeds.
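The seed trick behind this can be sketched in a few lines (a minimal illustration under assumed names, not FedKSeed's actual protocol): a client estimates a directional derivative along a perturbation derived from a shared seed, so only the seed and one scalar need to travel, and the server regenerates the perturbation locally.

```python
import numpy as np

def zo_client_step(w, loss, seed, eps=1e-3):
    # The perturbation z is derived from a shared seed, so only
    # (seed, scalar) must be transmitted -- never a full gradient.
    z = np.random.default_rng(seed).standard_normal(w.shape)
    g = (loss(w + eps * z) - loss(w - eps * z)) / (2 * eps)  # two-point estimate
    return seed, g

def server_apply(w, seed, g, lr=0.01):
    z = np.random.default_rng(seed).standard_normal(w.shape)  # regenerate z
    return w - lr * g * z

w = np.array([3.0, -2.0])
loss = lambda v: float(np.sum(v ** 2))      # stand-in objective
seed, g = zo_client_step(w, loss, seed=7)
w_new = server_apply(w, seed, g)
assert loss(w_new) < loss(w)                # the zeroth-order step makes progress
```

With a finite set of candidate seeds, the server can even keep per-seed accumulators, so the total communication stays in the kilobyte range regardless of model size.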
arXiv Detail & Related papers (2023-12-11T13:03:21Z) - DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer [57.04801796205638]
Large Language Models (LLMs) have emerged as dominant tools for various tasks.
However, concerns surrounding data privacy present obstacles due to the tuned prompts' dependency on sensitive private information.
We present Differentially-Private Offsite Prompt Tuning (DP-OPT) to address this challenge.
arXiv Detail & Related papers (2023-11-27T02:01:10Z) - Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models [62.838689691468666]
We propose Federated Black-Box Prompt Tuning (Fed-BBPT) to optimally harness each local dataset.
Fed-BBPT capitalizes on a central server that aids local users in collaboratively training a prompt generator through regular aggregation.
Relative to extensive fine-tuning, Fed-BBPT sidesteps the memory challenges of storing and fine-tuning PTMs on local machines.
arXiv Detail & Related papers (2023-10-04T19:30:49Z) - FwdLLM: Efficient FedLLM using Forward Gradient [8.520892692833293]
This work introduces FwdLLM, an innovative FL protocol designed to enhance the FedLLM efficiency.
FwdLLM employs backpropagation (BP)-free training methods, requiring devices only to execute "perturbed inferences".
arXiv Detail & Related papers (2023-08-26T14:36:30Z) - LLM-Pruner: On the Structural Pruning of Large Language Models [65.02607075556742]
Large language models (LLMs) have shown remarkable capabilities in language understanding and generation.
We tackle the compression of LLMs within the bound of two constraints: being task-agnostic and minimizing the reliance on the original training dataset.
Our method, named LLM-Pruner, adopts structural pruning that selectively removes non-critical coupled structures.
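As a rough illustration of what "coupled structures" means (a minimal NumPy sketch with a toy importance score, not LLM-Pruner's actual criterion): removing a hidden neuron of an MLP requires deleting the row of the first weight matrix that produces it and the column of the second that consumes it, together.

```python
import numpy as np

def prune_mlp_neurons(W1, b1, W2, keep_ratio=0.5):
    """Remove low-importance hidden neurons from an MLP y = W2 @ relu(W1 @ x + b1).

    Each hidden neuron couples a row of W1 with a column of W2, so both
    must be removed together to keep the network consistent.
    """
    # Toy importance score: joint magnitude of a neuron's coupled weights
    score = np.linalg.norm(W1, axis=1) * np.linalg.norm(W2, axis=0)
    k = max(1, int(keep_ratio * W1.shape[0]))
    keep = np.sort(np.argsort(score)[-k:])   # indices of neurons to keep
    return W1[keep], b1[keep], W2[:, keep]

rng = np.random.default_rng(0)
W1, b1, W2 = rng.standard_normal((8, 4)), rng.standard_normal(8), rng.standard_normal((3, 8))
W1p, b1p, W2p = prune_mlp_neurons(W1, b1, W2, keep_ratio=0.5)
assert W1p.shape == (4, 4) and b1p.shape == (4,) and W2p.shape == (3, 4)
```

Because the pruned network keeps the same input/output interface, it can be briefly re-tuned afterwards without the original training dataset, which is the task-agnostic setting the paper targets.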
arXiv Detail & Related papers (2023-05-19T12:10:53Z) - Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy [8.8480262507008]
We propose PEA (Private, Efficient, Accurate), which consists of a secure DPSGD protocol and two optimization methods.
We implement PEA in two open-source MPL frameworks: TF-Encrypted and Queqiao.
Experiments show that PEA can train a differentially private classification model with an accuracy of 88% for CIFAR-10 within 7 minutes under the LAN setting.
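The DPSGD primitive at the heart of such protocols follows a standard recipe: clip each per-example gradient to bound its influence, average, and add noise calibrated to the clipping bound. A minimal plaintext sketch (the secure protocol performs these steps under MPC; names and defaults here are illustrative):

```python
import numpy as np

def clip_grad(g, clip):
    # Scale each per-example gradient so its L2 norm is at most `clip`
    return g * min(1.0, clip / max(np.linalg.norm(g), 1e-12))

def dpsgd_step(w, per_example_grads, clip=1.0, noise_mult=1.0, lr=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [clip_grad(g, clip) for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound yields the DP guarantee
    noise = rng.standard_normal(w.shape) * noise_mult * clip / len(per_example_grads)
    return w - lr * (mean + noise)

# Clipping caps the norm at 1.0 ...
assert abs(np.linalg.norm(clip_grad(np.array([3.0, 4.0]), 1.0)) - 1.0) < 1e-9
# ... and with the noise turned off the step reduces to plain averaged SGD
step = dpsgd_step(np.zeros(2), [np.array([0.3, 0.4])], noise_mult=0.0)
assert np.allclose(step, [-0.03, -0.04])
```

The per-example clipping is what makes DPSGD expensive inside MPC, which is why protocol-level optimizations like PEA's matter for training time.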
arXiv Detail & Related papers (2022-08-18T06:48:25Z) - Towards Differentially Private Text Representations [52.64048365919954]
We develop a new deep learning framework under an untrusted server setting.
For the randomization module, we propose a novel local differentially private (LDP) protocol to reduce the impact of privacy parameter $\epsilon$ on accuracy.
Analysis and experiments show that our framework delivers comparable or even better performance than the non-private framework and existing LDP protocols.
arXiv Detail & Related papers (2020-06-25T04:42:18Z)
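The generic LDP pattern these works build on — perturb a representation locally before it ever reaches the untrusted server — can be sketched with a plain Laplace mechanism (a standard textbook construction, not the paper's specific protocol; the clipping bound and parameter names are illustrative):

```python
import numpy as np

def ldp_perturb(v, epsilon, clip=1.0, rng=None):
    """Locally perturb a representation vector before sending it to the server.

    The vector is clipped to L1 norm `clip` (bounding sensitivity at 2*clip),
    then per-coordinate Laplace noise with scale 2*clip/epsilon is added.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    v = v * min(1.0, clip / max(np.linalg.norm(v, 1), 1e-12))
    return v + rng.laplace(0.0, 2.0 * clip / epsilon, size=v.shape)

v = np.array([0.3, -0.2])
# With a very large epsilon the noise vanishes and the (unclipped) vector survives
assert np.allclose(ldp_perturb(v, epsilon=1e6), v, atol=1e-3)
```

Smaller $\epsilon$ means heavier noise, which is exactly the accuracy tension the paper's protocol is designed to soften.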
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.