FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language
Models
- URL: http://arxiv.org/abs/2310.01467v1
- Date: Mon, 2 Oct 2023 16:43:14 GMT
- Title: FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language
Models
- Authors: Jingwei Sun, Ziyue Xu, Hongxu Yin, Dong Yang, Daguang Xu, Yiran Chen,
Holger R. Roth
- Abstract summary: Pre-trained language models (PLM) have revolutionized the NLP landscape, achieving stellar performances across diverse tasks.
This paper introduces Federated Black-box Prompt Tuning (FedBPT), a framework designed to address these challenges.
- Score: 22.29061931122386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained language models (PLM) have revolutionized the NLP landscape,
achieving stellar performances across diverse tasks. These models, while
benefiting from vast training data, often require fine-tuning on specific data
to cater to distinct downstream tasks. However, this data adaptation process
has inherent security and privacy concerns, primarily when leveraging
user-generated, device-residing data. Federated learning (FL) provides a
solution, allowing collaborative model fine-tuning without centralized data
collection. However, applying FL to fine-tune PLMs is hampered by challenges,
including restricted model parameter access, high computational requirements,
and communication overheads. This paper introduces Federated Black-box Prompt
Tuning (FedBPT), a framework designed to address these challenges. FedBPT does
not require the clients to access the model parameters. By focusing on training
optimal prompts and utilizing gradient-free optimization methods, FedBPT
reduces the number of exchanged variables, boosts communication efficiency, and
minimizes computational and storage costs. Experiments highlight the
framework's ability to drastically cut communication and memory costs while
maintaining competitive performance. Ultimately, FedBPT presents a promising
solution for efficient, privacy-preserving fine-tuning of PLMs in the age of
large language models.
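
As a rough illustration of the black-box setup described in the abstract, the sketch below (Python/NumPy, not the authors' code) has the server sample candidate soft prompts, clients score each candidate using only forward passes of a frozen model, and the server refine the prompt with a simple elite-averaging evolution strategy standing in for the paper's gradient-free optimizer. The client loss function, prompt length, population size, and decay schedule are all illustrative assumptions.

    # Hedged sketch of federated black-box prompt tuning: clients never see
    # model parameters or compute gradients; they only return scalar losses.
    import numpy as np

    PROMPT_DIM = 16        # length of the soft-prompt vector (assumed)
    POPULATION = 8         # candidate prompts sampled per round (assumed)
    ROUNDS = 20
    NUM_CLIENTS = 4
    rng = np.random.default_rng(0)

    def client_blackbox_loss(prompt, client_id):
        """Hypothetical stand-in for querying the frozen PLM on local data.
        A real client would run inference with `prompt` prepended and return
        the task loss; here we use a synthetic quadratic with a per-client shift."""
        target = np.full(PROMPT_DIM, 0.5) + 0.1 * client_id
        return float(np.sum((prompt - target) ** 2))

    # Server state: mean and step size of the prompt search distribution.
    mean, sigma = np.zeros(PROMPT_DIM), 0.5

    for rnd in range(ROUNDS):
        # Server samples candidate prompts and broadcasts them; only these
        # small vectors are exchanged, never model weights or gradients.
        candidates = mean + sigma * rng.standard_normal((POPULATION, PROMPT_DIM))

        # Each client evaluates every candidate with forward passes only and
        # uploads a vector of scalar losses.
        client_losses = np.array([
            [client_blackbox_loss(c, cid) for c in candidates]
            for cid in range(NUM_CLIENTS)
        ])
        fitness = client_losses.mean(axis=0)   # server aggregates across clients

        # Gradient-free update: move the mean toward the best-scoring candidates.
        elite = candidates[np.argsort(fitness)[: POPULATION // 2]]
        mean = elite.mean(axis=0)
        sigma *= 0.95                          # simple step-size decay

    print("final aggregated loss:", float(fitness.min()))

Per round, only POPULATION prompt vectors and NUM_CLIENTS x POPULATION scalar losses cross the network, which is the source of the communication and memory savings claimed in the abstract.
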
Related papers
- FedDTPT: Federated Discrete and Transferable Prompt Tuning for Black-Box Large Language Models [14.719919025265224]
Fine-tuning large language models (LLMs) with data from specific scenarios poses privacy leakage risks.
We propose for the first time a federated discrete and transferable prompt tuning, namely FedDTPT, for black-box large language models.
Our approach achieves higher accuracy, reduced communication overhead, and robustness to non-iid data in a black-box setting.
arXiv Detail & Related papers (2024-11-01T19:19:23Z)
- Communication-Efficient and Tensorized Federated Fine-Tuning of Large Language Models [24.07770417615704]
We introduce FedTT and FedTT+, methods for adapting Large Language Models.
FedTT is versatile and can be applied to both cross-silo FL and large-scale cross-device FL.
Our proposed methods successfully address data heterogeneity challenges and perform on par or even better than existing federated PEFT approaches.
arXiv Detail & Related papers (2024-10-16T23:50:39Z)
- FedPT: Federated Proxy-Tuning of Large Language Models on Resource-Constrained Edge Devices [10.01451891927236]
Federated Proxy-Tuning (FedPT) is a novel framework for federated fine-tuning of black-box large LMs.
FedPT can significantly reduce computation, communication, and memory overhead while maintaining competitive performance.
arXiv Detail & Related papers (2024-10-01T03:20:39Z)
- Save It All: Enabling Full Parameter Tuning for Federated Large Language Models via Cycle Block Gradient Descent [15.463595798992621]
Large language models (LLMs) have revolutionized the deep learning paradigm, yielding impressive results across a wide array of tasks.
Existing solutions make the unrealistic assumption that the entire model is exchanged for training.
We introduce a novel method for the efficient training and fine-tuning of LLMs in FL, with minimal resource consumption.
arXiv Detail & Related papers (2024-06-17T03:49:44Z)
- Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes [53.4856038354195]
Pre-trained large language models (LLMs) need fine-tuning to improve their responsiveness to natural language instructions.
FedKSeed employs zeroth-order optimization with a finite set of random seeds.
It significantly reduces transmission requirements between the server and clients to just a few random seeds (a sketch of this seed-based scheme appears after this list).
arXiv Detail & Related papers (2023-12-11T13:03:21Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL becomes an unneglectable challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization [71.87335804334616]
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data.
The training process of Large Language Models (LLMs) generally incurs the update of significant parameters.
This paper proposes an efficient partial prompt tuning approach to improve performance and efficiency simultaneously.
arXiv Detail & Related papers (2023-10-23T16:37:59Z)
- ZooPFL: Exploring Black-box Foundation Models for Personalized Federated Learning [95.64041188351393]
This paper endeavors to solve both the challenges of limited resources and personalization.
We propose a method named ZOOPFL that uses Zeroth-Order Optimization for Personalized Federated Learning.
To reduce the computation costs and enhance personalization, we propose input surgery to incorporate an auto-encoder with low-dimensional and client-specific embeddings.
arXiv Detail & Related papers (2023-10-08T12:26:13Z)
- Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models [62.838689691468666]
We propose Federated Black-Box Prompt Tuning (Fed-BBPT) to optimally harness each local dataset.
Fed-BBPT capitalizes on a central server that aids local users in collaboratively training a prompt generator through regular aggregation.
Relative to extensive fine-tuning, Fed-BBPT proficiently sidesteps memory challenges tied to PTM storage and fine-tuning on local machines.
arXiv Detail & Related papers (2023-10-04T19:30:49Z)
- SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models [28.764782216513037]
Federated Learning (FL) can benefit from distributed and private data of the FL edge clients for fine-tuning.
We propose a method called SLoRA, which overcomes the key limitations of LoRA in high heterogeneous data scenarios.
Our experimental results demonstrate that SLoRA achieves performance comparable to full fine-tuning.
arXiv Detail & Related papers (2023-08-12T10:33:57Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
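
For the seed-based scheme summarized in the FedKSeed entry above, the following sketch illustrates why sharing a finite pool of random seeds lets clients communicate an update as a handful of scalars that any party can expand back into a full-dimensional step. It is illustrative only: the local loss, parameter count, and two-point zeroth-order estimator are assumptions, not the paper's implementation.

    # Hedged sketch of a seed-based zeroth-order federated update: clients send
    # only (seed, scalar) pairs, never parameter-sized tensors.
    import numpy as np

    DIM = 1000          # pretend model-parameter count (assumed)
    K_SEEDS = 5         # finite pool of shared random seeds
    LR, EPS = 0.05, 1e-3
    NUM_CLIENTS = 3
    seed_pool = list(range(K_SEEDS))

    def direction(seed):
        """Perturbation direction reproducible from the seed alone."""
        return np.random.default_rng(seed).standard_normal(DIM)

    def local_loss(params, client_id):
        """Hypothetical stand-in for a client's task loss on private data."""
        target = np.linspace(-1, 1, DIM) * (1 + 0.1 * client_id)
        return float(np.mean((params - target) ** 2))

    params = np.zeros(DIM)              # shared starting point
    for rnd in range(50):
        # Each client estimates one projected gradient per seed from two
        # forward evaluations (zeroth-order) and uploads only K scalars.
        scalar_grads = np.zeros(K_SEEDS)
        for cid in range(NUM_CLIENTS):
            for k in seed_pool:
                z = direction(k)
                g = (local_loss(params + EPS * z, cid)
                     - local_loss(params - EPS * z, cid)) / (2 * EPS)
                scalar_grads[k] += g / NUM_CLIENTS
        # Server (and every client) rebuilds the full update from seeds alone.
        update = sum(scalar_grads[k] * direction(k) for k in seed_pool)
        params -= LR * update / K_SEEDS

    print("loss after training:", local_loss(params, client_id=0))

Because direction(k) is reproducible from its seed, the aggregated (seed, scalar) pairs fully determine the model update without transmitting anything of the model's size.
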
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.