ZooPFL: Exploring Black-box Foundation Models for Personalized Federated
Learning
- URL: http://arxiv.org/abs/2310.05143v1
- Date: Sun, 8 Oct 2023 12:26:13 GMT
- Title: ZooPFL: Exploring Black-box Foundation Models for Personalized Federated
Learning
- Authors: Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang
Chen, Qiang Yang, Xing Xie, Xiangyang Ji
- Abstract summary: This paper endeavors to solve two challenges at once: limited resources and personalization.
We propose a method named ZOOPFL that uses Zeroth-Order Optimization for Personalized Federated Learning.
To reduce the computation costs and enhance personalization, we propose input surgery to incorporate an auto-encoder with low-dimensional and client-specific embeddings.
- Score: 95.64041188351393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When personalized federated learning (FL) meets large foundation models, new
challenges arise from various limitations in resources. In addition to typical
limitations such as data, computation, and communication costs, access to the
models is also often limited. This paper endeavors to solve both the challenge
of limited resources and the challenge of personalization, i.e., distribution shifts between
clients. To do so, we propose a method named ZOOPFL that uses Zeroth-Order
Optimization for Personalized Federated Learning. ZOOPFL avoids direct
interference with the foundation models and instead learns to adapt its inputs
through zeroth-order optimization. In addition, we employ simple yet effective
linear projections to remap its predictions for personalization. To reduce the
computation costs and enhance personalization, we propose input surgery to
incorporate an auto-encoder with low-dimensional and client-specific
embeddings. We provide theoretical support for ZOOPFL to analyze its
convergence. Extensive empirical experiments on computer vision and natural
language processing tasks using popular foundation models demonstrate its
effectiveness for FL on black-box foundation models.
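To make the black-box constraint concrete, the sketch below shows two-point zeroth-order gradient estimation, the kind of query-only optimization on which input adaptation like ZOOPFL's can be built. The toy linear "foundation model", dimensions, and hyperparameters are illustrative assumptions, not the paper's implementation; in ZOOPFL the adapted input comes from an auto-encoder with client-specific embeddings, and the model's outputs are further remapped by a learned linear projection.

```python
import numpy as np

# Minimal sketch: adapt the *input* of a frozen, query-only model with
# two-point zeroth-order gradient estimates. All shapes and step sizes
# are assumptions for illustration.

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 64))          # frozen weights, hidden behind a query API

def black_box(x):
    return W @ x                           # we may only call this, never read W

def loss(logits, y):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[y] + 1e-12)           # cross-entropy

def zo_grad(delta, x, y, mu=1e-2, n_samples=8):
    """Estimate d loss / d delta from forward queries only."""
    g = np.zeros_like(delta)
    for _ in range(n_samples):
        u = rng.standard_normal(delta.shape)
        l_plus = loss(black_box(x + delta + mu * u), y)
        l_minus = loss(black_box(x + delta - mu * u), y)
        g += (l_plus - l_minus) / (2 * mu) * u
    return g / n_samples

delta = np.zeros(64)                       # learnable input perturbation
x, y = rng.standard_normal(64), 3
for _ in range(200):
    delta -= 0.05 * zo_grad(delta, x, y)   # the model itself is never touched
```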
Related papers
- Personalized Federated Learning with Adaptive Feature Aggregation and Knowledge Transfer [0.0]
Federated Learning (FL) is popular as a privacy-preserving machine learning paradigm for generating a single model on decentralized data.
We propose a new method, personalized Federated learning with Adaptive Feature Aggregation and Knowledge Transfer (FedAFK).
We conduct extensive experiments on three datasets in two widely-used heterogeneous settings and show the superior performance of our proposed method over thirteen state-of-the-art baselines.
arXiv Detail & Related papers (2024-10-19T11:32:39Z)
- Weak-to-Strong Extrapolation Expedites Alignment [135.12769233630362]
We propose a method called ExPO to boost models' alignment with human preference.
We demonstrate that ExPO consistently improves off-the-shelf DPO/RLHF models.
We shed light on the essence of ExPO: amplifying the reward signal learned during alignment training.
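A hedged sketch of the weight-extrapolation idea the title suggests: move further along the direction from a weaker checkpoint toward its aligned counterpart. The per-tensor state-dict interface and the value of alpha are assumptions.

```python
# theta = aligned + alpha * (aligned - weak), applied tensor by tensor.
def expo_extrapolate(weak, aligned, alpha=0.5):
    return {k: aligned[k] + alpha * (aligned[k] - weak[k]) for k in aligned}

weak = {"w": 1.0}
aligned = {"w": 1.2}
print(expo_extrapolate(weak, aligned))   # {'w': 1.3}
```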
arXiv Detail & Related papers (2024-04-25T17:39:50Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), however, the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
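As one concrete PEFT instance, the sketch below shows a LoRA-style low-rank adapter; in an FL setting, clients would train and exchange only the small factors while the pretrained weight stays frozen. Shapes, rank, and initialization are illustrative assumptions, not prescriptions from the survey.

```python
import numpy as np

# LoRA-style adapter: W is frozen and never communicated; only the small
# factors A and B are trained (and, in FL, uploaded).
class LoRALinear:
    def __init__(self, d_in=768, d_out=768, rank=8, scale=1.0):
        rng = np.random.default_rng(0)
        self.W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01  # trainable factor
        self.B = np.zeros((d_out, rank))                   # zero-init: delta starts at 0
        self.scale = scale

    def forward(self, x):
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self):
        return self.A.size + self.B.size                   # rank*(d_in+d_out) << d_in*d_out
```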
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning [40.16581292336117]
In many applications of federated learning (FL), clients desire models that are personalized using their local data, yet are also robust in the sense that they retain general global knowledge.
It is critical to understand how to navigate this personalization-versus-robustness trade-off when designing federated systems.
arXiv Detail & Related papers (2023-10-06T23:46:33Z)
- Federated Short-Term Load Forecasting with Personalization Layers for Heterogeneous Clients [0.7252027234425334]
We propose a personalized FL algorithm (PL-FL) enabling FL to handle personalization layers.
PL-FL is implemented by using the Argonne Privacy-Preserving Federated Learning package.
We test the forecast performance of models trained on the NREL ComStock dataset.
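A minimal sketch of the personalization-layers pattern PL-FL builds on, assuming a simple base/head parameter split: the server averages only the shared base, and each client's head never leaves the client. Names and the aggregation loop are illustrative, not the Argonne package's API.

```python
import numpy as np

# Average shared "base" parameters across clients; personal "head" layers
# stay local and are never aggregated.
def aggregate_shared(client_models, shared_keys):
    avg = {k: np.mean([m[k] for m in client_models], axis=0) for k in shared_keys}
    for m in client_models:
        m.update(avg)          # overwrite base with the global average
    return client_models       # personal keys stay client-specific

clients = [{"base.w": np.full(4, float(i)), "head.w": np.full(2, float(i))}
           for i in range(3)]
clients = aggregate_shared(clients, shared_keys=["base.w"])
print(clients[0]["base.w"], clients[0]["head.w"])   # averaged base, untouched head
```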
arXiv Detail & Related papers (2023-09-22T21:57:52Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
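A hedged sketch of the underlying idea, not FedGMM's actual federated EM procedure: fit a Gaussian mixture to a client's inputs and flag low-likelihood points as novel samples. The data, component count, and threshold are synthetic assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_data = rng.standard_normal((500, 8)) + 2.0    # this client's inputs

# Model the client's input distribution with a small Gaussian mixture.
gmm = GaussianMixture(n_components=3, random_state=0).fit(client_data)

x_new = rng.standard_normal((5, 8))                  # shifted, likely "novel"
log_lik = gmm.score_samples(x_new)                   # per-sample log-likelihood
threshold = np.quantile(gmm.score_samples(client_data), 0.01)
print(log_lik < threshold)                           # True = flagged as novel
```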
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models [52.411508216448716]
Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to the limited computational resources or efficiency considerations.
This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to the downstream task.
arXiv Detail & Related papers (2023-04-05T07:28:33Z)
- Tensor Decomposition based Personalized Federated Learning [12.420951968273574]
Federated learning (FL) is a new distributed machine learning framework that can achieve reliable collaborative training without collecting users' private data.
Due to FL's frequent communication and average-aggregation strategy, it faces challenges in scaling to statistically diverse data and large-scale models.
We propose a personalized FL framework, named Tensor Decomposition based Personalized Federated learning (TDPFed), in which we design a novel tensorized local model with tensorized linear layers and convolutional layers to reduce the communication cost.
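A low-rank factorized layer serves as a simplified stand-in for TDPFed's tensorized layers (the paper uses a richer tensor decomposition): uploading the two factors instead of the full weight cuts the per-round communication cost. Dimensions and rank are assumptions.

```python
import numpy as np

d_in, d_out, rank = 1024, 1024, 32
rng = np.random.default_rng(0)
U = rng.standard_normal((d_out, rank)) * 0.02   # trainable factor
V = rng.standard_normal((rank, d_in)) * 0.02    # trainable factor

def forward(x):
    return U @ (V @ x)                          # acts like a d_out x d_in layer

full_cost = d_in * d_out                        # floats per round, unfactorized
factored_cost = rank * (d_in + d_out)           # floats per round, factorized
print(full_cost / factored_cost)                # 16x fewer values to upload
```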
arXiv Detail & Related papers (2022-08-27T08:09:14Z)
- QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning [8.420943739336067]
Federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server.
We introduce a quantized and personalized FL algorithm, QuPeD, that facilitates collective (personalized model compression) training.
Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
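A hedged sketch of the distillation ingredient in QuPeD-style training, where a client's small quantized model matches a softened teacher distribution; the temperature and logits are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# KL(teacher || student) on temperature-softened distributions.
def distill_loss(student_logits, teacher_logits, T=2.0):
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s + 1e-12))))

t = np.array([2.0, 0.5, -1.0])   # teacher (e.g., full-precision model)
s = np.array([1.0, 0.0, -0.5])   # student (e.g., quantized personal model)
print(distill_loss(s, t))
```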
arXiv Detail & Related papers (2021-07-29T10:55:45Z)
- UVeQFed: Universal Vector Quantization for Federated Learning [179.06583469293386]
Federated learning (FL) is an emerging approach to training learning models without requiring users to share their possibly private labeled data.
In FL, each user trains its copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model.
We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only minimal distortion.
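A hedged sketch of subtractive dithered quantization, the universal-quantization flavor UVeQFed builds on; the paper uses multidimensional lattice quantizers, so this scalar version, the step size, and the shared-seed convention are simplifications.

```python
import numpy as np

def encode(update, step, rng):
    # Dither comes from a seed shared with the server.
    dither = rng.uniform(-step / 2, step / 2, size=update.shape)
    q = step * np.round((update + dither) / step)   # what the client transmits
    return q, dither

def decode(q, dither):
    return q - dither                               # server subtracts the same dither

rng = np.random.default_rng(0)
w = np.random.default_rng(1).standard_normal(1000) * 0.1
q, dither = encode(w, step=0.05, rng=rng)
print(np.max(np.abs(decode(q, dither) - w)))        # error bounded by step/2
```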
arXiv Detail & Related papers (2020-06-05T07:10:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.