FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the
Power of Heterogeneous Clients
- URL: http://arxiv.org/abs/2311.11227v2
- Date: Tue, 12 Mar 2024 05:26:45 GMT
- Title: FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the
Power of Heterogeneous Clients
- Authors: Shangchao Su, Bin Li, Xiangyang Xue
- Abstract summary: In real-world federated scenarios, there often exist a multitude of heterogeneous clients with varying computation and communication resources.
We propose a novel federated tuning algorithm, FedRA.
In each communication round, FedRA randomly generates an allocation matrix.
It reorganizes a small number of layers from the original model based on the allocation matrix and fine-tunes using adapters.
- Score: 50.13097183691517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing availability of Foundation Models, federated tuning has
garnered attention in the field of federated learning, utilizing data and
computation resources from multiple clients to collaboratively fine-tune
foundation models. However, in real-world federated scenarios, there often
exist a multitude of heterogeneous clients with varying computation and
communication resources, rendering them incapable of supporting the entire
model fine-tuning process. In response to this challenge, we propose a novel
federated tuning algorithm, FedRA. The implementation of FedRA is
straightforward and can be seamlessly integrated into any transformer-based
model without the need for further modification to the original model.
Specifically, in each communication round, FedRA randomly generates an
allocation matrix. For resource-constrained clients, it reorganizes a small
number of layers from the original model based on the allocation matrix and
fine-tunes using adapters. Subsequently, the server aggregates the updated
adapter parameters from the clients according to the current allocation matrix
into the corresponding layers of the original model. Notably, FedRA also
supports scenarios in which no client can accommodate the entire global
model. We conduct experiments on two
large-scale image datasets, DomainNet and NICO++, under various non-iid
settings. The results demonstrate that FedRA outperforms the compared methods
significantly. The source code is available at
https://github.com/leondada/FedRA.
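The allocation-and-aggregation loop described in the abstract is easy to sketch. Below is a minimal illustration, not the authors' implementation: `num_layers`, `client_depths`, and the flat per-layer adapter arrays are assumptions for exposition, whereas the real FedRA reorganizes transformer layers and trains adapter modules on each client.

```python
import numpy as np

def random_allocation(num_layers, client_depths, rng):
    """Draw one round's allocation matrix: row i marks which of the
    global model's layers client i receives (as many as it can fit)."""
    alloc = np.zeros((len(client_depths), num_layers), dtype=bool)
    for i, depth in enumerate(client_depths):
        alloc[i, rng.choice(num_layers, size=depth, replace=False)] = True
    return alloc

def aggregate_adapters(client_updates, alloc):
    """Server step: average each layer's adapter over exactly those
    clients that were assigned the layer in this round."""
    merged = {}
    for layer in range(alloc.shape[1]):
        holders = np.flatnonzero(alloc[:, layer])
        if holders.size:  # a layer held by no client keeps its old adapter
            merged[layer] = sum(client_updates[i][layer] for i in holders) / holders.size
    return merged
```

In a full round, each client would fine-tune adapters only on its assigned layers, so `client_updates[i]` holds entries exactly for the layers where `alloc[i]` is true.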
Related papers
- Collaborative and Efficient Personalization with Mixtures of Adaptors [5.195669033269619]
We propose a parameter-efficient framework to tackle multi-task learning problems.
We call our framework Federated Low-Rank Adaptive Learning (FLoRAL).
We show promising experimental results on synthetic datasets and real-world federated multi-task problems.
arXiv Detail & Related papers (2024-10-04T15:11:15Z)
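The summary above does not spell out FLoRAL's exact formulation, but the "mixture of adaptors" idea can be illustrated generically: several shared low-rank (LoRA-style) adaptors combined by client-specific mixture weights. Everything below, including the shapes and the softmax parameterization, is an assumption for illustration only.

```python
import numpy as np

def mixture_lora_forward(x, W, As, Bs, mix_logits):
    """Dense layer augmented with a mixture of K shared low-rank adaptors:
    y = x @ (W + sum_k pi_k * A_k @ B_k), pi = softmax(mix_logits)."""
    pi = np.exp(mix_logits - mix_logits.max())
    pi /= pi.sum()                      # client-specific mixture weights
    delta = sum(p * (A @ B) for p, A, B in zip(pi, As, Bs))  # low-rank update
    return x @ (W + delta)
```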
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Federated Adaptive Prompt Tuning for Multi-Domain Collaborative Learning [44.604485649167216]
Federated learning (FL) enables multiple clients to collaboratively train a global model without disclosing their data.
We propose a federated adaptive prompt tuning algorithm, FedAPT, for multi-domain collaborative image classification.
arXiv Detail & Related papers (2022-11-15T03:10:05Z)
- FedAvg with Fine Tuning: Local Updates Lead to Representation Learning [54.65133770989836]
The Federated Averaging (FedAvg) algorithm alternates between a few local gradient updates at client nodes and a model-averaging update at the server.
We show that the reason behind the generalizability of FedAvg's output is its power in learning the common data representation among the clients' tasks.
We also provide empirical evidence demonstrating FedAvg's representation learning ability in federated image classification with heterogeneous data.
arXiv Detail & Related papers (2022-05-27T00:55:24Z)
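The alternation described in this summary is easy to state concretely. A minimal sketch of one FedAvg round, with `grad_fn`, `local_steps`, and `lr` as illustrative stand-ins for any per-client loss gradient and its hyperparameters:

```python
import numpy as np

def fedavg_round(global_w, client_datasets, grad_fn, local_steps=5, lr=0.01):
    """One FedAvg round: each client takes a few local SGD steps from
    the current global weights; the server then averages the results."""
    local_models = []
    for data in client_datasets:
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * grad_fn(w, data)  # local gradient update at the client
        local_models.append(w)
    return np.mean(local_models, axis=0)  # model averaging at the server
```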
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
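The claimed correspondence can be made concrete in one line. A sketch of the argument, assuming an isotropic prior variance $\sigma^2$: with client parameters $\theta_1,\dots,\theta_N$ and the server parameters $\theta_s$ acting as the prior mean, the hard-EM M-step is

```latex
\theta_s^{\star}
  = \arg\max_{\theta_s} \sum_{i=1}^{N} \log \mathcal{N}\!\left(\theta_i;\, \theta_s,\, \sigma^2 I\right)
  = \arg\min_{\theta_s} \sum_{i=1}^{N} \lVert \theta_i - \theta_s \rVert^{2}
  = \frac{1}{N} \sum_{i=1}^{N} \theta_i ,
```

which is exactly FedAvg's (unweighted) server average, while the clients' local fitting of each $\theta_i$ under the prior plays the role of the E-step.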
- FedNS: Improving Federated Learning for collaborative image classification on mobile clients [22.980223900446997]
Federated Learning (FL) is a paradigm that aims to support loosely connected clients in learning a global model.
We propose a new approach, termed Federated Node Selection (FedNS), for the server's global model aggregation in the FL setting.
We show with experiments from multiple datasets and networks that FedNS can consistently achieve improved performance over FedAvg.
arXiv Detail & Related papers (2021-01-20T06:45:46Z)
- FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning [23.726336635748783]
Federated learning aims to collaboratively train a strong global model by accessing users' locally trained models but not their own data.
A crucial step is therefore to aggregate local models into a global model, which has been shown to be challenging when users have non-i.i.d. data.
We propose a novel aggregation algorithm named FedBE, which takes a Bayesian inference perspective by sampling higher-quality global models.
arXiv Detail & Related papers (2020-09-04T01:18:25Z)
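The "sampling higher-quality global models" step can be illustrated with a deliberately simplified stand-in: fit a diagonal Gaussian over the clients' weight vectors and draw candidate global models from it. The full method also ensembles the samples and distills them into a single model on server-side data; none of the names below come from the paper.

```python
import numpy as np

def sample_global_models(client_weights, num_samples, rng):
    """Fit a per-coordinate Gaussian to the client models and sample
    candidate global models from it (a simplified FedBE-style step)."""
    W = np.stack(client_weights)               # (num_clients, dim)
    mu, sigma = W.mean(axis=0), W.std(axis=0) + 1e-8
    return [rng.normal(mu, sigma) for _ in range(num_samples)]
```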
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
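The layer-wise fusion idea can be sketched with a hard one-to-one neuron matching in place of the paper's soft optimal-transport coupling. The Hungarian assignment below is a simplification; a faithful implementation would also permute the following layer's input weights accordingly.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layer(Wa, Wb):
    """Align Wb's neurons (rows) to Wa's before averaging, so that
    functionally similar units are fused with each other."""
    cost = ((Wa[:, None, :] - Wb[None, :, :]) ** 2).sum(axis=-1)  # pairwise distances
    rows, cols = linear_sum_assignment(cost)
    return 0.5 * (Wa[rows] + Wb[cols])
```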