FedDWA: Personalized Federated Learning with Dynamic Weight Adjustment
- URL: http://arxiv.org/abs/2305.06124v3
- Date: Sun, 16 Jul 2023 09:06:41 GMT
- Title: FedDWA: Personalized Federated Learning with Dynamic Weight Adjustment
- Authors: Jiahao Liu, Jiang Wu, Jinyu Chen, Miao Hu, Yipeng Zhou, Di Wu
- Abstract summary: We propose a new PFL algorithm called FedDWA (Federated Learning with Dynamic Weight Adjustment) to address the problem.
FedDWA computes personalized aggregation weights based on collected models from clients.
We conduct extensive experiments using five real datasets and the results demonstrate that FedDWA can significantly reduce the communication traffic and achieve much higher model accuracy than the state-of-the-art approaches.
- Score: 20.72576355616359
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Different from conventional federated learning, personalized federated
learning (PFL) is able to train a customized model for each individual client
according to its unique requirement. The mainstream approach is to adopt a kind
of weighted aggregation method to generate personalized models, in which
weights are determined by the loss value or model parameters among different
clients. However, such methods require clients to download other clients'
models, which not only sharply increases communication traffic but also
potentially infringes on data privacy. In this paper, we propose a new PFL algorithm called
FedDWA (Federated Learning with Dynamic Weight Adjustment) to address
the above problem, which leverages the parameter server (PS) to compute
personalized aggregation weights based on collected models from clients. In
this way, FedDWA can capture similarities between clients with much less
communication overhead. More specifically, we formulate the PFL problem as an
optimization problem by minimizing the distance between personalized models and
guidance models, so as to customize aggregation weights for each client.
Guidance models are obtained by local one-step-ahead adaptation on
individual clients. Finally, we conduct extensive experiments using five real
datasets and the results demonstrate that FedDWA can significantly reduce the
communication traffic and achieve much higher model accuracy than the
state-of-the-art approaches.
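The abstract describes the weight computation only at a high level, so the following is a minimal, hypothetical Python sketch of the idea as stated here: each client uploads its model together with a guidance model obtained by one local adaptation step, and the parameter server turns model-to-guidance distances into per-client aggregation weights. The distance-to-softmax weighting and all names (guidance_model, personalized_weights, lr, temperature) are illustrative assumptions, not the paper's actual formulation or code.

import numpy as np

def guidance_model(local_model, local_grad, lr=0.01):
    # One-step-ahead adaptation: a single gradient step on the client's own
    # data yields the guidance model mentioned in the abstract.
    return local_model - lr * local_grad

def personalized_weights(uploaded_models, guidance, temperature=1.0):
    # One plausible instantiation of "minimizing the distance between
    # personalized models and guidance models": score every uploaded model by
    # its distance to this client's guidance model and normalize the scores.
    dists = np.array([np.linalg.norm(m - guidance) for m in uploaded_models])
    logits = -dists / temperature
    logits -= logits.max()                      # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

def personalized_model(uploaded_models, weights):
    # Personalized model for a client: weighted average of all uploaded models.
    return sum(w * m for w, m in zip(weights, uploaded_models))

# Toy usage with flattened parameter vectors for five clients.
rng = np.random.default_rng(0)
models = [rng.normal(size=16) for _ in range(5)]
grads = [rng.normal(size=16) for _ in range(5)]
g0 = guidance_model(models[0], grads[0])        # client 0's guidance model
w0 = personalized_weights(models, g0)           # weights computed at the server
theta0 = personalized_model(models, w0)         # client 0's personalized model

In the paper the weights come from solving the stated optimization problem; the softmax-over-distances rule above is only a stand-in that preserves the intuition that models closer to the guidance model receive larger weights.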
Related papers
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In some real-world applications, data samples are usually distributed on local devices.
In this paper, we focus on a special kind of Non-I.I.D. scene where clients own incomplete classes.
Our proposed algorithm named MAP could simultaneously achieve the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z) - DA-PFL: Dynamic Affinity Aggregation for Personalized Federated Learning [13.393529840544117]
Existing personalized federated learning models prefer to aggregate clients with similar data distributions to improve the performance of the learned models.
We propose a novel Dynamic Affinity-based Personalized Federated Learning model (DA-PFL) to alleviate the class imbalance problem.
arXiv Detail & Related papers (2024-03-14T11:12:10Z) - Personalized Federated Learning of Probabilistic Models: A PAC-Bayesian
Approach [42.59649764999974]
Federated learning aims to infer a shared model from private and decentralized data stored locally by multiple clients.
We propose a PFL algorithm named PAC-PFL for learning probabilistic models within a PAC-Bayesian framework.
Our algorithm collaboratively learns a shared hyper-posterior and treats each client's posterior inference as the personalization step.
arXiv Detail & Related papers (2024-01-16T13:30:37Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Federated Learning of Shareable Bases for Personalization-Friendly Image
Classification [54.72892987840267]
FedBasis learns a small set of shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
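As a rough, hypothetical illustration of the basis-combination idea summarized above (not FedBasis's actual code or parameterization), a personalized model is a linear combination of a few shared basis models, so a new client only needs to fit the combination coefficients; all names below are assumptions.

import numpy as np

def combine(bases, coeffs):
    # Personalized parameters = sum_k coeffs[k] * bases[k].
    return sum(c * b for c, b in zip(coeffs, bases))

rng = np.random.default_rng(1)
bases = [rng.normal(size=8) for _ in range(3)]   # shared basis models
coeffs = np.array([0.5, 0.3, 0.2])               # per-client coefficients, the only part a new client learns
personalized = combine(bases, coeffs)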
arXiv Detail & Related papers (2023-04-16T20:19:18Z) - Personalizing or Not: Dynamically Personalized Federated Learning with
Incentives [37.42347737911428]
We propose personalized federated learning (FL) for learning personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
arXiv Detail & Related papers (2022-08-12T09:51:20Z) - No One Left Behind: Inclusive Federated Learning over Heterogeneous
Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle this problem.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z) - Parameterized Knowledge Transfer for Personalized Federated Learning [11.223753730705374]
We propose a novel training framework to employ personalized models for different clients.
It is demonstrated that the proposed framework is the first federated learning paradigm to realize personalized model training via parameterized knowledge transfer.
arXiv Detail & Related papers (2021-11-04T13:41:45Z) - QuPeD: Quantized Personalization via Distillation with Applications to
Federated Learning [8.420943739336067]
Federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server.
We introduce a quantized and personalized FL algorithm, QuPeD, that facilitates collective (personalized model compression) training.
Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
arXiv Detail & Related papers (2021-07-29T10:55:45Z) - Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z) - Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
Experiments show that FML achieves better performance than alternatives in typical federated learning settings.
arXiv Detail & Related papers (2020-06-27T09:35:03Z)