Personalized Federated Learning via Variational Bayesian Inference
- URL: http://arxiv.org/abs/2206.07977v1
- Date: Thu, 16 Jun 2022 07:37:02 GMT
- Title: Personalized Federated Learning via Variational Bayesian Inference
- Authors: Xu Zhang, Yinchuan Li, Wenpeng Li, Kaiyang Guo, Yunfeng Shao
- Abstract summary: Federated learning faces huge challenges from model overfitting due to the lack of data and statistical diversity among clients.
This paper proposes a novel personalized federated learning method via Bayesian variational inference named pFedBayes.
Experiments show that the proposed method outperforms other advanced personalization methods on personalized models.
- Score: 6.671486716769351
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning faces huge challenges from model overfitting due to the lack of data and statistical diversity among clients. To address these challenges, this paper proposes a novel personalized federated learning method via Bayesian variational inference, named pFedBayes. To alleviate overfitting, weight uncertainty is introduced into the neural networks of both clients and the server. To achieve personalization, each client updates its local distribution parameters by balancing its reconstruction error over private data against its KL divergence from the server's global distribution. Theoretical analysis gives an upper bound on the averaged generalization error and shows that its convergence rate is minimax optimal up to a logarithmic factor. Experiments show that the proposed method outperforms other advanced personalization methods on personalized models; e.g., pFedBayes outperforms other SOTA algorithms by 1.25%, 0.42% and 11.71% on MNIST, FMNIST and CIFAR-10, respectively, under non-i.i.d. limited data.
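To make the objective concrete, here is a minimal sketch of the per-client update described above: a likelihood (reconstruction) term on private data balanced against a KL term tying the local posterior to the server's global distribution. This is an illustration, not the authors' implementation; the GaussianLinear layer, the kl_diag_gaussians helper, the 784-to-10 architecture and the zeta weight are all assumptions for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianLinear(nn.Module):
    """Linear layer with a mean-field Gaussian over its weights (weight uncertainty)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -5.0))  # sigma = softplus(rho)

    def sigma(self):
        return F.softplus(self.rho)

    def forward(self, x):
        # Reparameterization trick: sample weights while keeping gradients w.r.t. mu, rho.
        w = self.mu + self.sigma() * torch.randn_like(self.mu)
        return x @ w.t()

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ) for diagonal Gaussians."""
    return (torch.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2) - 0.5).sum()

def local_objective(layer, x, y, global_mu, global_sigma, zeta=1.0):
    """Per-client loss: data-fit term on private data plus a KL penalty
    pulling the local posterior toward the server's global distribution."""
    nll = F.cross_entropy(layer(x), y)
    kl = kl_diag_gaussians(layer.mu, layer.sigma(), global_mu, global_sigma)
    return nll + zeta * kl / x.shape[0]

# Toy usage: one client gradient step against a fixed global distribution.
layer = GaussianLinear(784, 10)
global_mu, global_sigma = torch.zeros(10, 784), torch.ones(10, 784)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
local_objective(layer, x, y, global_mu, global_sigma).backward()
```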
Related papers
- FedSPU: Personalized Federated Learning for Resource-constrained Devices with Stochastic Parameter Update [0.27309692684728615]
Federated Dropout has emerged as a popular strategy for coping with resource-constrained devices.
We propose federated learning with stochastic parameter update (FedSPU).
Experimental results demonstrate that FedSPU outperforms Federated Dropout by 7.57% on average in terms of accuracy.
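The summary names the mechanism without spelling it out; one plausible reading of "stochastic parameter update" (an assumption here, not a claim about the paper's exact rule) is that each client keeps the full model but freezes a random subset of coordinates per step instead of dropping them, e.g.:

```python
import torch
import torch.nn as nn

def stochastic_update_step(model, loss_fn, batch, keep_ratio=0.5, lr=0.01):
    """One local step that updates a random subset of parameters and freezes
    the rest; unlike dropout, the full model is retained end to end."""
    x, y = batch
    model.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p in model.parameters():
            mask = (torch.rand_like(p) < keep_ratio).float()  # 1 = update, 0 = freeze
            p -= lr * p.grad * mask

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
batch = (torch.randn(8, 20), torch.randint(0, 2, (8,)))
stochastic_update_step(model, nn.CrossEntropyLoss(), batch)
```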
arXiv Detail & Related papers (2024-03-18T04:31:38Z)
- Federated Skewed Label Learning with Logits Fusion [23.062650578266837]
Federated learning (FL) aims to collaboratively train a shared model across multiple clients without transmitting their local data.
We propose FedBalance, which corrects the optimization bias among local models by calibrating their logits.
Our method can gain 13% higher average accuracy compared with state-of-the-art methods.
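As a hedged illustration of what "calibrating logits" can mean under skewed local labels, the sketch below shifts logits by log class priors, in the spirit of logit adjustment; FedBalance's actual calibration rule may differ, and calibrated_loss, tau and the counts are illustrative names and values.

```python
import torch
import torch.nn.functional as F

def calibrated_loss(logits, targets, class_counts, tau=1.0):
    """Cross-entropy on logits shifted by log class priors, a common way to
    counteract optimization bias from a skewed local label distribution."""
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)  # head classes lose their head start
    return F.cross_entropy(adjusted, targets)

logits = torch.randn(16, 10)
targets = torch.randint(0, 10, (16,))
class_counts = torch.tensor([500, 400, 300, 50, 30, 20, 10, 5, 3, 2])  # skewed client
loss = calibrated_loss(logits, targets, class_counts)
```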
arXiv Detail & Related papers (2023-11-14T14:37:33Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
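A minimal sketch of the underlying idea (not FedGMM's actual federated EM algorithm): each client fits a mixture to its own inputs, and a sample that is unlikely under every client's mixture is flagged as novel. The data and all names below are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two clients with differently centered input distributions.
client_data = [rng.normal(loc=c, scale=1.0, size=(200, 5)) for c in (0.0, 3.0)]
gmms = [GaussianMixture(n_components=2, random_state=0).fit(x) for x in client_data]

def novelty_score(x):
    """Negative best average log-likelihood across client mixtures; higher = more novel."""
    return -max(g.score_samples(x).mean() for g in gmms)

in_dist = rng.normal(loc=0.0, scale=1.0, size=(10, 5))
out_dist = rng.normal(loc=10.0, scale=1.0, size=(10, 5))
print(novelty_score(in_dist) < novelty_score(out_dist))  # expect True
```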
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Adaptive Federated Learning via New Entropy Approach [14.595709494370372]
Federated Learning (FL) has emerged as a prominent distributed machine learning framework.
In this paper, we propose an adaptive FEDerated learning algorithm based on ENTropy theory (FedEnt) to alleviate the parameter deviation among heterogeneous clients.
arXiv Detail & Related papers (2023-03-27T07:57:04Z)
- Federated Learning via Variational Bayesian Inference: Personalization, Sparsity and Clustering [6.829317124629158]
Federated learning (FL) is a promising framework for distributed machine learning.
However, FL suffers performance degradation under heterogeneous and limited data.
We present a novel personalized Bayesian FL approach named pFedBayes and a clustered FL model named cFedbayes.
arXiv Detail & Related papers (2023-03-08T02:52:40Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
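The shared-representation-plus-personal-head split is straightforward to picture in code. The sketch below is a generic rendering of that split, not the paper's algorithm; PersonalizedModel and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class PersonalizedModel(nn.Module):
    """Global common representation composed with a small client-specific head."""
    def __init__(self, shared: nn.Module, d_feat=32, n_classes=10):
        super().__init__()
        self.shared = shared                        # one instance shared by all clients
        self.head = nn.Linear(d_feat, n_classes)   # kept local and personalized

    def forward(self, x):
        return self.head(self.shared(x))

shared = nn.Sequential(nn.Linear(784, 32), nn.ReLU())    # aggregated at the server
clients = [PersonalizedModel(shared) for _ in range(3)]  # heads differ per client
out = clients[0](torch.randn(4, 784))
```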
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide a comprehensive theoretical analysis covering robustness, convergence, and generalization ability.
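Local mixup itself is a standard augmentation; the sketch below shows the client-side step (convex combinations of local samples and labels) that such a framework builds on. The alpha value and function name are illustrative.

```python
import torch
import torch.nn.functional as F

def local_mixup(x, y_onehot, alpha=0.2):
    """Mixup inside a client: blend random pairs of local samples and labels,
    which smooths inter-client noise before updates leave the client."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

x = torch.randn(8, 20)
y = F.one_hot(torch.randint(0, 2, (8,)), num_classes=2).float()
x_mix, y_mix = local_mixup(x, y)
```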
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Gradient Masked Averaging for Federated Learning [24.687254139644736]
Federated learning allows a large number of clients with heterogeneous data to coordinate learning of a unified global model.
Standard FL algorithms involve averaging of model parameters or gradient updates to approximate the global model at the server.
We propose a gradient masked averaging approach for FL as an alternative to the standard averaging of client updates.
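One plausible instantiation of such masking (an assumption; the paper's exact agreement criterion may differ) keeps only the coordinates where most clients agree with the sign of the mean update:

```python
import torch

def masked_average(client_updates, threshold=0.8):
    """Average client updates, zeroing coordinates where fewer than
    `threshold` of the clients agree with the sign of the mean."""
    stacked = torch.stack(client_updates)  # (num_clients, dim)
    avg = stacked.mean(dim=0)
    agree = (torch.sign(stacked) == torch.sign(avg)).float().mean(dim=0)
    return avg * (agree >= threshold).float()

updates = [torch.randn(10) for _ in range(5)]
global_update = masked_average(updates)
```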
arXiv Detail & Related papers (2022-01-28T08:42:43Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient model to estimate this metric.
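As a rough proxy for that notion, the sketch below scores a client by the leave-one-out change it induces in the aggregated update; the paper's Influence estimator is more refined, and client_influence is an illustrative name.

```python
import torch

def client_influence(client_updates, k):
    """Leave-one-out proxy: how far the averaged update moves when client k
    is excluded from aggregation."""
    full = torch.stack(client_updates).mean(dim=0)
    rest = torch.stack([u for i, u in enumerate(client_updates) if i != k]).mean(dim=0)
    return torch.norm(full - rest).item()

updates = [torch.randn(10) for _ in range(4)]
scores = [client_influence(updates, k) for k in range(4)]
```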
arXiv Detail & Related papers (2020-12-20T14:34:36Z)