Trustworthy Personalized Bayesian Federated Learning via Posterior
Fine-Tune
- URL: http://arxiv.org/abs/2402.16911v1
- Date: Sun, 25 Feb 2024 13:28:08 GMT
- Title: Trustworthy Personalized Bayesian Federated Learning via Posterior
Fine-Tune
- Authors: Mengen Luo, Chi Xu, Ercan Engin Kuruoglu
- Abstract summary: We introduce a novel framework for personalized federated learning, incorporating Bayesian methodology.
We show that the new algorithm not only improves accuracy but also outperforms the baseline significantly in OOD detection.
- Score: 3.1001287855313966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Performance degradation owing to data heterogeneity and low output
interpretability are the most significant challenges faced by federated
learning in practical applications. Personalized federated learning diverges
from traditional approaches, as it no longer seeks to train a single model, but
instead tailors a unique personalized model for each client. However, previous
work focused only on personalization from the perspective of neural network
parameters and lacked robustness and interpretability. In this work, we
establish a novel framework for personalized federated learning, incorporating
Bayesian methodology which enhances the algorithm's ability to quantify
uncertainty. Furthermore, we introduce a normalizing flow to achieve
personalization from the parameter posterior perspective and theoretically
analyze the impact of normalizing flow on out-of-distribution (OOD) detection
for Bayesian neural networks. Finally, we evaluate our approach on
heterogeneous datasets, and the experimental results indicate that the new
algorithm not only improves accuracy but also outperforms the baseline
significantly in OOD detection due to the reliable output of the Bayesian
approach.
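The posterior-personalization idea in the abstract can be sketched concretely: clients share a global Gaussian posterior over parameters, and each client personalizes it by pushing posterior samples through a small normalizing flow. A minimal sketch, assuming a planar flow and a diagonal Gaussian; the function names and flow choice are illustrative, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_global_posterior(mu, log_sigma, n):
    """Draw n parameter samples from the shared diagonal-Gaussian posterior."""
    eps = rng.standard_normal((n, mu.size))
    return mu + np.exp(log_sigma) * eps

def planar_flow(z, w, b, u):
    """Planar flow z' = z + u * tanh(w.z + b); returns transformed samples
    and log|det Jacobian|, needed to track the personalized density."""
    a = np.tanh(z @ w + b)                    # (n,)
    z_new = z + np.outer(a, u)                # (n, d)
    psi = (1 - a**2)[:, None] * w             # d tanh(w.z + b) / dz
    log_det = np.log(np.abs(1 + psi @ u))     # (n,)
    return z_new, log_det

d = 3
mu, log_sigma = np.zeros(d), np.full(d, -1.0)
w, b, u = rng.standard_normal(d), 0.1, 0.5 * rng.standard_normal(d)

z = sample_global_posterior(mu, log_sigma, n=1000)
z_pers, log_det = planar_flow(z, w, b, u)     # client-specific posterior samples
```

Because the flow changes the density in a tractable way (via the log-determinant), the personalized posterior still supports the calibrated uncertainty used for OOD detection.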
Related papers
- Refining 3D Point Cloud Normal Estimation via Sample Selection [13.207964615561261]
We introduce a fundamental framework for normal estimation, enhancing existing models through the incorporation of global information and various constraint mechanisms.
We also utilize existing orientation methods to correct estimated non-oriented normals, achieving state-of-the-art performance in both oriented and non-oriented tasks.
arXiv Detail & Related papers (2024-05-20T02:06:10Z) - Bayesian Neural Network For Personalized Federated Learning Parameter
Selection [2.130283000112442]
Federated learning's poor performance in the presence of heterogeneous data remains one of the most pressing issues in the field.
In this work, we take a step further by proposing personalization at the elemental level, rather than the traditional layer-level personalization.
We validate our algorithm's efficacy on several real-world datasets, demonstrating that our proposed approach outperforms existing baselines.
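Element-level personalization can be illustrated with a Bayesian posterior's per-weight variances: only the most uncertain individual weights are personalized locally, the rest stay global. The top-fraction threshold rule below is an assumption for illustration, not the paper's exact selection criterion:

```python
import numpy as np

def select_personalized(post_var, frac=0.2):
    """Boolean mask marking the top-`frac` highest-variance weights."""
    k = max(1, int(frac * post_var.size))
    cutoff = np.sort(post_var)[-k]
    return post_var >= cutoff

def merge(global_w, local_w, mask):
    """Element-wise mix: personalized weights where mask is True, global elsewhere."""
    return np.where(mask, local_w, global_w)

post_var = np.array([0.01, 0.50, 0.02, 0.40, 0.03])
mask = select_personalized(post_var, frac=0.4)   # picks the 2 most uncertain weights
merged = merge(np.zeros(5), np.ones(5), mask)
```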
arXiv Detail & Related papers (2024-02-25T13:37:53Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep
Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample; accordingly, we dynamically reweight the objective loss terms so that the network focuses more on the representation learning of uncertain classes.
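The evidential backbone of this idea can be sketched: the network outputs non-negative per-class evidence parameterizing a Dirichlet, whose total mass gives a per-sample uncertainty used to reweight the loss. The `(1 + uncertainty)` weighting below is an illustrative stand-in, not the exact Fisher-information weighting of $\mathcal{I}$-EDL:

```python
import numpy as np

def dirichlet_stats(evidence):
    """Dirichlet parameters alpha = evidence + 1; expected class probabilities
    and vacuity-style uncertainty K / sum(alpha)."""
    alpha = evidence + 1.0
    s = alpha.sum(axis=1, keepdims=True)
    prob = alpha / s
    uncertainty = evidence.shape[1] / s          # high when total evidence is low
    return prob, uncertainty.squeeze(1)

def reweighted_nll(prob, labels, uncertainty):
    """Per-sample cross-entropy, up-weighted for uncertain samples (assumed rule)."""
    nll = -np.log(prob[np.arange(len(labels)), labels])
    return np.mean((1.0 + uncertainty) * nll)

evidence = np.array([[9.0, 0.0, 0.0],   # confident sample
                     [0.2, 0.3, 0.1]])  # uncertain sample
prob, u = dirichlet_stats(evidence)
loss = reweighted_nll(prob, np.array([0, 1]), u)
```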
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - Personalized Federated Learning via Variational Bayesian Inference [6.671486716769351]
Federated learning faces huge challenges from model overfitting due to the lack of data and statistical diversity among clients.
This paper proposes a novel personalized federated learning method via Bayesian variational inference named pFedBayes.
Experiments show that the proposed method outperforms other advanced personalized methods on personalized models.
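The variational-personalization idea can be made concrete with the closed-form Gaussian KL: each client keeps its own Gaussian posterior but is pulled toward the shared global distribution through a KL term in its objective. A minimal sketch; the coefficient `zeta` and the toy objective are assumptions, not pFedBayes's exact formulation:

```python
import numpy as np

def kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p):
    """KL( N(mu_q, diag sig_q^2) || N(mu_p, diag sig_p^2) ), closed form."""
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q**2 + (mu_q - mu_p)**2) / (2 * sig_p**2) - 0.5)

def personalized_objective(nll, mu_q, sig_q, mu_p, sig_p, zeta=1.0):
    """Client loss: data fit plus a KL pull toward the global distribution."""
    return nll + zeta * kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p)

mu_p, sig_p = np.zeros(3), np.ones(3)                 # shared global distribution
mu_q, sig_q = np.array([0.5, -0.2, 0.0]), np.full(3, 0.8)
loss = personalized_objective(nll=1.2, mu_q=mu_q, sig_q=sig_q,
                              mu_p=mu_p, sig_p=sig_p)
```

The KL term vanishes when the client posterior matches the global distribution, so personalization is only "paid for" where the local data demands it.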
arXiv Detail & Related papers (2022-06-16T07:37:02Z) - Federated Learning with Uncertainty via Distilled Predictive
Distributions [14.828509220023387]
We present a framework for federated learning with uncertainty where, in each round, each client infers the posterior distribution over its parameters as well as the posterior predictive distribution (PPD).
Unlike some of the recent Bayesian approaches to federated learning, our approach does not require sending the whole posterior distribution of the parameters from each client to the server.
Our approach does not make any restrictive assumptions, such as the form of the clients' posterior distributions, or of their PPDs.
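The PPD itself can be sketched on a toy logistic model: the client averages predictions over posterior weight samples, yielding a single predictive distribution that could be distilled and sent instead of the whole posterior. The model and the stand-in "posterior draws" below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ppd(x, w_samples):
    """Monte Carlo PPD: mean of p(y=1 | x, w) over posterior weight samples."""
    return sigmoid(x @ w_samples.T).mean(axis=1)

w_samples = rng.normal(loc=1.0, scale=0.3, size=(500, 2))  # stand-in posterior draws
x = np.array([[1.0, 1.0], [-1.0, -1.0]])
p = ppd(x, w_samples)        # one scalar predictive probability per input
```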
arXiv Detail & Related papers (2022-06-15T14:24:59Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client
Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
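The local-mixup component can be sketched directly: each client blends random pairs of its own examples, `x' = lam*x_i + (1-lam)*x_j` (and likewise for labels), which smooths inter-client noise before local training. The `Beta(0.2, 0.2)` mixing distribution is a common but assumed choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_mixup(X, Y, alpha=0.2):
    """Return a mixed batch built from random pairs within the client's data."""
    lam = rng.beta(alpha, alpha, size=(len(X), 1))
    perm = rng.permutation(len(X))
    return lam * X + (1 - lam) * X[perm], lam * Y + (1 - lam) * Y[perm]

X = rng.standard_normal((8, 2))
Y = np.eye(2)[rng.integers(0, 2, size=8)]   # one-hot labels
Xm, Ym = local_mixup(X, Y)                  # mixed inputs and soft labels
```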
arXiv Detail & Related papers (2022-04-16T08:08:29Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
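The Nadaraya-Watson estimate itself is short enough to sketch: `p(y | x)` is a kernel-weighted average of one-hot training labels, and uncertainty can be read off that distribution (entropy here; the paper's exact uncertainty measures may differ):

```python
import numpy as np

def nw_label_dist(x, X, Y, bandwidth=0.5):
    """Gaussian-kernel-weighted one-hot label average at query point x."""
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * bandwidth**2))
    w /= w.sum()
    return w @ Y                     # (num_classes,) conditional label distribution

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps))

X = np.array([[0.0], [0.1], [2.0]])
Y = np.eye(2)[[0, 0, 1]]            # one-hot labels: two class-0, one class-1
p_near = nw_label_dist(np.array([0.05]), X, Y)   # inside the class-0 cluster
p_far = nw_label_dist(np.array([1.0]), X, Y)     # between the clusters
```

Queries far from any cluster mix the labels more evenly, so their entropy (uncertainty) is higher, without any change to the underlying deterministic classifier.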
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Last Layer Marginal Likelihood for Invariance Learning [12.00078928875924]
We introduce a new lower bound to the marginal likelihood, which allows us to perform inference for a larger class of likelihood functions.
We work towards bringing this approach to neural networks by using an architecture with a Gaussian process in the last layer.
arXiv Detail & Related papers (2021-06-14T15:40:51Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
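The ensemble-as-marginalization view reduces to a short computation: each ensemble member is treated as a posterior sample, and the predictive distribution is the average of member softmaxes. The three "members" below are stand-in logit vectors for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(member_logits):
    """Average member predictive distributions (Monte Carlo marginalization)."""
    return softmax(member_logits).mean(axis=0)

logits = np.array([[3.0, 0.0, 0.0],   # members disagree on the winning class
                   [0.0, 3.0, 0.0],
                   [3.0, 0.0, 0.0]])
p = ensemble_predict(logits)          # smoother than any single member's output
```

Averaging spreads probability mass across the classes the members disagree on, which is exactly the softening that approximate Bayesian marginalization provides.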
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.