Robust One Round Federated Learning with Predictive Space Bayesian
Inference
- URL: http://arxiv.org/abs/2206.09526v1
- Date: Mon, 20 Jun 2022 01:06:59 GMT
- Title: Robust One Round Federated Learning with Predictive Space Bayesian
Inference
- Authors: Mohsin Hasan, Zehao Zhang, Kaiyang Guo, Mahdi Karami, Guojun Zhang, Xi
Chen, Pascal Poupart
- Abstract summary: We show how the global predictive posterior can be approximated using client predictive posteriors.
We present an algorithm based on this idea, which performs MCMC sampling at each client to obtain an estimate of the local posterior, and then aggregates these in one round to obtain a global ensemble model.
- Score: 19.533268415744338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Making predictions robust is an important challenge. A separate challenge in
federated learning (FL) is to reduce the number of communication rounds, which
is difficult because fewer rounds tend to degrade performance, particularly in
heterogeneous data settings.
To tackle both issues, we take a Bayesian perspective on the problem of
learning a global model. We show how the global predictive posterior can be
approximated using client predictive posteriors. This is unlike other works
which aggregate the local model space posteriors into the global model space
posterior, and are susceptible to high approximation errors due to the
posterior's high dimensional multimodal nature. In contrast, our method
performs the aggregation on the predictive posteriors, which are typically
easier to approximate owing to the low-dimensionality of the output space. We
present an algorithm based on this idea, which performs MCMC sampling at each
client to obtain an estimate of the local posterior, and then aggregates these
in one round to obtain a global ensemble model. Through empirical evaluation on
several classification and regression tasks, we show that despite using one
round of communication, the method is competitive with other FL techniques, and
outperforms them on heterogeneous settings. The code is publicly available at
https://github.com/hasanmohsin/FedPredSpace_1Round.
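To make the aggregation concrete, here is a minimal sketch of the one-round pipeline the abstract describes, under stated assumptions: each client already holds MCMC samples of its local parameters, its predictive posterior is the average of predicted class probabilities over those samples, and the server combines the client predictives as a uniform mixture. The function names, the toy model, and the mixture weighting are illustrative choices, not the released implementation (see the repository linked above for that).
```python
import numpy as np

def client_predictive_posterior(param_samples, predict_fn, x):
    # Monte Carlo estimate of a client's predictive posterior p(y | x, D_k):
    # average predicted class probabilities over the client's posterior
    # samples (obtained, e.g., by local MCMC as in the paper).
    probs = np.stack([predict_fn(theta, x) for theta in param_samples])
    return probs.mean(axis=0)                  # (num_inputs, num_classes)

def global_predictive(client_predictives, weights=None):
    # One-round aggregation: approximate the global predictive posterior as
    # a weighted mixture of client predictives (the uniform weighting is an
    # illustrative choice, not necessarily the paper's rule).
    preds = np.stack(client_predictives)       # (num_clients, n, classes)
    if weights is None:
        weights = np.full(len(preds), 1.0 / len(preds))
    return np.tensordot(weights, preds, axes=1)

def toy_predict(theta, x):
    # Stand-in "model": softmax of a linear map; theta plays the role of
    # one posterior sample of the model parameters.
    logits = x @ theta
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                              # 5 inputs, 3 features
clients = [[rng.normal(size=(3, 4)) for _ in range(20)]  # 20 "MCMC" samples
           for _ in range(3)]                            # 3 clients, 4 classes
local_preds = [client_predictive_posterior(s, toy_predict, x) for s in clients]
print(global_predictive(local_preds))                    # global ensemble output
```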
Related papers
- One-Shot Federated Learning with Bayesian Pseudocoresets [19.53527340816458]
We show that distributed function-space inference is tightly related to learning Bayesian pseudocoresets.
We show that this approach achieves prediction performance competitive with the state of the art while reducing communication cost by up to two orders of magnitude.
arXiv Detail & Related papers (2024-06-04T10:14:39Z)
- Calibrated One Round Federated Learning with Bayesian Inference in the Predictive Space [27.259110269667826]
Federated Learning (FL) involves training a model over a dataset distributed among clients.
Small and noisy datasets are common, highlighting the need for well-calibrated models.
We propose $\beta$-Predictive Bayes, a Bayesian FL algorithm that interpolates between a mixture and a product of the predictive posteriors; a minimal sketch follows this entry.
arXiv Detail & Related papers (2023-12-15T14:17:16Z)
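A hedged sketch of the interpolation idea in the entry above: treat each client's predictive posterior as a categorical distribution, form both the arithmetic mixture and the normalized product, and blend them with a parameter beta. The geometric blending rule below is an illustrative guess, not the paper's exact definition.
```python
import numpy as np

def mixture(preds):
    # Arithmetic mixture of client predictive posteriors.
    return np.mean(preds, axis=0)

def product(preds, eps=1e-12):
    # Normalized elementwise product of client predictive posteriors.
    p = np.prod(preds, axis=0)
    return p / (p.sum(axis=-1, keepdims=True) + eps)

def beta_interpolate(preds, beta, eps=1e-12):
    # Blend between mixture (beta=0) and product (beta=1). NOTE: this
    # geometric rule is illustrative only; the paper defines its own.
    m, p = mixture(preds), product(preds)
    g = (m + eps) ** (1 - beta) * (p + eps) ** beta
    return g / g.sum(axis=-1, keepdims=True)

preds = np.array([[0.7, 0.2, 0.1],    # client 1 predictive for one input
                  [0.5, 0.3, 0.2],    # client 2
                  [0.6, 0.1, 0.3]])   # client 3
for b in (0.0, 0.5, 1.0):
    print(b, beta_interpolate(preds, b))
```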
- Is Aggregation the Only Choice? Federated Learning via Layer-wise Model Recombination [33.12164201146458]
We propose a novel FL paradigm named FedMR (Federated Model Recombination).
The goal of FedMR is to guide the recombined models towards a flat region of the loss landscape.
Compared with state-of-the-art FL methods, FedMR significantly improves inference accuracy without compromising the privacy of each client; a minimal recombination sketch follows this entry.
arXiv Detail & Related papers (2023-05-18T05:58:24Z)
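The recombination idea in the FedMR entry above can be sketched as shuffling same-index layers across client models before the next round of local training. Everything below (the per-layer random permutation, the scheduling) is an illustrative reading, not the paper's exact procedure.
```python
import random

def recombine(client_models, seed=0):
    # Layer-wise model recombination: for each layer index, randomly permute
    # which client receives which client's layer, producing "recombined"
    # models for further local training.
    rng = random.Random(seed)
    num_clients = len(client_models)
    num_layers = len(client_models[0])
    recombined = [[None] * num_layers for _ in range(num_clients)]
    for layer in range(num_layers):
        perm = list(range(num_clients))
        rng.shuffle(perm)                      # random assignment per layer
        for dst, src in enumerate(perm):
            recombined[dst][layer] = client_models[src][layer]
    return recombined

# toy usage: 3 clients, 2 "layers" (strings stand in for parameter tensors)
models = [[f"client{c}_layer{l}" for l in range(2)] for c in range(3)]
print(recombine(models))
```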
- Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach [66.9033666087719]
This paper extends the inference view and describes a variational inference formulation of federated learning based on expectation propagation (FedEP).
We apply FedEP on standard federated learning benchmarks and find that it outperforms strong baselines in terms of both convergence speed and accuracy.
arXiv Detail & Related papers (2023-02-08T17:58:11Z)
- Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without sharing data, for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that effectively mitigates dimensional collapse in federated learning; a sketch of the decorrelation idea follows this entry.
arXiv Detail & Related papers (2022-10-01T09:04:17Z)
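One way to picture mitigating dimensional collapse, per the entry above, is a regularizer that discourages representation dimensions from becoming correlated. The sketch below standardizes a batch of representations and penalizes the Frobenius norm of their correlation matrix; this is an illustrative variant, and FedDecorr's exact loss and weighting may differ.
```python
import numpy as np

def decorrelation_penalty(z, eps=1e-8):
    # z: (batch_size, dim) array of representations. Standardize each
    # dimension over the batch, then penalize the squared Frobenius norm
    # of the resulting correlation matrix (illustrative variant).
    z = (z - z.mean(axis=0)) / (z.std(axis=0) + eps)
    corr = (z.T @ z) / z.shape[0]               # (dim, dim) correlation matrix
    return (corr ** 2).sum() / z.shape[1] ** 2  # added to the local loss

rng = np.random.default_rng(0)
collapsed = rng.normal(size=(64, 1)) @ np.ones((1, 8))  # rank-1: collapsed
healthy = rng.normal(size=(64, 8))                      # full-rank features
print(decorrelation_penalty(collapsed), decorrelation_penalty(healthy))
```
On the toy data, the rank-one (collapsed) batch is penalized far more heavily than the full-rank one, which is the behavior such a regularizer is meant to enforce.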
- Content Popularity Prediction Based on Quantized Federated Bayesian Learning in Fog Radio Access Networks [76.16527095195893]
We investigate the content popularity prediction problem in cache-enabled fog radio access networks (F-RANs).
To predict the content popularity with high accuracy and low complexity, we propose a Gaussian process-based regressor to model the content request pattern.
We utilize Bayesian learning, which is robust to overfitting, to train the model parameters; a minimal GP regression sketch follows this entry.
arXiv Detail & Related papers (2022-06-23T03:05:12Z)
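A Gaussian process regressor of the kind mentioned in the entry above can be sketched in a few lines: an RBF kernel and the standard noisy-observation posterior at test points. The kernel, hyperparameters, and the toy "requests over time" data are illustrative assumptions; the paper's quantization and federated training are not shown.
```python
import numpy as np

def rbf(a, b, length=1.0, scale=1.0):
    # Squared-exponential kernel between two sets of 1-d inputs.
    d = a[:, None] - b[None, :]
    return scale * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    # Standard GP regression posterior (mean and pointwise variance),
    # a stand-in for the content-request regressor described above.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# toy usage: predict request intensity at new time points
t = np.array([0.0, 1.0, 2.0, 3.0])
requests = np.array([5.0, 7.0, 6.0, 9.0])
mean, var = gp_posterior(t, requests, np.array([1.5, 2.5]))
print(mean, var)
```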
- $\texttt{FedBC}$: Calibrating Global and Local Models via Federated Learning Beyond Consensus [66.62731854746856]
In federated learning (FL), the objective of collaboratively learning a global model through aggregation of model updates across devices tends to oppose the goal of personalization via local information.
In this work, we calibrate this tradeoff in a quantitative manner through a multi-criterion-based optimization.
We demonstrate that $\texttt{FedBC}$ balances global and local model test accuracy across a suite of datasets.
arXiv Detail & Related papers (2022-06-22T02:42:04Z)
- Correlation Clustering Reconstruction in Semi-Adversarial Models [70.11015369368272]
Correlation Clustering is an important clustering problem with many applications.
We study the reconstruction version of this problem in which one is seeking to reconstruct a latent clustering corrupted by random noise and adversarial modifications.
arXiv Detail & Related papers (2021-08-10T14:46:17Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method; a one-dimensional Laplace sketch follows this entry.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
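As a reminder of what the Laplace approximation in the entry above does: a posterior is replaced by a Gaussian centered at the MAP estimate, with precision equal to the curvature of the negative log posterior there. Below is a one-dimensional sketch using finite-difference curvature; the paper's online, multivariate, client-and-server updates are more elaborate.
```python
def laplace_approximation(neg_log_post, theta_map, h=1e-4):
    # Gaussian (Laplace) approximation of a posterior: mean at the MAP
    # estimate, precision = second derivative of the negative log posterior
    # there, estimated by a central finite difference.
    curvature = (neg_log_post(theta_map + h) - 2.0 * neg_log_post(theta_map)
                 + neg_log_post(theta_map - h)) / h ** 2
    return theta_map, 1.0 / curvature     # (mean, variance)

# toy usage: a quadratic negative log posterior, i.e. an exact Gaussian,
# so the approximation should recover variance 0.3 around the MAP at 1.0
mean, var = laplace_approximation(lambda t: 0.5 * (t - 1.0) ** 2 / 0.3, 1.0)
print(mean, var)
```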
- Federated Learning via Posterior Averaging: A New Perspective and Practical Algorithms [21.11885845002748]
We present an alternative perspective and formulate federated learning as a posterior inference problem.
The goal is to infer a global posterior distribution by having client devices each infer the posterior of their local data.
While exact inference is often intractable, this perspective provides a principled way to search for global optima in federated settings; a minimal Gaussian sketch follows this entry.
arXiv Detail & Related papers (2020-10-11T15:55:45Z)
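Under a Gaussian approximation, the posterior-inference view in the entry above has a closed form worth sketching: fit a diagonal Gaussian to each client's posterior samples, then multiply the local posteriors, so precisions add and means combine precision-weighted. This illustrates the perspective only, not the paper's practical algorithm.
```python
import numpy as np

def local_gaussian(samples):
    # Fit a diagonal Gaussian to a client's posterior samples
    # (e.g., produced by a local MCMC chain).
    return samples.mean(axis=0), samples.var(axis=0)

def posterior_average(fits):
    # Multiply the local Gaussian posteriors (flat-prior case): the global
    # precision is the sum of local precisions, and the global mean is the
    # precision-weighted average of local means.
    precs = [1.0 / v for _, v in fits]
    total = sum(precs)
    mean = sum(p * m for (m, _), p in zip(fits, precs)) / total
    return mean, 1.0 / total

rng = np.random.default_rng(1)
# toy "posterior samples" from 3 clients over a 2-d parameter
fits = [local_gaussian(rng.normal(loc=c, scale=0.5, size=(200, 2)))
        for c in (0.0, 1.0, 2.0)]
print(posterior_average(fits))  # global mean near the average of client means
```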
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.