Federated Gaussian Process: Convergence, Automatic Personalization and
Multi-fidelity Modeling
- URL: http://arxiv.org/abs/2111.14008v1
- Date: Sun, 28 Nov 2021 00:17:31 GMT
- Title: Federated Gaussian Process: Convergence, Automatic Personalization and
Multi-fidelity Modeling
- Authors: Xubo Yue, Raed Al Kontar
- Abstract summary: We show that \texttt{FGPR} excels in a wide range of applications and is a promising approach for privacy-preserving multi-fidelity data modeling.
- Score: 4.18804572788063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose \texttt{FGPR}: a Federated Gaussian process
($\mathcal{GP}$) regression framework that uses an averaging strategy for model
aggregation and stochastic gradient descent for local client computations.
Notably, the resulting global model excels in personalization as \texttt{FGPR}
jointly learns a global $\mathcal{GP}$ prior across all clients. The predictive
posterior then is obtained by exploiting this prior and conditioning on local
data which encodes personalized features from a specific client. Theoretically,
we show that \texttt{FGPR} converges to a critical point of the full
log-likelihood function, subject to statistical error. Through extensive case
studies we show that \texttt{FGPR} excels in a wide range of applications and
is a promising approach for privacy-preserving multi-fidelity data modeling.
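To make the training loop concrete, the following is a minimal sketch of the FGPR recipe described above, not the authors' implementation: each client takes a few gradient steps on its local GP negative log marginal likelihood (full-batch here, where the paper uses stochastic gradients), the server averages the kernel hyperparameters in FedAvg style, and personalization comes from conditioning the shared prior on each client's own data. All names (`rbf_kernel`, `local_sgd`, `fgpr_round`, `predict`) and the toy data are illustrative assumptions.

```python
import torch

def rbf_kernel(xa, xb, log_ls, log_amp):
    # Squared-exponential kernel; hyperparameters live on the log scale.
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return torch.exp(log_amp) * torch.exp(-0.5 * d2 * torch.exp(-2.0 * log_ls))

def neg_log_marginal_likelihood(theta, x, y):
    # Standard GP marginal likelihood (up to an additive constant).
    log_ls, log_amp, log_noise = theta
    K = rbf_kernel(x, x, log_ls, log_amp) + torch.exp(log_noise) * torch.eye(len(x))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y[:, None], L)[:, 0]
    return 0.5 * y @ alpha + torch.log(torch.diagonal(L)).sum()

def local_sgd(theta_global, x, y, steps=20, lr=0.01):
    # One client's computation: a few gradient steps from the current global state.
    theta = theta_global.clone().requires_grad_(True)
    opt = torch.optim.SGD([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        neg_log_marginal_likelihood(theta, x, y).backward()
        opt.step()
    return theta.detach()

def fgpr_round(theta_global, client_data):
    # Server step: average the clients' hyperparameter updates.
    updates = [local_sgd(theta_global, x, y) for x, y in client_data]
    return torch.stack(updates).mean(dim=0)

def predict(theta, x, y, x_star):
    # Personalization: condition the shared prior on this client's own data.
    log_ls, log_amp, log_noise = theta
    K = rbf_kernel(x, x, log_ls, log_amp) + torch.exp(log_noise) * torch.eye(len(x))
    L = torch.linalg.cholesky(K)
    return rbf_kernel(x_star, x, log_ls, log_amp) @ torch.cholesky_solve(y[:, None], L)[:, 0]

# Toy run: two clients with different local functions share one GP prior.
torch.manual_seed(0)
grid = torch.linspace(0.0, 1.0, 30)
clients = [(grid, torch.sin(6.0 * grid)), (grid, torch.cos(6.0 * grid))]
theta = torch.zeros(3)  # log lengthscale, log amplitude, log noise
for _ in range(10):
    theta = fgpr_round(theta, clients)
print("learned global hyperparameters:", theta)
print("client-0 personalized prediction at 0.5:", predict(theta, *clients[0], torch.tensor([0.5])))
```

Averaging raw hyperparameters is the simplest instance of the aggregation strategy named in the abstract; weighting the average by client sample count is a natural variant.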
Related papers
- FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Federated Variational Inference: Towards Improved Personalization and Generalization [2.37589914835055]
We study personalization and generalization in stateless cross-device federated learning setups.
We first propose a hierarchical generative model and formalize it using Bayesian Inference.
We then approximate this process using Variational Inference to train our model efficiently.
We evaluate our model on FEMNIST and CIFAR-100 image classification and show that FedVI beats the state-of-the-art on both tasks.
arXiv Detail & Related papers (2023-05-23T04:28:07Z)
- FedHB: Hierarchical Bayesian Federated Learning [11.936836827864095]
We propose a novel hierarchical Bayesian approach to Federated Learning (FL).
Our model reasonably describes the generative process of clients' local data via hierarchical Bayesian modeling.
We show that our block-coordinate FL algorithm converges to an optimum of the objective at the rate of $O(1/\sqrt{t})$.
arXiv Detail & Related papers (2023-05-08T18:21:41Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Robust One Round Federated Learning with Predictive Space Bayesian Inference [19.533268415744338]
We show how the global predictive posterior can be approximated using client predictive posteriors.
We present an algorithm based on this idea, which performs MCMC sampling at each client to obtain an estimate of the local posterior, and then aggregates these in one round to obtain a global ensemble model.
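As a toy illustration of this one-round recipe, under heavy assumptions (Bayesian linear regression as the model, random-walk Metropolis as the client-side MCMC sampler), the sketch below pools client posterior samples into a single predictive ensemble with a single round of communication. `metropolis`, `one_round_ensemble`, and `make_client` are hypothetical names, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(w, X, y, noise_var=0.1, prior_var=1.0):
    # Log posterior of linear-regression weights with a Gaussian prior.
    resid = y - X @ w
    return -0.5 * resid @ resid / noise_var - 0.5 * w @ w / prior_var

def metropolis(X, y, steps=2000, step=0.05):
    # Client-side MCMC: random-walk Metropolis over the weight vector.
    w, samples = np.zeros(X.shape[1]), []
    for _ in range(steps):
        prop = w + step * rng.standard_normal(w.shape)
        if np.log(rng.uniform()) < log_post(prop, X, y) - log_post(w, X, y):
            w = prop
        samples.append(w)
    return np.array(samples[steps // 2:])  # drop burn-in

def one_round_ensemble(client_data, X_star):
    # Single communication round: pool client samples, average predictions.
    preds = [metropolis(X, y) @ X_star.T for X, y in client_data]
    return np.concatenate(preds, axis=0).mean(axis=0)

def make_client(w_true):
    X = rng.standard_normal((40, 2))
    return X, X @ w_true + 0.1 * rng.standard_normal(40)

clients = [make_client(np.array([1.0, -0.5])), make_client(np.array([0.8, -0.6]))]
print(one_round_ensemble(clients, np.eye(2)))  # global predictive means at two test inputs
```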
arXiv Detail & Related papers (2022-06-20T01:06:59Z)
- Global Convergence of Federated Learning for Mixed Regression [17.8469597916875]
This paper studies the problem of model training under Federated Learning when clients exhibit cluster structure.
A key innovation in our analysis is a uniform estimate on the clustering errors, which we prove by bounding the VC dimension of general concept classes.
arXiv Detail & Related papers (2022-06-15T03:38:42Z)
- Federated Bayesian Neural Regression: A Scalable Global Federated Gaussian Process [21.872163101238705]
Federated Bayesian Neural Regression (FedBNR) is an algorithm that learns a scalable stand-alone global GP that respects clients' privacy.
We derive a principled approach to learning a global predictive model as if all client data were centralized.
Experiments are conducted on real-world regression datasets and show statistically significant improvements compared to other federated GP models.
arXiv Detail & Related papers (2022-06-13T17:52:58Z)
- FedAvg with Fine Tuning: Local Updates Lead to Representation Learning [54.65133770989836]
The Federated Averaging (FedAvg) algorithm alternates between a few local gradient updates at client nodes and a model averaging update at the server.
We show that the generalizability of FedAvg's output stems from its power in learning the common data representation among the clients' tasks.
We also provide empirical evidence demonstrating FedAvg's representation learning ability in federated image classification with heterogeneous data.
arXiv Detail & Related papers (2022-05-27T00:55:24Z)
- DP-NormFedAvg: Normalizing Client Updates for Privacy-Preserving Federated Learning [48.064786028195506]
We propose to have the clients send a finely quantized version of only the unit vector along the model update, discarding the magnitude information.
We also introduce QTDL, a new differentially private quantization mechanism for unit-norm vectors.
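A minimal sketch of this normalize-then-quantize idea, with loud caveats: it omits the differential-privacy noise entirely and uses a plain uniform grid where the paper uses QTDL, so it only shows the shape of the client-to-server message.

```python
import numpy as np

def quantized_unit_update(update, levels=16):
    # Keep only the direction of the client update; magnitude is discarded.
    unit = update / (np.linalg.norm(update) + 1e-12)
    # A coarse uniform grid stands in for a real mechanism such as QTDL.
    q = np.round(unit * levels) / levels
    return q / (np.linalg.norm(q) + 1e-12)  # re-project onto the unit sphere

# Server side: average the received directions and take a fixed-size step.
updates = [np.array([0.3, -1.2, 0.8]), np.array([0.1, -0.9, 1.1])]
direction = np.mean([quantized_unit_update(u) for u in updates], axis=0)
new_global = np.zeros(3) - 0.1 * direction  # server step size 0.1 (illustrative)
print(new_global)
```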
arXiv Detail & Related papers (2021-06-13T21:23:46Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
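For the server-side aggregation such a framework relies on, here is the generic product-of-Gaussians rule as a worked sketch (not this paper's full online procedure): if client $k$ reports a Laplace approximation $\mathcal{N}(\mu_k, \Sigma_k)$, the aggregate posterior has precision equal to the sum of client precisions and a precision-weighted mean.

```python
import numpy as np

def aggregate_gaussians(mus, covs):
    # Product of Gaussian posteriors: sum precisions, precision-weight the means.
    precisions = [np.linalg.inv(c) for c in covs]
    prec_global = sum(precisions)
    cov_global = np.linalg.inv(prec_global)
    mu_global = cov_global @ sum(p @ m for p, m in zip(precisions, mus))
    return mu_global, cov_global

# Two clients with different Laplace approximations of the same weights.
mus = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
covs = [0.5 * np.eye(2), 2.0 * np.eye(2)]
mu, cov = aggregate_gaussians(mus, covs)
print(mu, cov, sep="\n")  # the tighter client (smaller covariance) dominates
```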
arXiv Detail & Related papers (2021-02-03T08:36:58Z)