Federated Bayesian Neural Regression: A Scalable Global Federated
Gaussian Process
- URL: http://arxiv.org/abs/2206.06357v1
- Date: Mon, 13 Jun 2022 17:52:58 GMT
- Title: Federated Bayesian Neural Regression: A Scalable Global Federated
Gaussian Process
- Authors: Haolin Yu, Kaiyang Guo, Mahdi Karami, Xi Chen, Guojun Zhang, Pascal
Poupart
- Abstract summary: Federated Bayesian Neural Regression (FedBNR) is an algorithm that learns a scalable stand-alone global GP that respects clients' privacy.
We derive a principled approach to learning a global predictive model as if all client data were centralized.
Experiments are conducted on real-world regression datasets and show statistically significant improvements compared to other federated GP models.
- Score: 21.872163101238705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In typical scenarios where the Federated Learning (FL) framework applies,
clients often have too little training data to produce an accurate model on
their own. Models that provide not only point estimates but also some notion
of confidence are therefore beneficial. The Gaussian Process (GP) is a powerful
Bayesian model that comes with naturally well-calibrated variance estimates.
However, it is challenging to learn a stand-alone global GP since merging local
kernels leads to privacy leakage. To preserve privacy, previous works that
consider federated GPs avoid learning a global model by focusing on the
personalized setting or learning an ensemble of local models. We present
Federated Bayesian Neural Regression (FedBNR), an algorithm that learns a
scalable stand-alone global federated GP that respects clients' privacy. We
incorporate deep kernel learning and random features for scalability by
defining a unifying random kernel. We show this random kernel can recover any
stationary kernel and many non-stationary kernels. We then derive a principled
approach to learning a global predictive model as if all client data were
centralized. We also learn global kernels with knowledge distillation methods
for non-identically and independently distributed (non-i.i.d.) clients.
Experiments are conducted on real-world regression datasets and show
statistically significant improvements compared to other federated GP models.
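To make the abstract's two key ingredients concrete, here is a minimal NumPy sketch of (i) random Fourier features, which approximate a stationary (RBF) kernel with an explicit finite feature map, and (ii) federated aggregation of the resulting sufficient statistics, which lets the server recover exactly the Bayesian linear-regression posterior it would obtain on the pooled data. The hyperparameters, toy client data, and the choice of an RBF base kernel are illustrative assumptions; FedBNR's actual unifying random kernel, its deep kernel learning component, and its privacy analysis go beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, W, b):
    # phi(x)^T phi(x') approximates the RBF kernel
    # k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

d, D = 5, 256                              # input dim, number of random features
lengthscale, noise, alpha = 1.0, 0.1, 1.0  # illustrative hyperparameters
W = rng.normal(0.0, 1.0 / lengthscale, size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

# Each client computes additive sufficient statistics on its own data.
A_sum, r_sum = np.zeros((D, D)), np.zeros(D)
for _ in range(3):                 # three toy clients
    Xk = rng.normal(size=(50, d))  # private inputs (never shared)
    yk = np.sin(Xk[:, 0]) + noise * rng.normal(size=50)  # private targets
    Phi = rff_map(Xk, W, b)
    A_sum += Phi.T @ Phi           # Phi_k^T Phi_k, summed across clients
    r_sum += Phi.T @ yk            # Phi_k^T y_k,  summed across clients

# Server: exact Bayesian linear regression posterior on the features,
# identical to the posterior computed on the pooled (centralized) data.
S = np.linalg.inv(alpha * np.eye(D) + A_sum / noise**2)  # posterior covariance
m = S @ r_sum / noise**2                                 # posterior mean

def predict(Xs):
    Ps = rff_map(Xs, W, b)
    mean = Ps @ m
    var = np.einsum("nd,de,ne->n", Ps, S, Ps) + noise**2  # phi^T S phi + sigma^2
    return mean, var

mean, var = predict(rng.normal(size=(4, d)))
```

Because Phi_k^T Phi_k and Phi_k^T y_k are additive across clients, the server's posterior is identical to the centralized one, which is the sense in which such a model can be learned "as if all client data were centralized"; clients share D x D statistics rather than raw points, though what privacy guarantee that affords is a separate question the paper addresses.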
Related papers
- FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Federated Learning with Neural Graphical Models [2.2721854258621064]
Federated Learning (FL) addresses the need to create models based on proprietary data.
We develop an FL framework that maintains a global NGM model learning the averaged information from the local NGM models.
We experimentally demonstrate the use of FedNGMs for extracting insights from CDC's Infant Mortality dataset.
arXiv Detail & Related papers (2023-09-20T23:24:22Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a class prototype similarity distillation method that aligns the local and global models within a federated framework.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Confidence-aware Personalized Federated Learning via Variational Expectation Maximization [34.354154518009956]
Personalized Federated Learning (PFL) is a distributed learning scheme to train a shared model across clients.
We present a novel framework for PFL based on hierarchical modeling and variational inference.
arXiv Detail & Related papers (2023-05-21T20:12:27Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG); a minimal sketch of the distillation loss follows this entry.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
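The fine-tuning step named in the previous entry builds on standard knowledge distillation. Below is a minimal sketch of the temperature-scaled distillation loss; in FedFTG the teacher logits would come from client models evaluated on generator-synthesized inputs (the data-free part), neither of which is shown, and the function names and temperature are illustrative assumptions.

```python
import numpy as np

def softened_probs(logits, T):
    """Temperature-scaled softmax; larger T gives softer distributions."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Mean KL(teacher || student) on softened distributions, the standard
    knowledge-distillation objective; FedFTG would obtain the teacher
    logits from client models on synthetic inputs (not shown here)."""
    p_t = softened_probs(teacher_logits, T)
    p_s = softened_probs(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return float(T * T * kl.mean())

# Toy usage: 4 samples, 3 classes.
rng = np.random.default_rng(0)
loss = distillation_loss(rng.normal(size=(4, 3)), rng.normal(size=(4, 3)))
```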
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Gradient Masked Averaging for Federated Learning [24.687254139644736]
Federated learning allows a large number of clients with heterogeneous data to coordinate learning of a unified global model.
Standard FL algorithms involve averaging of model parameters or gradient updates to approximate the global model at the server.
We propose a gradient masked averaging approach for FL as an alternative to the standard averaging of client updates; see the sketch after this entry.
arXiv Detail & Related papers (2022-01-28T08:42:43Z)
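As a minimal illustration of the masked-averaging idea from the previous entry, the sketch below zeroes out coordinates on which the clients' update signs disagree before averaging; the binary mask and the agreement threshold are assumptions for illustration, and the paper's exact agreement criterion may differ.

```python
import numpy as np

def masked_average(updates, tau=0.4):
    """Average client updates, keeping only coordinates whose update signs
    agree across clients (a sketch; the paper's exact rule may differ)."""
    U = np.stack(updates)                        # (num_clients, num_params)
    agreement = np.abs(np.sign(U).mean(axis=0))  # 1.0 = full sign agreement
    mask = (agreement >= tau).astype(U.dtype)    # drop contested coordinates
    return mask * U.mean(axis=0)

# Plain FedAvg-style averaging, for comparison, would be U.mean(axis=0).
updates = [np.array([0.5, -0.2, 0.10]),
           np.array([0.4,  0.3, 0.20]),
           np.array([0.6, -0.1, 0.15])]
print(masked_average(updates))   # second coordinate is masked out
```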
- Federated Gaussian Process: Convergence, Automatic Personalization and Multi-fidelity Modeling [4.18804572788063]
We show that FGPR excels in a wide range of applications and is a promising approach for privacy-preserving multi-fidelity data modeling.
arXiv Detail & Related papers (2021-11-28T00:17:31Z)
- Personalized Federated Learning with Gaussian Processes [24.102107455189454]
Federated learning aims to learn a global model that performs well on client devices with limited cross-client communication.
We present pFedGP, a solution to PFL that is based on Gaussian processes (GPs) with deep kernel learning; a minimal sketch of a deep kernel follows this list.
pFedGP achieves well-calibrated predictions while significantly outperforming baseline methods, with accuracy gains of up to 21%.
arXiv Detail & Related papers (2021-06-29T15:09:13Z)
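Deep kernel learning, used by pFedGP in the last entry and by FedBNR itself, composes a base kernel with a learned feature extractor, k(x, x') = k_base(g(x), g(x')). The sketch below uses a fixed toy MLP for g and an RBF base kernel; in practice the weights of g are trained jointly with the kernel hyperparameters (for example by maximizing the GP marginal likelihood), which is omitted here, and all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(X, params):
    """Toy feature extractor g(x); in deep kernel learning its weights are
    trained jointly with the kernel hyperparameters (training omitted)."""
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

def deep_kernel(X1, X2, params, lengthscale=1.0):
    """Deep kernel: an RBF kernel applied to learned features,
    k(x, x') = exp(-||g(x) - g(x')||^2 / (2 * lengthscale^2))."""
    G1, G2 = mlp(X1, params), mlp(X2, params)
    sq = ((G1[:, None, :] - G2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * lengthscale**2))

d, h, out = 5, 32, 8                      # illustrative layer sizes
params = (0.1 * rng.normal(size=(d, h)), np.zeros(h),
          0.1 * rng.normal(size=(h, out)), np.zeros(out))
X = rng.normal(size=(10, d))
K = deep_kernel(X, X, params)             # 10 x 10 kernel matrix
```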
This list is automatically generated from the titles and abstracts of the papers on this site.