A Bayesian Federated Learning Framework with Online Laplace
Approximation
- URL: http://arxiv.org/abs/2102.01936v3
- Date: Sat, 2 Dec 2023 07:13:00 GMT
- Title: A Bayesian Federated Learning Framework with Online Laplace
Approximation
- Authors: Liangxi Liu, Xi Jiang, Feng Zheng, Hong Chen, Guo-Jun Qi, Heng Huang
and Ling Shao
- Abstract summary: Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
- Score: 144.7345013348257
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning (FL) allows multiple clients to collaboratively learn a
globally shared model through cycles of model aggregation and local model
training, without the need to share data. Most existing FL methods train local
models separately on different clients, and then simply average their
parameters to obtain a centralized model on the server side. However, these
approaches generally suffer from large aggregation errors and severe local
forgetting, which are particularly bad in heterogeneous data settings. To
tackle these issues, in this paper, we propose a novel FL framework that uses
online Laplace approximation to approximate posteriors on both the client and
server side. On the server side, a multivariate Gaussian product mechanism is
employed to construct and maximize a global posterior, largely reducing the
aggregation errors induced by large discrepancies between local models. On the
client side, a prior loss that uses the global posterior probabilistic
parameters delivered from the server is designed to guide the local training.
Binding such learning constraints from other clients enables our method to
mitigate local forgetting. Finally, we achieve state-of-the-art results on
several benchmarks, clearly demonstrating the advantages of the proposed
method.
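
To make the two mechanisms described in the abstract concrete, below is a minimal sketch (not the authors' implementation), assuming each local posterior is approximated by a diagonal Gaussian whose precision is estimated online from accumulated squared gradients (an empirical-Fisher Laplace approximation). The function names (local_diag_precision, aggregate_gaussians, prior_loss) and the exact form of the precision estimate are illustrative assumptions, not identifiers from the paper.

```python
import torch


def local_diag_precision(model, loss_fn, data_loader, prior_precision=1e-3):
    """Diagonal Laplace approximation of a client's posterior precision.

    Each parameter's precision is estimated with accumulated squared
    gradients (an empirical-Fisher approximation of the Hessian diagonal)
    plus a small prior precision.
    """
    precisions = [torch.full_like(p, prior_precision) for p in model.parameters()]
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for prec, p in zip(precisions, model.parameters()):
            if p.grad is not None:
                prec.add_(p.grad.detach() ** 2)
    return precisions


def aggregate_gaussians(local_means, local_precisions):
    """Server side: product of diagonal Gaussian local posteriors.

    The product of Gaussians is again Gaussian: the global precision is the
    sum of the local precisions, and the global mean is the precision-weighted
    average of the local means, which maximizes the approximate global posterior.
    """
    global_mean, global_prec = [], []
    for mus, precs in zip(zip(*local_means), zip(*local_precisions)):
        gp = sum(precs)
        global_prec.append(gp)
        global_mean.append(sum(pr * mu for pr, mu in zip(precs, mus)) / gp)
    return global_mean, global_prec


def prior_loss(model, global_mean, global_prec):
    """Client side: prior term built from the delivered global posterior.

    Penalizes deviation of the local parameters from the global mean, weighted
    by the global precision; adding this term to the local objective constrains
    local training and mitigates local forgetting.
    """
    return 0.5 * sum(
        (gp * (p - gm) ** 2).sum()
        for p, gm, gp in zip(model.parameters(), global_mean, global_prec)
    )
```

In a training round under these assumptions, each client would send its (mean, precision) pair after local training, the server would call aggregate_gaussians and broadcast the result, and each client would add a scaled prior_loss term to its local objective in the next round.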
Related papers
- Regularizing and Aggregating Clients with Class Distribution for Personalized Federated Learning [0.8287206589886879]
Class-wise Federated Averaging (cwFedAVG) performs federated averaging class-wise, creating multiple global models, one per class, on the server.
Each local model integrates these global models weighted by its estimated local class distribution, derived from the L2-norms of deep network weights.
We also design a Weight Distribution Regularizer (WDR) to further improve the accuracy of the estimated local class distribution.
arXiv Detail & Related papers (2024-06-12T01:32:24Z)
- Federated Skewed Label Learning with Logits Fusion [23.062650578266837]
Federated learning (FL) aims to collaboratively train a shared model across multiple clients without transmitting their local data.
We propose FedBalance, which corrects the optimization bias among local models by calibrating their logits.
Our method achieves 13% higher average accuracy than state-of-the-art methods.
arXiv Detail & Related papers (2023-11-14T14:37:33Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to learn collaboratively in a distributed way while protecting privacy.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a new class prototype similarity distillation algorithm that aligns the local and global models within a federated framework.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present FedIns, a novel FL algorithm that handles intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., achieving a 6.64% improvement over the top-performing method with less than 15% of the communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Federated Learning for Semantic Parsing: Task Formulation, Evaluation Setup, New Algorithms [29.636944156801327]
Multiple clients collaboratively train one global model without sharing their semantic parsing data.
Lorar adjusts each client's contribution to the global model update based on its training loss reduction during each round.
Clients with smaller datasets enjoy larger performance gains.
arXiv Detail & Related papers (2023-05-26T19:25:49Z)
- Is Aggregation the Only Choice? Federated Learning via Layer-wise Model Recombination [33.12164201146458]
We propose a novel FL paradigm named FedMR (Federated Model Recombination).
The goal of FedMR is to guide the recombined models towards a flat area of the loss landscape during training.
Compared with state-of-the-art FL methods, FedMR can significantly improve the inference accuracy without exposing privacy of each client.
arXiv Detail & Related papers (2023-05-18T05:58:24Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model tailored to its specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)