FedPop: A Bayesian Approach for Personalised Federated Learning
- URL: http://arxiv.org/abs/2206.03611v1
- Date: Tue, 7 Jun 2022 22:52:59 GMT
- Title: FedPop: A Bayesian Approach for Personalised Federated Learning
- Authors: Nikita Kotelevskii and Maxime Vono and Eric Moulines and Alain Durmus
- Abstract summary: Personalised federated learning aims at collaboratively learning a machine learning model tailored for each client.
We propose a novel methodology coined FedPop by recasting personalised FL into the population modeling paradigm.
Compared to existing personalised FL methods, the proposed methodology has important benefits: it is robust to client drift, practical for inference on new clients, and above all, enables uncertainty quantification under mild computational and memory overheads.
- Score: 25.67466138369391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalised federated learning (FL) aims at collaboratively learning a
machine learning model tailored for each client. Although promising advances have
been made in this direction, most existing approaches do not allow for
uncertainty quantification, which is crucial in many applications. In addition,
personalisation in the cross-device setting still raises important issues,
especially for new clients or those with a small number of observations. This
paper aims at filling these gaps. To this end, we propose a novel methodology
coined FedPop by recasting personalised FL into the population modeling
paradigm where clients' models involve fixed common population parameters and
random effects, aiming at explaining data heterogeneity. To derive convergence
guarantees for our scheme, we introduce a new class of federated stochastic
optimisation algorithms which rely on Markov chain Monte Carlo methods.
Compared to existing personalised FL methods, the proposed methodology has
important benefits: it is robust to client drift, practical for inference on
new clients, and above all, enables uncertainty quantification under mild
computational and memory overheads. We provide non-asymptotic convergence
guarantees for the proposed algorithms and illustrate their performances on
various personalised federated learning tasks.
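As a rough illustration of the population-modelling idea above (shared population parameters plus per-client Gaussian random effects, with the local effects sampled by MCMC), the following sketch uses a toy linear model and stochastic gradient Langevin dynamics. Variable names, step sizes, and the synthetic data are illustrative assumptions; this is not the authors' algorithm.

```python
# Minimal sketch (not the FedPop implementation): each client's parameters are
# beta + b_i, where beta is shared across clients and b_i is a client-specific
# random effect with a Gaussian prior. Clients sample b_i with a few SGLD steps
# (a simple MCMC method) and send a gradient signal for beta to the server.
import numpy as np

rng = np.random.default_rng(0)

def local_mcmc_step(X, y, beta, b, step=1e-4, prior_var=1.0):
    """One SGLD step on a client's random effect b (Gaussian likelihood and prior)."""
    theta = beta + b
    grad_loglik = X.T @ (y - X @ theta)            # d/db log p(y | X, beta + b)
    grad_logprior = -b / prior_var                 # d/db log N(b; 0, prior_var * I)
    noise = rng.normal(size=b.shape)
    return b + 0.5 * step * (grad_loglik + grad_logprior) + np.sqrt(step) * noise

def client_update(X, y, beta, b, n_mcmc=10):
    """Run a short MCMC chain for the random effect, then return a gradient for beta."""
    for _ in range(n_mcmc):
        b = local_mcmc_step(X, y, beta, b)
    grad_beta = X.T @ (y - X @ (beta + b))         # evaluated at the sampled effect
    return b, grad_beta

# Toy federated loop over synthetic, heterogeneous clients.
d, n_clients, n_rounds = 5, 8, 200
true_beta = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(50, d))
    b_true = 0.3 * rng.normal(size=d)              # client-specific shift (heterogeneity)
    clients.append((X, X @ (true_beta + b_true) + 0.1 * rng.normal(size=50)))

beta = np.zeros(d)
effects = [np.zeros(d) for _ in range(n_clients)]
lr = 1e-3
for _ in range(n_rounds):
    grads = []
    for i, (X, y) in enumerate(clients):
        effects[i], g = client_update(X, y, beta, effects[i])
        grads.append(g)
    beta += lr * np.mean(grads, axis=0)            # server step on the shared parameters

print("estimated population parameters:", np.round(beta, 2))
```

The posterior samples of each b_i are what provide per-client uncertainty estimates, and a new client can be handled by sampling its random effect around the learned population parameters.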
Related papers
- Submodular Maximization Approaches for Equitable Client Selection in Federated Learning [4.167345675621377]
In a conventional Federated Learning framework, client selection for training typically involves randomly sampling a subset of clients in each iteration.
This paper introduces two novel methods, namely SUBTRUNC and UNIONFL, designed to address the limitations of random client selection.
arXiv Detail & Related papers (2024-08-24T22:40:31Z)
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- QMGeo: Differentially Private Federated Learning via Stochastic Quantization with Mixed Truncated Geometric Distribution [1.565361244756411]
Federated learning (FL) is a framework which allows multiple users to jointly train a global machine learning (ML) model.
One key motivation of such distributed frameworks is to provide privacy guarantees to the users.
We present a novel quantization method, utilizing a mixed geometric distribution to introduce the randomness needed to provide DP.
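For intuition only, the sketch below shows a generic unbiased stochastic (randomized-rounding) quantizer of the kind used to compress client updates; QMGeo's mixed truncated geometric mechanism and its differential-privacy analysis are not reproduced here, and all names and parameters are illustrative.

```python
# Generic stochastic quantizer sketch; NOT QMGeo's DP mechanism.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(v, levels=16):
    """Map each coordinate of v onto a uniform grid with unbiased randomized rounding."""
    lo, hi = float(v.min()), float(v.max())
    scale = (hi - lo) / (levels - 1) if hi > lo else 1.0
    pos = (v - lo) / scale                        # fractional grid positions in [0, levels-1]
    low = np.floor(pos)
    prob_up = pos - low                           # round up with this probability
    q = low + (rng.random(v.shape) < prob_up)     # unbiased: E[q] equals pos
    return lo + q * scale                         # back to the original range

client_update = rng.normal(size=10)
print(stochastic_quantize(client_update))
```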
arXiv Detail & Related papers (2023-12-10T04:44:53Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
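As a centralised stand-in for this idea, the sketch below fits a Gaussian mixture to pooled features with scikit-learn and flags low-likelihood samples as novel; FedGMM's federated EM updates are not reproduced, and the data and threshold are illustrative assumptions.

```python
# Centralised GMM sketch of the density-modelling / novelty-detection idea;
# NOT the federated EM algorithm of FedGMM.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Pooled 2-D features standing in for heterogeneous client data (three modes).
features = np.vstack([rng.normal(loc=c, size=(100, 2)) for c in (-2.0, 0.0, 2.0)])

gmm = GaussianMixture(n_components=3, random_state=0).fit(features)

threshold = np.quantile(gmm.score_samples(features), 0.01)   # low-likelihood cut-off
test = np.array([[0.1, -0.2], [8.0, 8.0]])                   # typical point vs. outlier
print(gmm.score_samples(test) < threshold)                   # novelty flags per test point
```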
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- FedRC: Tackling Diverse Distribution Shifts Challenge in Federated Learning by Robust Clustering [4.489171618387544]
Federated Learning (FL) is a machine learning paradigm that safeguards privacy by retaining client data on edge devices.
In this paper, we identify the learning challenges posed by the simultaneous occurrence of diverse distribution shifts.
We propose a novel clustering algorithm framework, dubbed as FedRC, which adheres to our proposed clustering principle.
arXiv Detail & Related papers (2023-01-29T06:50:45Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
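A toy sketch of this correspondence, under the assumption of least-squares clients so the MAP step stays closed-form: each client computes a MAP estimate under a Gaussian prior centred at the server parameters (hard E-step), and the server re-centres the prior at the average of those estimates (M-step), which is exactly the FedAvg aggregation rule. Names and data below are illustrative.

```python
# Toy hard-EM view of federated averaging; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
clients = []
for _ in range(5):
    X = rng.normal(size=(40, 3))
    theta_i = rng.normal(size=3)                   # each client has its own optimum
    clients.append((X, X @ theta_i + 0.1 * rng.normal(size=40)))

server = np.zeros(3)
prior_prec = 1.0                                   # precision of the Gaussian prior

for _ in range(20):
    local = []
    for X, y in clients:
        # Hard E-step: MAP of the client parameters (ridge pull towards the server prior).
        A = X.T @ X + prior_prec * np.eye(3)
        local.append(np.linalg.solve(A, X.T @ y + prior_prec * server))
    server = np.mean(local, axis=0)                # M-step: re-centre the prior == FedAvg

print("server parameters after hard EM / FedAvg:", np.round(server, 2))
```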
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Communication-Efficient Agnostic Federated Averaging [39.761808414613185]
In distributed learning settings, the training algorithm can potentially be biased towards different clients.
We propose a communication-efficient distributed algorithm called Agnostic Federated Averaging (or AgnosticFedAvg) to minimize the domain-agnostic objective proposed in Mohri et al.
arXiv Detail & Related papers (2021-04-06T19:01:18Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)