Personalized Federated Learning of Probabilistic Models: A PAC-Bayesian
Approach
- URL: http://arxiv.org/abs/2401.08351v1
- Date: Tue, 16 Jan 2024 13:30:37 GMT
- Title: Personalized Federated Learning of Probabilistic Models: A PAC-Bayesian
Approach
- Authors: Mahrokh Ghoddousi Boroujeni, Andreas Krause, Giancarlo Ferrari Trecate
- Abstract summary: Federated learning aims to infer a shared model from private and decentralized data stored locally by multiple clients.
We propose a PFL algorithm named PAC-PFL for learning probabilistic models within a PAC-Bayesian framework.
Our algorithm collaboratively learns a shared hyper-posterior and regards each client's posterior inference as the personalization step.
- Score: 42.59649764999974
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning aims to infer a shared model from private and
decentralized data stored locally by multiple clients. Personalized federated
learning (PFL) goes one step further by adapting the global model to each
client, enhancing the model's fit for different clients. A significant level of
personalization is required for highly heterogeneous clients but can be
challenging to achieve, especially when clients have small datasets. To address
this problem, we propose a PFL algorithm named PAC-PFL for learning
probabilistic models within a PAC-Bayesian framework that utilizes differential
privacy to handle data-dependent priors. Our algorithm collaboratively learns a
shared hyper-posterior and regards each client's posterior inference as the
personalization step. By establishing and minimizing a generalization bound on
the average true risk of clients, PAC-PFL effectively combats over-fitting.
PAC-PFL achieves accurate and well-calibrated predictions, supported by
experiments on a dataset of photovoltaic panel power generation, FEMNIST
dataset (Caldas et al., 2019), and Dirichlet-partitioned EMNIST dataset (Cohen
et al., 2017).
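For context, the sketch below gives the classical PAC-Bayesian bound that frameworks like PAC-PFL build on. This is the generic McAllester-style single-task bound, not the paper's client-averaged, data-dependent-prior version:

```latex
% With probability at least 1 - \delta over an i.i.d. sample S of size n,
% for every posterior Q and a prior P fixed before seeing S:
\mathbb{E}_{h \sim Q}\big[L(h)\big] \;\le\;
\mathbb{E}_{h \sim Q}\big[\hat{L}_S(h)\big]
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```

Per the abstract, PAC-PFL's departure from this template is that the prior is data-dependent (handled via differential privacy) and the bound controls the average true risk across clients.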
Related papers
- FedAPA: Server-side Gradient-Based Adaptive Personalized Aggregation for Federated Learning on Heterogeneous Data [5.906966694759679]
FedAPA is a novel PFL method featuring a server-side, gradient-based adaptive aggregation strategy to generate personalized models.
FedAPA guarantees theoretical convergence and achieves superior accuracy and computational efficiency compared to 10 PFL competitors across three datasets.
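A minimal sketch of server-side, gradient-based adaptive aggregation in the spirit of this summary (function and variable names are hypothetical; the actual FedAPA update rule may differ):

```python
import numpy as np

def personalized_aggregate(client_params, agg_weights):
    """Each row of agg_weights is one client's (learnable) mixing vector over
    all uploaded client models; row i yields client i's personalized model."""
    # client_params: (num_clients, dim); agg_weights: (num_clients, num_clients)
    return agg_weights @ client_params

def server_weight_step(agg_weights, weight_grads, lr=0.1):
    """Gradient step on the aggregation weights (gradients assumed to be
    derived server-side from client feedback), then renormalize each row
    so it stays a convex combination."""
    w = agg_weights - lr * weight_grads
    w = np.exp(w - w.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)
```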
arXiv Detail & Related papers (2025-02-11T11:00:58Z)
- Look Back for More: Harnessing Historical Sequential Updates for Personalized Federated Adapter Tuning [50.45027483522507]
Existing personalized federated learning (PFL) approaches rely solely on the clients' latest updated models.
We propose pFedSeq, designed for personalizing adapters to fine-tune a foundation model in FL.
In pFedSeq, the server maintains and trains a sequential learner, which processes a sequence of past adapter updates from clients.
To effectively capture the cross-client and cross-step relations hidden in previous updates, pFedSeq adopts the powerful selective state space model.
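A rough sketch of a server-side sequential learner over past adapter updates. pFedSeq uses a selective state space model here; this sketch substitutes a GRU purely to keep the example self-contained, and all names are illustrative:

```python
import torch
import torch.nn as nn

class ServerSequentialLearner(nn.Module):
    """Maps a client's sequence of past adapter updates to a personalized
    adapter delta (GRU stand-in for the paper's selective state space model)."""
    def __init__(self, adapter_dim, hidden_dim=128):
        super().__init__()
        self.seq = nn.GRU(adapter_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, adapter_dim)

    def forward(self, update_history):
        # update_history: (num_clients, num_rounds, adapter_dim), oldest first
        out, _ = self.seq(update_history)
        return self.head(out[:, -1])  # one personalized adapter per client
```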
arXiv Detail & Related papers (2025-01-03T06:10:09Z)
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
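A generic bi-level MAP template consistent with this summary (the paper's concrete objective may differ):

```latex
% Inner level: each client i computes a MAP-personalized model under a
% shared prior with parameters \phi; outer level: the server fits \phi.
\theta_i^{*}(\phi) = \arg\max_{\theta_i}
  \Big[ \log p(D_i \mid \theta_i) + \log p(\theta_i \mid \phi) \Big],
\qquad
\phi^{*} = \arg\min_{\phi} \sum_{i=1}^{N} \mathcal{L}_i\big(\theta_i^{*}(\phi)\big)
```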
arXiv Detail & Related papers (2024-05-29T11:28:06Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
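One simple way the similarity-then-weighted-aggregation step could look, as a hedged sketch (the per-client summary vectors and the softmax weighting are assumptions, not the paper's exact construction):

```python
import numpy as np

def client_similarity(summaries):
    """Cosine similarity between per-client summary vectors (e.g., statistics
    of locally generated samples). summaries: (num_clients, dim)."""
    s = summaries / np.linalg.norm(summaries, axis=1, keepdims=True)
    return s @ s.T

def aggregation_weights(sim, temperature=0.5):
    """Softmax over similarities: row i says how heavily client i weights
    each other client's contribution in the collaborative aggregation."""
    logits = sim / temperature
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)
```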
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data uniform sampling strategy for federated learning (FedSampling).
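A minimal sketch of data-uniform client sampling: picking clients with probability proportional to local dataset size makes the round's pooled data approximate a uniform sample over all data points. The actual FedSampling estimates sizes in a privacy-preserving way; this sketch ignores that aspect:

```python
import random

def sample_clients(client_sizes, k):
    """Draw k clients with probability proportional to local dataset size.
    client_sizes: {client_id: num_samples}."""
    ids = list(client_sizes)
    weights = [client_sizes[c] for c in ids]
    return random.choices(ids, weights=weights, k=k)
```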
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- Confidence-aware Personalized Federated Learning via Variational Expectation Maximization [34.354154518009956]
Personalized federated learning (PFL) is a distributed learning scheme to train a shared model across clients.
We present a novel framework for PFL based on hierarchical modeling and variational inference.
arXiv Detail & Related papers (2023-05-21T20:12:27Z)
- FedDWA: Personalized Federated Learning with Dynamic Weight Adjustment [20.72576355616359]
We propose a new PFL algorithm called FedDWA (Federated Learning with Dynamic Weight Adjustment) to address client heterogeneity.
FedDWA computes personalized aggregation weights based on collected models from clients.
We conduct extensive experiments using five real datasets and the results demonstrate that FedDWA can significantly reduce the communication traffic and achieve much higher model accuracy than the state-of-the-art approaches.
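A hedged sketch of computing personalized aggregation weights from distances between client models; the distance-to-softmax mapping is an assumption, not necessarily FedDWA's exact rule:

```python
import numpy as np

def dynamic_weights(models):
    """Personalized aggregation weights from pairwise parameter distances:
    clients with closer models weight each other more. models: (n, dim)."""
    dist = np.linalg.norm(models[:, None, :] - models[None, :, :], axis=-1)
    logits = -dist  # smaller distance -> larger weight
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)

def personalize_all(models):
    return dynamic_weights(models) @ models  # row i: client i's model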
arXiv Detail & Related papers (2023-05-10T13:12:07Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), FedGMM, which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
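A minimal sketch of the GMM-based uncertainty / novel-sample detection idea on one client, using scikit-learn (synthetic data and the 1%-quantile threshold are illustrative choices):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_local = rng.normal(size=(500, 8))          # one client's input features

# Fit a GMM to the client's input distribution.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_local)

# Flag query points far less likely than the bulk of the training data.
threshold = np.quantile(gmm.score_samples(X_local), 0.01)
X_query = rng.normal(loc=5.0, size=(10, 8))  # deliberately shifted samples
is_novel = gmm.score_samples(X_query) < threshold
```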
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Federated Learning of Shareable Bases for Personalization-Friendly Image Classification [54.72892987840267]
FedBasis learns a small set of shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
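A hedged sketch of the basis-combination idea: a new client only fits its few mixing coefficients while the shared bases stay frozen (the gradient-descent loop and `loss_grad` callback are illustrative assumptions):

```python
import numpy as np

def combine(bases, coeffs):
    """Personalized model = linear combination of shared basis models.
    bases: (num_bases, dim); coeffs: (num_bases,)."""
    return coeffs @ bases

def fit_new_client(bases, loss_grad, steps=100, lr=0.05):
    """Learn only the combination coefficients for a new client.
    loss_grad(model) returns d(loss)/d(model params) on local data."""
    coeffs = np.full(len(bases), 1.0 / len(bases))
    for _ in range(steps):
        g_model = loss_grad(combine(bases, coeffs))
        coeffs -= lr * (bases @ g_model)  # chain rule: d(loss)/d(coeffs)
    return coeffs
```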
arXiv Detail & Related papers (2023-04-16T20:19:18Z)
- Personalized Federated Learning on Long-Tailed Data via Adversarial Feature Augmentation [24.679535905451758]
PFL aims to learn personalized models for each client based on the knowledge across all clients in a privacy-preserving manner.
Existing PFL methods assume that the underlying global data across all clients are uniformly distributed without considering the long-tail distribution.
We propose Federated Learning with Adversarial Feature Augmentation (FedAFA) to address this joint problem in PFL.
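A generic FGSM-style sketch of adversarial feature augmentation: perturb minority-class features in the direction that increases the loss and keep the perturbed copies as extra training data. This is a textbook instance of the general technique, not FedAFA's exact procedure:

```python
import torch
import torch.nn.functional as F

def adversarial_feature_augment(features, labels, classifier, eps=0.1):
    """Return loss-increasing perturbed copies of the given features;
    `classifier` maps features to logits (assumed, for illustration)."""
    feats = features.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(feats), labels)
    loss.backward()
    return (feats + eps * feats.grad.sign()).detach()
```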
arXiv Detail & Related papers (2023-03-27T13:00:20Z)
- Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
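A minimal sketch of a per-client learnable visual prompt; this additive variant is an assumption (pFedPT itself attaches prompts around the image), and only the prompt is personalized while the backbone stays shared:

```python
import torch
import torch.nn as nn

class VisualPrompt(nn.Module):
    """Learnable perturbation added to every input image of one client."""
    def __init__(self, channels=3, height=32, width=32):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, channels, height, width))

    def forward(self, x):  # x: (batch, C, H, W)
        return x + self.prompt
```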
arXiv Detail & Related papers (2023-03-15T15:02:15Z)
- Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning [0.0]
Federated learning (FL) is a promising decentralized deep learning (DL) framework that enables DL models to be trained collaboratively across clients without sharing private data.
In this paper, we propose a novel framework, namely Personalized Privacy-Preserving Federated Learning (PPPFL).
Our proposed framework outperforms multiple FL baselines on different datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2023-02-22T07:24:08Z)
- Personalizing or Not: Dynamically Personalized Federated Learning with Incentives [37.42347737911428]
We consider personalized federated learning (PFL) for learning personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
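A hedged sketch of the adopt-global-when-better rule and the personalization rate it induces (the validation-metric comparison is an illustrative reading of the summary):

```python
def choose_model(personal, global_model, val_metric):
    """Keep the personalized model only when it beats the global one
    on the client's local validation data."""
    return personal if val_metric(personal) >= val_metric(global_model) else global_model

def personalization_rate(decisions):
    """Fraction of clients that opted into personalization (decisions is a
    list of booleans, one per client)."""
    return sum(decisions) / len(decisions)
```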
arXiv Detail & Related papers (2022-08-12T09:51:20Z)
- Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
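A minimal sketch of personalized sparse masks, assuming magnitude-based mask selection (how FedSpa actually chooses its masks may differ):

```python
import numpy as np

def magnitude_mask(weights, sparsity=0.8):
    """Binary mask keeping the largest-magnitude weights; pruned entries
    are neither trained nor communicated."""
    k = max(1, int(weights.size * (1 - sparsity)))
    thresh = np.partition(np.abs(weights).ravel(), -k)[-k]
    return (np.abs(weights) >= thresh).astype(weights.dtype)

def masked_sgd_step(weights, grad, mask, lr=0.01):
    """Local update restricted to the client's personalized sparse support."""
    return weights - lr * grad * mask
```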
arXiv Detail & Related papers (2022-01-27T08:43:11Z)
- PFL-MoE: Personalized Federated Learning Based on Mixture of Experts [1.8757823231879849]
Federated learning (FL) avoids data sharing among training nodes so as to protect data privacy.
PFL-MoE is a generic approach and can be instantiated by integrating existing PFL algorithms.
We demonstrate the effectiveness of PFL-MoE by training the LeNet-5 and VGG-16 models on the Fashion-MNIST dataset.
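A minimal sketch of the mixture-of-experts idea with two experts, a shared global model and a client's personalized model, blended by a learned per-example gate (the sigmoid gate over flattened inputs is an assumption):

```python
import torch
import torch.nn as nn

class PFLMoE(nn.Module):
    """Gate blends a shared global expert with the client's personal expert.
    in_dim must equal the flattened input size (e.g., C*H*W for images)."""
    def __init__(self, global_expert, personal_expert, in_dim):
        super().__init__()
        self.global_expert, self.personal_expert = global_expert, personal_expert
        self.gate = nn.Sequential(nn.Linear(in_dim, 1), nn.Sigmoid())

    def forward(self, x):
        g = self.gate(x.flatten(1))  # (batch, 1) mixing weight
        return g * self.personal_expert(x) + (1 - g) * self.global_expert(x)
```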
arXiv Detail & Related papers (2020-12-31T12:51:14Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
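A hedged sketch of first-order client weighting: weight each other client by how much moving toward its model reduces this client's validation loss, per unit of parameter movement, clipping negative contributions (the exact normalization is an illustrative choice):

```python
import numpy as np

def first_order_weights(own_val_loss, others_val_losses, step_norms):
    """Positive loss-improvement per unit step toward each other client's
    model, renormalized to sum to one (or all zeros if nothing helps)."""
    gains = (own_val_loss - np.asarray(others_val_losses)) / (np.asarray(step_norms) + 1e-12)
    w = np.maximum(gains, 0.0)
    return w / w.sum() if w.sum() > 0 else w
```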
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.