Personalization Disentanglement for Federated Learning: An explainable
perspective
- URL: http://arxiv.org/abs/2306.03570v2
- Date: Thu, 13 Jul 2023 06:55:56 GMT
- Title: Personalization Disentanglement for Federated Learning: An explainable
perspective
- Authors: Peng Yan, Guodong Long
- Abstract summary: This paper addresses PFL by explicitly disentangling latent representations into two parts that capture shared knowledge and client-specific personalization.
The disentanglement is achieved by a novel Federated Dual Variational Autoencoder (FedDVA), which employs two encoders to infer the two types of representations.
- Score: 28.780213981859514
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized federated learning (PFL) jointly trains a variety of local
models by balancing knowledge sharing across clients against per-client model
personalization. This paper addresses PFL by explicitly disentangling latent
representations into two parts, one capturing the shared knowledge and one
capturing client-specific personalization, which leads to more reliable and
effective PFL. The disentanglement is achieved by a novel Federated Dual
Variational Autoencoder (FedDVA), which employs two encoders to infer the two
types of representations. FedDVA provides a better understanding of the
trade-off between global knowledge sharing and local personalization in PFL.
Moreover, it can be integrated with existing FL methods and turn them into
personalized models for heterogeneous downstream tasks. Extensive experiments
validate the advantages brought by disentanglement and show that models trained
with disentangled representations substantially outperform their vanilla
counterparts.
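No code accompanies this listing, but the dual-encoder idea in the abstract can be sketched as follows. This is a minimal PyTorch illustration with assumed layer sizes and a standard VAE objective; FedDVA's actual architecture and loss may differ.

```python
import torch
import torch.nn as nn

class DualEncoderVAE(nn.Module):
    """Minimal sketch of a dual-VAE: one encoder for shared knowledge,
    one for client-specific personalization (names and sizes are illustrative)."""
    def __init__(self, x_dim=784, z_shared=32, z_personal=16):
        super().__init__()
        self.enc_shared = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                        nn.Linear(256, 2 * z_shared))
        self.enc_personal = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                          nn.Linear(256, 2 * z_personal))
        self.decoder = nn.Sequential(nn.Linear(z_shared + z_personal, 256), nn.ReLU(),
                                     nn.Linear(256, x_dim))

    @staticmethod
    def reparameterize(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

    def forward(self, x):
        z_s, mu_s, lv_s = self.reparameterize(self.enc_shared(x))
        z_p, mu_p, lv_p = self.reparameterize(self.enc_personal(x))
        x_hat = self.decoder(torch.cat([z_s, z_p], dim=-1))
        # Standard VAE objective: reconstruction plus KL terms for both latents.
        recon = nn.functional.mse_loss(x_hat, x, reduction="mean")
        kl = lambda mu, lv: -0.5 * torch.mean(1 + lv - mu.pow(2) - lv.exp())
        return recon + kl(mu_s, lv_s) + kl(mu_p, lv_p)
```

In an FL round, one would typically aggregate only the shared encoder (and decoder) on the server while keeping the personalization encoder local; FedDVA's exact aggregation rule may differ.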
Related papers
- Spectral Co-Distillation for Personalized Federated Learning [69.97016362754319]
We propose a novel distillation method based on model spectrum information to better capture generic versus personalized representations.
We also introduce a co-distillation framework that establishes a two-way bridge between generic and personalized model training.
We demonstrate the superior performance and efficacy of our proposed spectral co-distillation method, as well as our wait-free training protocol.
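The summary above leaves the mechanism largely implicit; as a heavily hedged illustration, the "two-way bridge" can be read as bidirectional distillation between a generic and a personalized model. The paper's model-spectrum weighting is omitted here, and the temperature is an assumption.

```python
import torch.nn.functional as F

def two_way_distillation(logits_generic, logits_personal, tau=2.0):
    """Bidirectional (co-)distillation terms between a generic and a
    personalized model; the spectrum-based component is not modeled."""
    p_g = F.log_softmax(logits_generic / tau, dim=-1)
    p_p = F.log_softmax(logits_personal / tau, dim=-1)
    # Each model mimics the other's softened predictions (stop-gradient on the target).
    to_generic = F.kl_div(p_g, p_p.exp().detach(), reduction="batchmean") * tau ** 2
    to_personal = F.kl_div(p_p, p_g.exp().detach(), reduction="batchmean") * tau ** 2
    return to_generic, to_personal
```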
arXiv Detail & Related papers (2024-01-29T16:01:38Z) - FediOS: Decoupling Orthogonal Subspaces for Personalization in
Feature-skew Federated Learning [6.076894295435773]
In personalized federated learning (pFL), clients may have heterogeneous (also known as non-IID) data.
In FediOS, we reformulate the decoupling into two feature extractors (generic and personalized) and one shared prediction head.
The shared prediction head is trained to balance the importance of generic and personalized features during inference.
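A rough PyTorch sketch of this decoupling, with assumed layer sizes and the simplest possible way for a shared head to weigh both feature blocks:

```python
import torch
import torch.nn as nn

class DecoupledClientModel(nn.Module):
    """Sketch: generic + personalized feature extractors with a shared head.
    In FL, the generic extractor and head would presumably be aggregated
    globally while the personalized extractor stays on the client."""
    def __init__(self, in_dim=784, feat_dim=64, num_classes=10):
        super().__init__()
        self.generic = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.personal = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # The shared head sees both feature blocks and learns how to weigh them.
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x):
        f_g = self.generic(x)
        f_p = self.personal(x)
        return self.head(torch.cat([f_g, f_p], dim=-1))
```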
arXiv Detail & Related papers (2023-11-30T13:50:38Z) - PFL-GAN: When Client Heterogeneity Meets Generative Models in
Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation scheme.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
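How the similarity scores translate into aggregation is not spelled out above; the following is a generic sketch of similarity-weighted parameter aggregation. The softmax weighting is an assumption, and PFL-GAN's actual GAN-sharing step is not shown.

```python
import torch

def weighted_aggregate(state_dicts, similarities):
    """Aggregate client parameters with weights derived from per-client
    similarity scores (generic illustration, not PFL-GAN's exact procedure)."""
    weights = torch.softmax(torch.tensor(similarities, dtype=torch.float32), dim=0)
    aggregated = {}
    for key in state_dicts[0]:
        # Stack the same parameter from every client and take a weighted mean.
        stacked = torch.stack([sd[key].float() for sd in state_dicts], dim=0)
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        aggregated[key] = (w * stacked).sum(dim=0)
    return aggregated
```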
arXiv Detail & Related papers (2023-08-23T22:38:35Z) - FedJETs: Efficient Just-In-Time Personalization with Federated Mixture
of Experts [48.78037006856208]
FedJETs is a novel solution that uses a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s).
Our approach can improve accuracy up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance.
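A minimal sketch of the MoE routing described above; the expert architecture, the gating input, and the top-k value are assumptions.

```python
import torch
import torch.nn as nn

class FederatedMoE(nn.Module):
    """Sketch: a gating function routes each input to its top-k experts,
    each expert being a small classifier specialized on a subset of classes."""
    def __init__(self, in_dim=512, num_classes=100, num_experts=5, k=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(in_dim, num_classes) for _ in range(num_experts))
        self.gate = nn.Linear(in_dim, num_experts)
        self.k = k

    def forward(self, x):
        scores = torch.softmax(self.gate(x), dim=-1)        # (batch, experts)
        topk_val, topk_idx = scores.topk(self.k, dim=-1)    # route to k experts
        out = torch.zeros(x.size(0), self.experts[0].out_features, device=x.device)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e
                if mask.any():
                    # Weight each selected expert's output by its gate score.
                    out[mask] += topk_val[mask, slot, None] * expert(x[mask])
        return out
```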
arXiv Detail & Related papers (2023-06-14T15:47:52Z) - Federated Learning of Shareable Bases for Personalization-Friendly Image
Classification [54.72892987840267]
FedBasis learns a few shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
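A small sketch of the basis-combination idea. For simplicity it mixes the bases' outputs, whereas the paper combines model weights (the two coincide only for linear heads), and the number of bases is an assumption.

```python
import torch
import torch.nn as nn

class BasisCombination(nn.Module):
    """Sketch: K shared "basis" models; each client keeps only a K-dim
    coefficient vector and predicts with the weighted combination."""
    def __init__(self, in_dim=512, num_classes=10, num_bases=4):
        super().__init__()
        self.bases = nn.ModuleList(nn.Linear(in_dim, num_classes) for _ in range(num_bases))
        # The only client-specific parameters: the mixing coefficients.
        self.coeffs = nn.Parameter(torch.zeros(num_bases))

    def forward(self, x):
        alpha = torch.softmax(self.coeffs, dim=0)
        outputs = torch.stack([b(x) for b in self.bases], dim=0)  # (K, batch, classes)
        return (alpha.view(-1, 1, 1) * outputs).sum(dim=0)
```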
arXiv Detail & Related papers (2023-04-16T20:19:18Z) - Personalized Federated Learning on Long-Tailed Data via Adversarial
Feature Augmentation [24.679535905451758]
PFL aims to learn personalized models for each client based on the knowledge across all clients in a privacy-preserving manner.
Existing PFL methods assume that the global data are uniformly distributed across all clients and do not consider long-tailed distributions.
We propose Federated Learning with Adversarial Feature Augmentation (FedAFA) to address this joint problem in PFL.
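The augmentation rule is not detailed above; the following is a generic sketch of adversarial augmentation in feature space using one signed-gradient step. FedAFA's actual objective for tail classes may differ.

```python
import torch
import torch.nn.functional as F

def adversarial_feature_augmentation(features, labels, classifier, eps=0.1):
    """Perturb intermediate features in the direction that increases the
    classification loss, then reuse the perturbed copies as extra training
    samples (e.g., for minority classes). Generic sketch, not FedAFA's exact rule."""
    feats = features.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(feats), labels)
    grad, = torch.autograd.grad(loss, feats)
    return (feats + eps * grad.sign()).detach()
```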
arXiv Detail & Related papers (2023-03-27T13:00:20Z) - Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
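A minimal sketch of a client-specific visual prompt; an additive pixel prompt is an assumption, and pFedPT may attach prompts differently.

```python
import torch
import torch.nn as nn

class VisualPrompt(nn.Module):
    """Sketch: a client-specific, learnable pixel prompt added to every
    input image before it enters a (possibly shared) backbone."""
    def __init__(self, channels=3, height=32, width=32):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, channels, height, width))

    def forward(self, images):
        return images + self.prompt  # broadcast over the batch dimension
```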
arXiv Detail & Related papers (2023-03-15T15:02:15Z) - Group Personalized Federated Learning [15.09115201646396]
Federated learning (FL) can help promote data privacy by training a shared model in a decentralized manner on the physical devices of clients.
In this paper, we present the group personalization approach for applications of FL.
arXiv Detail & Related papers (2022-10-04T19:20:19Z) - Federated Learning from Pre-Trained Models: A Contrastive Learning
Approach [43.893267526525904]
Federated Learning (FL) is a machine learning paradigm that allows decentralized clients to learn collaboratively without sharing their private data.
Excessive computation and communication demands pose challenges to current FL frameworks.
We propose a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models.
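A hedged sketch of that fusion idea: frozen pre-trained backbones provide features and only a small fusion head is trained; the backbone choice, concatenation, and head form are assumptions.

```python
import torch
import torch.nn as nn

class RepresentationFusion(nn.Module):
    """Sketch: concatenate features from several frozen pre-trained models
    and train only a lightweight fusion head (the only part exchanged in FL)."""
    def __init__(self, backbones, feat_dims, num_classes=10):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)
        for p in self.backbones.parameters():
            p.requires_grad_(False)          # backbones stay fixed
        self.fusion = nn.Linear(sum(feat_dims), num_classes)

    def forward(self, x):
        with torch.no_grad():
            feats = [b(x) for b in self.backbones]
        return self.fusion(torch.cat(feats, dim=-1))
```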
arXiv Detail & Related papers (2022-09-21T03:16:57Z) - Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
Experiments show that FML achieves better performance than alternatives in typical federated learning settings.
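A brief sketch of one local mutual-learning step in the spirit described above; the KL-based mutual terms and loss weight are assumptions.

```python
import torch
import torch.nn.functional as F

def fml_local_step(global_model, personal_model, x, y,
                   opt_global, opt_personal, alpha=0.5):
    """One local step: both models fit the labels and mimic each other's
    softened predictions (generic mutual-learning sketch)."""
    logits_g = global_model(x)
    logits_p = personal_model(x)
    kl = lambda a, b: F.kl_div(F.log_softmax(a, dim=-1),
                               F.softmax(b, dim=-1).detach(),
                               reduction="batchmean")
    loss_g = F.cross_entropy(logits_g, y) + alpha * kl(logits_g, logits_p)
    loss_p = F.cross_entropy(logits_p, y) + alpha * kl(logits_p, logits_g)
    opt_global.zero_grad(); opt_personal.zero_grad()
    (loss_g + loss_p).backward()   # detached targets keep the gradients separate
    opt_global.step(); opt_personal.step()
    return loss_g.item(), loss_p.item()
```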
arXiv Detail & Related papers (2020-06-27T09:35:03Z)