FedBiP: Heterogeneous One-Shot Federated Learning with Personalized Latent Diffusion Models
        - URL: http://arxiv.org/abs/2410.04810v1
 - Date: Mon, 7 Oct 2024 07:45:18 GMT
 - Title: FedBiP: Heterogeneous One-Shot Federated Learning with Personalized Latent Diffusion Models
 - Authors: Haokun Chen, Hang Li, Yao Zhang, Gengyuan Zhang, Jinhe Bi, Philip Torr, Jindong Gu, Denis Krompass, Volker Tresp
 - Abstract summary: One-Shot Federated Learning (OSFL), a special decentralized machine learning paradigm, has recently gained significant attention.
Current methods face challenges due to client data heterogeneity and limited data quantity when applied to real-world OSFL systems.
We propose Federated Bi-Level Personalization (FedBiP), which personalizes the pretrained LDM at both instance-level and concept-level.
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract:   One-Shot Federated Learning (OSFL), a special decentralized machine learning paradigm, has recently gained significant attention. OSFL requires only a single round of client data or model upload, which reduces communication costs and mitigates privacy threats compared to traditional FL. Despite these promising prospects, existing methods face challenges due to client data heterogeneity and limited data quantity when applied to real-world OSFL systems. Recently, Latent Diffusion Models (LDM) have shown remarkable advancements in synthesizing high-quality images through pretraining on large-scale datasets, thereby presenting a potential solution to overcome these issues. However, directly applying a pretrained LDM to heterogeneous OSFL results in significant distribution shifts in the synthetic data, leading to performance degradation in classification models trained on such data. This issue is particularly pronounced in rare domains, such as medical imaging, which are underrepresented in the LDM's pretraining data. To address this challenge, we propose Federated Bi-Level Personalization (FedBiP), which personalizes the pretrained LDM at both the instance level and the concept level. FedBiP thereby synthesizes images that follow each client's local data distribution without violating privacy regulations. FedBiP is also the first approach to simultaneously address feature space heterogeneity and client data scarcity in OSFL. Our method is validated through extensive experiments on three OSFL benchmarks with feature space heterogeneity, as well as on challenging medical and satellite image datasets with label heterogeneity. The results demonstrate the effectiveness of FedBiP, which substantially outperforms other OSFL methods.
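The abstract gives no implementation details, but the concept-level half of the bi-level personalization can be pictured as a textual-inversion-style loop: each client learns one new token embedding for a frozen latent diffusion model and later uses it in prompts to synthesize in-distribution images. The sketch below is a hypothetical illustration assuming a diffusers-style Stable Diffusion API; the function and token names are ours, not the paper's, and FedBiP additionally performs instance-level personalization in the latent space, which is omitted here.

```python
import torch
from diffusers import StableDiffusionPipeline

def learn_client_concept(pipe, images, prompt, steps=300, lr=5e-4,
                         token="<client-concept>"):
    """Learn one new token embedding (the client's concept) on a frozen LDM,
    in the style of textual inversion; VAE and U-Net stay fixed."""
    tok, enc, unet, vae = pipe.tokenizer, pipe.text_encoder, pipe.unet, pipe.vae
    tok.add_tokens(token)
    enc.resize_token_embeddings(len(tok))
    token_id = tok.convert_tokens_to_ids(token)
    embeds = enc.get_input_embeddings().weight
    for p in list(unet.parameters()) + list(vae.parameters()):
        p.requires_grad_(False)
    opt = torch.optim.AdamW([embeds], lr=lr)
    for step in range(steps):
        img = images[step % len(images)]      # (1, 3, 512, 512), values in [-1, 1]
        with torch.no_grad():
            latents = vae.encode(img).latent_dist.sample() * vae.config.scaling_factor
        noise = torch.randn_like(latents)
        t = torch.randint(0, pipe.scheduler.config.num_train_timesteps, (1,))
        noisy = pipe.scheduler.add_noise(latents, noise, t)
        ids = tok(prompt.format(token), padding="max_length", truncation=True,
                  max_length=tok.model_max_length, return_tensors="pt").input_ids
        pred = unet(noisy, t, encoder_hidden_states=enc(ids)[0]).sample
        loss = torch.nn.functional.mse_loss(pred, noise)
        loss.backward()
        # Zero gradients of every embedding row except the newly added token's.
        embeds.grad[torch.arange(embeds.shape[0]) != token_id] = 0
        opt.step(); opt.zero_grad()
    return token  # afterwards, pipe(prompt.format(token)) samples client-style images

# pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# learn_client_concept(pipe, client_images, "a photo of {}")
```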
 
       
      
        Related papers
        - FedP3E: Privacy-Preserving Prototype Exchange for Non-IID IoT Malware Detection in Cross-Silo Federated Learning
We propose FedP3E, a novel FL framework that supports indirect cross-client representation sharing while maintaining data privacy. We evaluate FedP3E on the N-BaIoT dataset under realistic cross-silo scenarios with varying degrees of data imbalance.
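The summary above only names prototype exchange; a minimal sketch of what per-class prototype computation and server-side aggregation could look like follows (the helper names are hypothetical, not from FedP3E):

```python
import torch

def client_prototypes(features, labels, num_classes):
    """Per-class mean feature vector; only these means leave the client,
    never the raw samples."""
    protos = torch.zeros(num_classes, features.shape[1])
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos

def server_aggregate(all_client_protos):
    """Element-wise average of the clients' prototypes (one plausible rule)."""
    return torch.stack(all_client_protos).mean(dim=0)
```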
arXiv  Detail & Related papers  (2025-07-09T20:07:35Z) - FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
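FedMAP's actual prior and bi-level scheme are not spelled out in this summary; as a loose illustration only, a MAP-style local objective augments the data loss with a prior that pulls client weights toward globally shared parameters (a Gaussian prior is assumed here purely for the sketch):

```python
import torch

def map_local_loss(local_model, global_params, data_loss, prior_weight=0.01):
    """Negative log-posterior up to a constant: data loss plus a Gaussian
    prior centered on the globally shared parameters."""
    prior = sum(((p - g.detach()) ** 2).sum()
                for p, g in zip(local_model.parameters(), global_params))
    return data_loss + prior_weight * prior
```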
arXiv  Detail & Related papers  (2024-05-29T11:28:06Z) - Stable Diffusion-based Data Augmentation for Federated Learning with Non-IID Data
Federated Learning (FL) is a promising paradigm for decentralized and collaborative model training.
FL struggles with a significant performance reduction and poor convergence when confronted with Non-Independent and Identically Distributed (Non-IID) data distributions.
We introduce Gen-FedSD, a novel approach that harnesses the powerful capability of state-of-the-art text-to-image foundation models.
arXiv  Detail & Related papers  (2024-05-13T16:57:48Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv  Detail & Related papers  (2024-04-29T05:55:23Z) - Distributionally Robust Alignment for Medical Federated Vision-Language Pre-training Under Data Heterogeneity
We propose Federated Distributionally Robust Alignment (FedDRA) for medical vision-language pre-training.
FedDRA achieves robust vision-language alignment under heterogeneous conditions.
Our method also adapts well to various medical pre-training methods.
arXiv  Detail & Related papers  (2024-04-05T01:17:25Z) - Fake It Till Make It: Federated Learning with Consensus-Oriented Generation
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
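The knowledge-distillation component mentioned above can be illustrated with the standard KD loss, where the received global model acts as a frozen teacher during local training (a generic sketch, not necessarily FedCOG's exact loss):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on hard labels plus temperature-scaled KL divergence
    toward the frozen global (teacher) model."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    return alpha * ce + (1.0 - alpha) * kl
```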
arXiv  Detail & Related papers  (2023-12-10T18:49:59Z) - One-Shot Federated Learning with Classifier-Guided Diffusion Models
One-shot federated learning (OSFL) has gained attention in recent years due to its low communication cost.
In this paper, we explore the novel opportunities that diffusion models bring to OSFL and propose FedCADO.
FedCADO generates data that complies with clients' distributions and subsequently trains the aggregated model on the server.
arXiv  Detail & Related papers  (2023-11-15T11:11:25Z) - Exploring One-shot Semi-supervised Federated Learning with A Pre-trained Diffusion Model
We propose FedDISC, a Federated Diffusion-Inspired Semi-supervised Co-training method.
We first extract prototypes of the labeled server data and use these prototypes to predict pseudo-labels of the client data.
For each category, we compute the cluster centroids and domain-specific representations to signify the semantic and stylistic information of their distributions.
These representations are sent back to the server, which uses the pre-trained diffusion model to generate synthetic datasets complying with the client distributions and trains a global model on them.
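The prototype-based pseudo-labeling step described above can be sketched as nearest-prototype assignment under cosine similarity (illustrative only; names are ours):

```python
import torch
import torch.nn.functional as F

def pseudo_label(client_features, server_prototypes):
    """Assign each unlabeled client feature the label of the most similar
    server prototype under cosine similarity."""
    sims = F.normalize(client_features, dim=1) @ \
           F.normalize(server_prototypes, dim=1).T
    return sims.argmax(dim=1)
```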
arXiv  Detail & Related papers  (2023-05-06T14:22:33Z) - Personalized Federated Learning under Mixture of Distributions
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
 Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
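As a rough sketch of the GMM idea (using scikit-learn rather than the paper's federated EM procedure), one can fit a mixture to a client's features and flag novel samples by low log-likelihood:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_feats = rng.normal(size=(500, 16))       # stand-in for one client's features

gmm = GaussianMixture(n_components=3, covariance_type="full").fit(client_feats)
threshold = np.quantile(gmm.score_samples(client_feats), 0.01)
new_feats = rng.normal(loc=5.0, size=(10, 16))  # deliberately off-distribution
is_novel = gmm.score_samples(new_feats) < threshold
```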
arXiv  Detail & Related papers  (2023-05-01T20:04:46Z) - FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
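Matching the local loss landscape with a small synthetic set is commonly realized via gradient matching; the generic sketch below illustrates that idea (it is not necessarily FedDM's exact objective, and the helper name is ours). Here `syn_x` is a leaf tensor with `requires_grad=True` and `syn_opt` optimizes it directly:

```python
import torch

def refine_synthetic(model, loss_fn, real_x, real_y, syn_x, syn_y, syn_opt):
    """One step moving the synthetic set so that its gradient on the current
    model matches the gradient produced by the real local data."""
    g_real = torch.autograd.grad(loss_fn(model(real_x), real_y),
                                 model.parameters())
    g_syn = torch.autograd.grad(loss_fn(model(syn_x), syn_y),
                                model.parameters(), create_graph=True)
    match = sum(((a - b.detach()) ** 2).sum() for a, b in zip(g_syn, g_real))
    syn_opt.zero_grad(); match.backward(); syn_opt.step()
```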
arXiv  Detail & Related papers  (2022-07-20T04:55:18Z) - Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging
We present a robust and label-efficient self-supervised FL framework for medical image analysis.
Specifically, we introduce a novel distributed self-supervised pre-training paradigm into the existing FL pipeline.
We show that our self-supervised FL algorithm generalizes well to out-of-distribution data and learns federated models more effectively in limited label scenarios.
arXiv  Detail & Related papers  (2022-05-17T18:33:43Z) - Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
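Data-free knowledge distillation typically trains a generator to produce pseudo-samples on which the global (student) model is pushed toward the client ensemble; the compressed sketch below illustrates that pattern (hypothetical, not FedFTG's full method):

```python
import torch
import torch.nn.functional as F

def distill_round(generator, student, teachers, g_opt, s_opt, z_dim=100):
    """One adversarial round: the generator seeks pseudo-samples where the
    student disagrees with the client ensemble, then the student is
    fine-tuned to match the ensemble on fresh pseudo-samples."""
    # (1) Generator step: maximize student-ensemble disagreement.
    fake = generator(torch.randn(64, z_dim))
    t_logits = torch.stack([t(fake) for t in teachers]).mean(dim=0)
    gap = F.kl_div(F.log_softmax(student(fake), dim=1),
                   F.softmax(t_logits, dim=1), reduction="batchmean")
    g_opt.zero_grad(); (-gap).backward(); g_opt.step()

    # (2) Student step: match the ensemble on newly generated samples.
    with torch.no_grad():
        fake = generator(torch.randn(64, z_dim))
        t_logits = torch.stack([t(fake) for t in teachers]).mean(dim=0)
    kd = F.kl_div(F.log_softmax(student(fake), dim=1),
                  F.softmax(t_logits, dim=1), reduction="batchmean")
    s_opt.zero_grad(); kd.backward(); s_opt.step()
```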
arXiv  Detail & Related papers  (2022-03-17T11:18:17Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv  Detail & Related papers  (2021-11-28T19:03:39Z) 
        This list is automatically generated from the titles and abstracts of the papers on this site.
       
     