GFPL: Generative Federated Prototype Learning for Resource-Constrained and Data-Imbalanced Vision Task
- URL: http://arxiv.org/abs/2602.21873v1
- Date: Wed, 25 Feb 2026 12:57:45 GMT
- Title: GFPL: Generative Federated Prototype Learning for Resource-Constrained and Data-Imbalanced Vision Task
- Authors: Shiwei Lu, Yuhang He, Jiashuo Li, Qiang Wang, Yihong Gong
- Abstract summary: Federated learning (FL) facilitates the secure utilization of decentralized images. FL faces two critical challenges in real-world deployment: ineffective knowledge fusion and prohibitive communication overhead. We propose a novel Generative Federated Prototype Learning framework to address these issues.
- Score: 43.723840781330914
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning (FL) facilitates the secure utilization of decentralized images, advancing applications in medical image recognition and autonomous driving. However, conventional FL faces two critical challenges in real-world deployment: ineffective knowledge fusion caused by model updates biased toward majority-class features, and prohibitive communication overhead due to frequent transmissions of high-dimensional model parameters. Inspired by the human brain's efficiency in knowledge integration, we propose a novel Generative Federated Prototype Learning (GFPL) framework to address these issues. Within this framework, a prototype generation method based on Gaussian Mixture Model (GMM) captures the statistical information of class-wise features, while a prototype aggregation strategy using Bhattacharyya distance effectively fuses semantically similar knowledge across clients. In addition, these fused prototypes are leveraged to generate pseudo-features, thereby mitigating feature distribution imbalance across clients. To further enhance feature alignment during local training, we devise a dual-classifier architecture, optimized via a hybrid loss combining Dot Regression and Cross-Entropy. Extensive experiments on benchmarks show that GFPL improves model accuracy by 3.6% under imbalanced data settings while maintaining low communication cost.
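The abstract describes matching class prototypes across clients by Bhattacharyya distance between their Gaussian (GMM-component) statistics. The paper's implementation is not given here, but the matching step can be illustrated with a minimal sketch, assuming each prototype is a diagonal-covariance Gaussian summarized by a mean and per-dimension variance vector; the function name and toy values below are illustrative, not from the paper.

```python
import math

def bhattacharyya_diag(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal-covariance Gaussians.

    mu*  : per-dimension means of a class prototype
    var* : per-dimension variances (must be positive)
    """
    d = 0.0
    for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2):
        v = 0.5 * (v1 + v2)                          # averaged variance
        d += 0.125 * (m1 - m2) ** 2 / v              # mean-separation term
        d += 0.5 * math.log(v / math.sqrt(v1 * v2))  # covariance-mismatch term
    return d

# Toy prototypes: two clients' prototypes for the same class should be
# closer (small distance) than prototypes of different classes.
proto_a = ([0.0, 0.0], [1.0, 1.0])
proto_b = ([0.2, -0.1], [1.1, 0.9])   # semantically similar to proto_a
proto_c = ([3.0, 3.0], [1.0, 1.0])    # semantically dissimilar

d_same = bhattacharyya_diag(*proto_a, *proto_b)
d_diff = bhattacharyya_diag(*proto_a, *proto_c)
```

In an aggregation strategy of this kind, prototypes whose pairwise distance falls below a threshold would be fused (e.g., by averaging their Gaussian statistics), so semantically similar knowledge across clients is merged while distinct classes stay separate.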
Related papers
- Deep Leakage with Generative Flow Matching Denoiser [54.05993847488204]
We introduce a new deep leakage (DL) attack that integrates a generative Flow Matching (FM) prior into the reconstruction process. Our approach consistently outperforms state-of-the-art attacks across pixel-level, perceptual, and feature-based similarity metrics.
arXiv Detail & Related papers (2026-01-21T14:51:01Z) - FedProtoKD: Dual Knowledge Distillation with Adaptive Class-wise Prototype Margin for Heterogeneous Federated Learning [18.44030373279699]
Prototype-based Heterogeneous Federated Learning (HFL) methods emerge as a promising solution to address statistical heterogeneity and privacy challenges. We propose FedProtoKD in a Heterogeneous Federated Learning setting, using an enhanced dual-knowledge distillation mechanism to improve system performance with clients' logits and prototype representations. FedProtoKD achieved average accuracy improvements ranging from 1.13% up to 34.13% across various settings and significantly outperforms existing state-of-the-art HFL methods.
arXiv Detail & Related papers (2025-08-26T13:14:29Z) - Robust Federated Learning on Edge Devices with Domain Heterogeneity [13.362209980631876]
Federated Learning (FL) allows collaborative training while ensuring data privacy across distributed edge devices. We introduce a new framework to address this challenge by improving the generalization ability of the FL global model. We introduce FedAPC, a prototype-based FL framework designed to enhance feature diversity and model robustness.
arXiv Detail & Related papers (2025-05-15T09:53:14Z) - HeteroTune: Efficient Federated Learning for Large Heterogeneous Models [35.53420882449293]
We propose HeteroTune, a novel federated fine-tuning paradigm for large, heterogeneous models operating under limited communication and computation budgets. The core of our method lies in a novel architecture, DeMA, which enables flexible and efficient aggregation of heterogeneous models. We provide both theoretical analysis and empirical evidence showing that HeteroTune achieves state-of-the-art performance and efficiency across diverse tasks and model architectures.
arXiv Detail & Related papers (2024-11-25T09:58:51Z) - Client Contribution Normalization for Enhanced Federated Learning [4.726250115737579]
Mobile devices, including smartphones and laptops, generate decentralized and heterogeneous data.
Federated Learning (FL) offers a promising alternative by enabling collaborative training of a global model across decentralized devices without data sharing.
This paper focuses on data-dependent heterogeneity in FL and proposes a novel approach leveraging mean latent representations extracted from locally trained models.
arXiv Detail & Related papers (2024-11-10T04:03:09Z) - Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z) - pFedAFM: Adaptive Feature Mixture for Batch-Level Personalization in Heterogeneous Federated Learning [34.01721941230425]
We propose a model-heterogeneous personalized Federated learning approach with Adaptive Feature Mixture (pFedAFM) for supervised learning tasks.
It significantly outperforms 7 state-of-the-art MHPFL methods, achieving up to 7.93% accuracy improvement.
arXiv Detail & Related papers (2024-04-27T09:52:59Z) - Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-10T18:49:59Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on Federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.