Heterogeneous Federated Learning with Prototype Alignment and Upscaling
- URL: http://arxiv.org/abs/2507.04310v1
- Date: Sun, 06 Jul 2025 09:34:41 GMT
- Title: Heterogeneous Federated Learning with Prototype Alignment and Upscaling
- Authors: Gyuejeong Lee, Jihwan Shin, Daeyoung Choi
- Abstract summary: Prototype Normalization (ProtoNorm) is a novel PBFL framework that addresses suboptimal prototype separation. We show that our approach better separates prototypes and thus consistently outperforms existing HtFL approaches.
- Score: 0.7373617024876724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Heterogeneity in data distributions and model architectures remains a significant challenge in federated learning (FL). Various heterogeneous FL (HtFL) approaches have recently been proposed to address this challenge. Among them, prototype-based FL (PBFL) has emerged as a practical framework that only shares per-class mean activations from the penultimate layer. However, PBFL approaches often suffer from suboptimal prototype separation, limiting their discriminative power. We propose Prototype Normalization (ProtoNorm), a novel PBFL framework that addresses this limitation through two key components: Prototype Alignment (PA) and Prototype Upscaling (PU). The PA method draws inspiration from the Thomson problem in classical physics, optimizing global prototype configurations on a unit sphere to maximize angular separation; subsequently, the PU method increases prototype magnitudes to enhance separation in Euclidean space. Extensive evaluations on benchmark datasets show that our approach better separates prototypes and thus consistently outperforms existing HtFL approaches. Notably, since ProtoNorm inherits the communication efficiency of PBFL and the PA is performed server-side, it is particularly suitable for resource-constrained environments.
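The two server-side steps described in the abstract lend themselves to a compact sketch. The following NumPy illustration is inferred solely from the abstract's Thomson-problem analogy and is not the authors' implementation: the repulsive-potential objective and all hyperparameters (`steps`, `lr`, the upscaling factor `rho`) are assumptions.

```python
# Minimal sketch of ProtoNorm's server-side PA and PU steps, inferred from
# the abstract above. The Thomson-style repulsive potential and all
# hyperparameters (steps, lr, rho) are illustrative assumptions, not the
# authors' implementation.
import numpy as np

def prototype_alignment(protos: np.ndarray, steps: int = 500, lr: float = 0.01) -> np.ndarray:
    """PA: spread K class prototypes over the unit sphere (Thomson-style)
    by projected gradient descent on the repulsive potential
    sum_{i<j} 1 / ||p_i - p_j||, maximizing angular separation."""
    K = protos.shape[0]
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)   # start on the sphere
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]                     # (K, K, d) pairwise p_i - p_j
        dist = np.linalg.norm(diff, axis=-1) + np.eye(K)         # +I avoids divide-by-zero at i == j
        repulsion = (diff / dist[:, :, None] ** 3).sum(axis=1)   # negative gradient of the potential
        p = p + lr * repulsion                                   # push prototypes apart
        p /= np.linalg.norm(p, axis=1, keepdims=True)            # project back onto the sphere
    return p

def prototype_upscaling(protos: np.ndarray, rho: float = 10.0) -> np.ndarray:
    """PU: enlarge prototype magnitudes so that angular gaps on the sphere
    translate into larger Euclidean gaps."""
    return rho * protos

# Example: 10 classes, 512-dimensional penultimate-layer prototypes.
global_protos = prototype_upscaling(prototype_alignment(np.random.randn(10, 512)))
```

Since both steps operate only on the K-by-d prototype matrix, they run entirely on the server and add nothing to client communication, consistent with the abstract's efficiency claim.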
Related papers
- Proto-EVFL: Enhanced Vertical Federated Learning via Dual Prototype with Extremely Unaligned Data [28.626677790020082]
In vertical federated learning (VFL), unaligned samples across different parties can be extremely class-imbalanced. We propose Proto-EVFL, an enhanced VFL framework via dual prototypes. We prove that Proto-EVFL, as the first bi-level optimization framework in VFL, has a convergence rate of 1/√T.
arXiv Detail & Related papers (2025-07-30T08:48:33Z) - TinyProto: Communication-Efficient Federated Learning with Sparse Prototypes in Resource-Constrained Environments [0.8287206589886879]
Communication efficiency in federated learning (FL) remains a critical challenge for resource-constrained environments. We propose TinyProto, which addresses this challenge through Class-wise Prototype Sparsification (CPS) and adaptive prototype scaling. Our experiments show TinyProto reduces communication costs by up to 4x compared to existing methods while maintaining performance.
arXiv Detail & Related papers (2025-07-06T10:24:33Z) - Robust Federated Learning on Edge Devices with Domain Heterogeneity [13.362209980631876]
Federated Learning (FL) allows collaborative training while ensuring data privacy across distributed edge devices. We introduce FedAPC, a prototype-based FL framework designed to enhance feature diversity and model robustness, thereby improving the generalization ability of the FL global model.
arXiv Detail & Related papers (2025-05-15T09:53:14Z) - FedORGP: Guiding Heterogeneous Federated Learning with Orthogonality Regularization on Global Prototypes [31.93057335216804]
Federated Learning (FL) has emerged as an essential framework for distributed machine learning. Current approaches face limitations in achieving sufficient separation between classes. This paper introduces FedORGP, which encourages intra-class prototype similarity and expands the inter-class angular separation. A toy regularizer in this spirit is sketched after this list.
arXiv Detail & Related papers (2025-02-22T07:02:51Z) - Sequential Compression Layers for Efficient Federated Learning in Foundational Models [2.6733991338938026]
We propose a novel, simple, and more effective parameter-efficient fine-tuning method that does not rely on LoRA. This solution addresses the bottlenecks associated with LoRA in federated fine-tuning and outperforms recent LoRA-based approaches.
arXiv Detail & Related papers (2024-12-09T22:06:47Z) - A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs [57.35402286842029]
We propose a novel Aligned Federated Primal-Dual (A-FedPD) method, which constructs virtual dual updates to align global consensus and local dual variables for protracted unparticipated clients. We provide a comprehensive analysis of the A-FedPD method's optimization and generalization efficiency.
arXiv Detail & Related papers (2024-09-27T17:00:32Z) - Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences [49.14535254003683]
We introduce PaLoRA, a novel parameter-efficient method that addresses multi-task trade-offs in machine learning. Our experiments show that PaLoRA outperforms state-of-the-art MTL and PFL baselines across various datasets.
arXiv Detail & Related papers (2024-07-10T21:25:51Z) - PPFL: A Personalized Federated Learning Framework for Heterogeneous Population [27.93326556470552]
Personalization aims to characterize individual preferences and is widely applied across many fields. Conventional personalized methods operate in a centralized manner, potentially exposing raw data when pooling individual information. We develop a flexible and interpretable personalized framework within the Population Personalized Federated Learning (PPFL) paradigm.
arXiv Detail & Related papers (2023-10-22T16:06:27Z) - Vertical Semi-Federated Learning for Efficient Online Advertising [50.18284051956359]
Semi-VFL (Vertical Semi-Federated Learning) is proposed as a practical industrial realization of VFL.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
arXiv Detail & Related papers (2022-09-30T17:59:27Z) - Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z) - Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z) - Hybrid Federated Learning: Algorithms and Implementation [61.0640216394349]
Federated learning (FL) is a recently proposed distributed machine learning paradigm dealing with distributed and private data sets.
We propose a new model-matching-based problem formulation for hybrid FL.
We then propose an efficient algorithm that can collaboratively train the global and local models to deal with full and partial featured data.
arXiv Detail & Related papers (2020-12-22T23:56:03Z)
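As referenced in the FedORGP entry above, the orthogonality idea on global prototypes can be illustrated with a toy penalty. This is an assumption-laden sketch of one plausible regularizer of that kind; the loss actually used in the FedORGP paper may differ.

```python
# Toy orthogonality penalty on global prototypes, in the spirit of the
# FedORGP summary above; an illustrative assumption, not the paper's
# actual regularizer.
import numpy as np

def orthogonality_penalty(protos: np.ndarray) -> float:
    """Penalize cosine similarity between distinct class prototypes,
    driving inter-class angles toward 90 degrees."""
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)  # unit-normalize rows
    gram = p @ p.T                         # pairwise cosine similarities
    off_diag = gram - np.eye(len(p))       # drop each prototype's self-similarity
    return float((off_diag ** 2).sum())    # zero iff prototypes are mutually orthogonal
```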