GC-Fed: Gradient Centralized Federated Learning with Partial Client Participation
- URL: http://arxiv.org/abs/2503.13180v2
- Date: Thu, 20 Mar 2025 08:41:33 GMT
- Title: GC-Fed: Gradient Centralized Federated Learning with Partial Client Participation
- Authors: Jungwon Seo, Ferhat Ozgur Catak, Chunming Rong, Kibeom Hong, Minhoe Kim
- Abstract summary: Federated Learning (FL) enables privacy-preserving multi-source information fusion (MSIF). Many existing drift-mitigation strategies rely on reference-based techniques. GC-Fed employs a hyperplane as a historically independent reference point to guide local training and enhance inter-client alignment.
- Score: 6.769127514113163
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables privacy-preserving multi-source information fusion (MSIF) but is challenged by client drift in highly heterogeneous data settings. Many existing drift-mitigation strategies rely on reference-based techniques--such as gradient adjustments or proximal loss--that use historical snapshots (e.g., past gradients or previous global models) as reference points. When only a subset of clients participates in each training round, these historical references may not accurately capture the overall data distribution, leading to unstable training. In contrast, our proposed Gradient Centralized Federated Learning (GC-Fed) employs a hyperplane as a historically independent reference point to guide local training and enhance inter-client alignment. GC-Fed comprises two complementary components: Local GC, which centralizes gradients during local training, and Global GC, which centralizes updates during server aggregation. In our hybrid design, Local GC is applied to feature-extraction layers to harmonize client contributions, while Global GC refines classifier layers to stabilize round-wise performance. Theoretical analysis and extensive experiments on benchmark FL tasks demonstrate that GC-Fed effectively mitigates client drift and achieves up to a 20% improvement in accuracy under heterogeneous and partial participation conditions.
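The abstract describes two mechanisms: a gradient-centralization (GC) operator applied during local training (Local GC, on feature-extraction layers) and the same operator applied to aggregated updates at the server (Global GC, on classifier layers). The snippet below is a minimal sketch of that hybrid design, assuming the standard gradient-centralization operator (subtracting, for each output unit, the mean of its gradient over the remaining axes); the dictionary-of-arrays model representation, layer partitioning, and function names are illustrative assumptions, not the authors' implementation.

```python
"""Minimal GC-Fed-style sketch based only on the abstract above.

Assumptions (not from the paper's code): the GC operator is the standard
gradient-centralization step; models are dicts of NumPy arrays; which
parameter names count as "feature" vs. "classifier" is illustrative.
"""
import numpy as np


def centralize(g: np.ndarray) -> np.ndarray:
    """Project a gradient onto the zero-mean hyperplane (per output unit)."""
    if g.ndim < 2:  # leave biases / 1-D parameters untouched
        return g
    return g - g.mean(axis=tuple(range(1, g.ndim)), keepdims=True)


def local_gc_step(params, grads, lr, feature_layers):
    """Local GC: one client SGD step, centralizing feature-extractor gradients."""
    out = {}
    for name, g in grads.items():
        if name in feature_layers:
            g = centralize(g)
        out[name] = params[name] - lr * g
    return out


def global_gc_aggregate(client_updates, classifier_layers):
    """Global GC: average client updates, centralizing classifier-layer updates."""
    agg = {}
    for name in client_updates[0]:
        u = np.mean([cu[name] for cu in client_updates], axis=0)
        agg[name] = centralize(u) if name in classifier_layers else u
    return agg
```

The point the abstract makes about partial participation carries over in this sketch: centralization needs no stored history (past gradients or previous global models), so the reference stays valid regardless of which subset of clients participates in a round.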
Related papers
- Hierarchical Federated Learning with Multi-Timescale Gradient Correction [24.713834338757195]
The paper proposes a multi-timescale gradient correction (MTGC) methodology to resolve this issue. The key idea is to introduce distinct control variables to (i) correct the client gradient toward the group gradient, i.e., to reduce client model drift caused by local updates on individual datasets (see the control-variate sketch after this list).
arXiv Detail & Related papers (2024-09-27T05:10:05Z) - Accelerating Federated Learning by Selecting Beneficial Herd of Local Gradients [40.84399531998246]
Federated Learning (FL) is a distributed machine learning framework in communication network systems.
Non-Independent and Identically Distributed (Non-IID) data negatively affect the convergence efficiency of the global model.
We propose the BHerd strategy which selects a beneficial herd of local gradients to accelerate the convergence of the FL model.
arXiv Detail & Related papers (2024-03-25T09:16:59Z) - FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z) - FedLoGe: Joint Local and Generic Federated Learning under Long-tailed Data [46.29190753993415]
Federated Long-Tailed Learning (Fed-LT) is a paradigm wherein data collected from decentralized local clients manifests a globally prevalent long-tailed distribution.
This paper introduces an approach termed Federated Local and Generic Model Training in Fed-LT (FedLoGe), which enhances both local and generic model performance.
arXiv Detail & Related papers (2024-01-17T05:04:33Z) - Fed-GraB: Federated Long-tailed Learning with Self-Adjusting Gradient Balancer [47.82735112096587]
This paper investigates a federated long-tailed learning (Fed-LT) task in which each client holds a locally heterogeneous dataset.
We propose a method termed Fed-GraB, comprising a Self-adjusting Gradient Balancer (SGB) module.
We show that Fed-GraB achieves state-of-the-art performance on representative datasets such as CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist.
arXiv Detail & Related papers (2023-10-11T15:28:39Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing their data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which applies Class prototype Similarity Distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - FedCME: Client Matching and Classifier Exchanging to Handle Data Heterogeneity in Federated Learning [5.21877373352943]
Data heterogeneity across clients is one of the key challenges in Federated Learning (FL).
We propose a novel FL framework named FedCME by client matching and classifier exchanging.
Experimental results demonstrate that FedCME performs better than FedAvg, FedProx, MOON and FedRS on popular federated learning benchmarks.
arXiv Detail & Related papers (2023-07-17T15:40:45Z) - Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z) - Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - SphereFed: Hyperspherical Federated Learning [22.81101040608304]
A key challenge is handling non-i.i.d. data across multiple clients.
We introduce the Hyperspherical Federated Learning (SphereFed) framework to address the non-i.i.d. issue.
We show that the calibration solution can be computed efficiently and in a distributed manner without direct access to local data.
arXiv Detail & Related papers (2022-07-19T17:13:06Z) - Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model at the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
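As referenced in the MTGC entry above, the sketch below illustrates the generic idea of control variables correcting a client gradient toward a group gradient. It is a SCAFFOLD-style toy example under stated assumptions, not MTGC's actual multi-timescale scheme; all names and values are made up for illustration.

```python
"""Illustrative control-variate correction, per the MTGC entry above.
This is a generic SCAFFOLD-style sketch, not the paper's multi-timescale
algorithm; variable names and values are hypothetical."""
import numpy as np


def corrected_local_gradient(local_grad, client_control, group_control):
    """Nudge the client gradient toward the group direction: g_i - c_i + c."""
    return local_grad - client_control + group_control


g_i = np.array([1.0, -2.0, 0.5])   # this client's raw local gradient
c_i = np.array([0.4, -0.9, 0.1])   # running estimate of this client's gradient
c = np.array([0.1, -0.3, 0.2])     # running estimate of the group gradient
print(corrected_local_gradient(g_i, c_i, c))
```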