Geometric Prior-Guided Federated Prompt Calibration
- URL: http://arxiv.org/abs/2512.07208v1
- Date: Mon, 08 Dec 2025 06:42:32 GMT
- Title: Geometric Prior-Guided Federated Prompt Calibration
- Authors: Fei Luo, Ziwei Zhao, Mingxuan Wang, Duoyang Li, Zhe Qian, Jiayi Tuo, Chenyue Zhou, Yanbiao Ma
- Abstract summary: Federated Prompt Learning (FPL) offers a parameter-efficient solution for collaboratively training large models. Existing methods, focusing on aggregation or regularization, fail to address the root cause of local training bias. We propose Geometry-Guided Text Prompt Calibration (GGTPC), a novel framework that directly corrects this bias by providing clients with a global geometric prior.
- Score: 21.766231067185956
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Prompt Learning (FPL) offers a parameter-efficient solution for collaboratively training large models, but its performance is severely hindered by data heterogeneity, which causes locally trained prompts to become biased. Existing methods, focusing on aggregation or regularization, fail to address this root cause of local training bias. To this end, we propose Geometry-Guided Text Prompt Calibration (GGTPC), a novel framework that directly corrects this bias by providing clients with a global geometric prior. This prior, representing the shape of the global data distribution derived from the covariance matrix, is reconstructed on the server in a privacy-preserving manner. Clients then use a novel Geometry-Prior Calibration Layer (GPCL) to align their local feature distributions with this global prior during training. Extensive experiments show GGTPC's effectiveness. On the label-skewed CIFAR-100 dataset ($\beta$=0.1), it outperforms the state-of-the-art by 2.15\%. Under extreme skew ($\beta$=0.01), it improves upon the baseline by 9.17\%. Furthermore, as a plug-and-play module on the domain-skewed Office-Home dataset, it boosts FedAvg's performance by 4.60\%. These results demonstrate that GGTPC effectively mitigates data heterogeneity by correcting the fundamental local training bias, serving as a versatile module to enhance various FL algorithms.
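The abstract does not spell out the calibration mechanics, but a covariance-based geometric prior suggests a whitening-recoloring view: whiten features with local batch statistics, then recolor them with the server-reconstructed global covariance. Below is a minimal PyTorch sketch of such a layer; the class name, the eigendecomposition-based recoloring, and all shapes are illustrative assumptions, not the paper's actual GPCL.

```python
import torch
import torch.nn as nn

class GeometryPriorCalibrationLayer(nn.Module):
    """Hypothetical GPCL-style layer: whiten features with local batch
    statistics, then recolor them with a server-provided global covariance
    prior (the "shape" of the global distribution)."""

    def __init__(self, global_mean: torch.Tensor, global_cov: torch.Tensor, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.register_buffer("global_mean", global_mean)                # (d,)
        # Symmetric square root of the global covariance, used for recoloring.
        evals, evecs = torch.linalg.eigh(global_cov)                    # (d,), (d, d)
        color = evecs @ torch.diag(evals.clamp_min(eps).sqrt()) @ evecs.T
        self.register_buffer("color", color)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:            # (batch, d)
        # Whiten using the local batch statistics ...
        mu = feats.mean(dim=0)
        cov = torch.cov(feats.T) + self.eps * torch.eye(feats.shape[1], device=feats.device)
        evals, evecs = torch.linalg.eigh(cov)
        whiten = evecs @ torch.diag(evals.clamp_min(self.eps).rsqrt()) @ evecs.T
        # ... then recolor with the global geometric prior.
        return (feats - mu) @ whiten @ self.color + self.global_mean
```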
Related papers
- Optimal Transport-based Domain Alignment as a Preprocessing Step for Federated Learning [0.48342038441006796]
Federated learning (FL) is a subfield of machine learning that avoids sharing local data with a central server. In FL, fusing locally-trained models with unbalanced datasets may deteriorate the performance of global model aggregation. We introduce an Optimal Transport-based preprocessing algorithm that aligns the datasets by minimizing the distributional discrepancy of data across the edge devices.
arXiv Detail & Related papers (2025-06-04T15:35:55Z)
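As a rough illustration of the preprocessing idea above, the sketch below uses the POT library to move one client's samples toward a shared reference set via an entropic-OT barycentric projection; the reference set, uniform weights, and regularization value are assumptions, not the paper's exact algorithm.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def align_to_reference(client_X: np.ndarray, ref_X: np.ndarray, reg: float = 0.1) -> np.ndarray:
    """Move client samples toward a shared reference distribution via the
    barycentric projection of an entropic optimal-transport plan."""
    n, m = len(client_X), len(ref_X)
    a = np.full(n, 1.0 / n)                  # uniform weights on client samples
    b = np.full(m, 1.0 / m)                  # uniform weights on reference samples
    M = ot.dist(client_X, ref_X)             # squared-Euclidean cost matrix (n x m)
    G = ot.sinkhorn(a, b, M / M.max(), reg)  # entropic OT coupling (n x m)
    # Each client point is mapped to the coupling-weighted mean of the
    # reference points it is transported to.
    return (G @ ref_X) / G.sum(axis=1, keepdims=True)
```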
- FedHL: Federated Learning for Heterogeneous Low-Rank Adaptation via Unbiased Aggregation [6.5370850242187855]
Federated Learning (FL) facilitates the fine-tuning of Foundation Models (FMs) using distributed data sources. Low-Rank Adaptation (LoRA) is gaining popularity due to its low communication costs and strong performance. Existing methods lack formal convergence guarantees due to parameter truncation and biased gradient updates.
arXiv Detail & Related papers (2025-05-24T04:12:12Z)
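To make the truncation issue above concrete: if clients train LoRA adapters of different ranks, averaging the full low-rank products instead of rank-truncated factors avoids discarding any client's update. The sketch below illustrates that general idea only; FedHL's actual unbiased aggregation rule is not reproduced here.

```python
import torch

def aggregate_lora_updates(As, Bs, weights):
    """Aggregate heterogeneous-rank LoRA updates without truncating any
    client: average the full products B_i @ A_i instead of the factors.
    As[i]: (r_i, d_in); Bs[i]: (d_out, r_i); weights sum to 1."""
    delta_w = sum(w * (B @ A) for A, B, w in zip(As, Bs, weights))
    return delta_w  # (d_out, d_in); clients may re-factorize locally
```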
- SMaRt: Improving GANs with Score Matching Regularity [114.43433222721025]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex. We find that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold. We show that our approach can consistently boost the performance of various state-of-the-art GANs on real-world datasets, with pre-trained diffusion models acting as the approximate score function.
arXiv Detail & Related papers (2023-11-30T03:05:14Z)
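A hedged sketch of how a score-matching regularity term could be attached to a generator loss, with a frozen pretrained diffusion score model pushing samples toward the data manifold as the summary above describes; the surrogate inner-product penalty and the weight `lam` are illustrative choices, not SMaRt's exact objective.

```python
import torch
import torch.nn.functional as F

def generator_loss_with_score_regularity(d_fake_logits, fake_images, score_fn, lam=0.1):
    """Non-saturating generator loss plus a score-based regularizer.
    `score_fn(x)` is a frozen, pretrained estimate of grad_x log p_data(x)."""
    adv = F.softplus(-d_fake_logits).mean()
    with torch.no_grad():
        score = score_fn(fake_images)  # direction of increasing data density
    # Gradient of this term w.r.t. fake_images is -score, so descent pushes
    # generated samples along the score, toward the real-data manifold.
    sm_reg = -(score * fake_images).sum() / fake_images.shape[0]
    return adv + lam * sm_reg
```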
- Fed-GraB: Federated Long-tailed Learning with Self-Adjusting Gradient Balancer [47.82735112096587]
This paper investigates a federated long-tailed learning (Fed-LT) task in which each client holds a locally heterogeneous dataset.
We propose a method termed $\texttt{Fed-GraB}$, comprising a Self-adjusting Gradient Balancer (SGB) module.
We show that $\texttt{Fed-GraB}$ achieves state-of-the-art performance on representative datasets such as CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist.
arXiv Detail & Related papers (2023-10-11T15:28:39Z)
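The summary above gives no equations, so the sketch below shows one plausible self-adjusting balancer: a simple proportional controller that equalizes per-class gradient pressure by re-weighting the loss. The update rule is a hypothetical stand-in for Fed-GraB's SGB, not its published controller.

```python
import torch
import torch.nn.functional as F

class GradientBalancer:
    """Toy closed-loop balancer: classes under more gradient pressure than
    average are down-weighted, tail classes are boosted."""

    def __init__(self, num_classes: int, ctrl_lr: float = 0.01):
        self.weights = torch.ones(num_classes)
        self.ctrl_lr = ctrl_lr

    def update(self, per_class_grad_norm: torch.Tensor):
        pressure = per_class_grad_norm / (per_class_grad_norm.mean() + 1e-8)
        self.weights *= torch.exp(-self.ctrl_lr * (pressure - 1.0))

    def loss(self, logits, targets):
        return F.cross_entropy(logits, targets, weight=self.weights)
```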
- Locally Adaptive Federated Learning [30.19411641685853]
Federated learning is a paradigm of distributed machine learning in which multiple clients coordinate with a central server to learn a model.
Standard federated optimization methods such as Federated Averaging (FedAvg) ensure generalization among the clients.
We propose locally adaptive federated learning algorithms that leverage the local geometric information of each client's objective function.
arXiv Detail & Related papers (2023-07-12T17:02:32Z)
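One classical way to exploit local geometry, sketched below, is a client-side Polyak-style step size gamma = (f(x) - f*) / ||grad f(x)||^2; this is an illustrative example of a locally adaptive rule under assumed knowledge of f*, not necessarily the rule the paper analyzes.

```python
import torch

def locally_adaptive_step(params, grads, loss_val, f_star=0.0, cap=1.0):
    """Polyak-style step using only local information:
    gamma = (f(x) - f*) / ||grad f(x)||^2, capped for safety."""
    g2 = float(sum(g.pow(2).sum() for g in grads))
    gamma = min((loss_val - f_star) / (g2 + 1e-12), cap)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(g, alpha=-gamma)
```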
- PerAda: Parameter-Efficient Federated Learning Personalization with Generalization Guarantees [95.87604231887353]
Existing pFL methods introduce high communication and computation costs or are vulnerable to test-time distribution shifts.
PerAda is a parameter-efficient pFL framework based on knowledge distillation that achieves superior performance, especially under test-time distribution shifts.
Our code is available at https://github.com/NV/PerAda.
arXiv Detail & Related papers (2023-02-13T19:00:37Z)
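A minimal sketch of the adapter-based personalization idea: each client trains a personalized adapter with a proximal pull toward the shared global adapter (the paper additionally trains the global adapter via knowledge distillation, which is omitted here). Function and argument names are assumptions.

```python
import torch
import torch.nn.functional as F

def personalized_adapter_loss(logits, targets, local_adapter, global_adapter, mu=0.1):
    """Task loss plus a proximal pull of the personalized adapter toward
    the (frozen) global adapter."""
    task = F.cross_entropy(logits, targets)
    prox = sum((p - g.detach()).pow(2).sum()
               for p, g in zip(local_adapter.parameters(),
                               global_adapter.parameters()))
    return task + 0.5 * mu * prox
```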
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
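The MMD alignment mentioned above can be written compactly; the sketch below computes a biased RBF-kernel MMD between source-like and target-specific feature batches, leaving out DaC's memory bank and any bandwidth selection.

```python
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased RBF-kernel MMD^2 between two feature batches, e.g. source-like
    features `x` and target-specific features `y`."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```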
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
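FedReg's exact mechanism is not reproduced here; the sketch below shows the general shape of an anti-forgetting local objective, using a distillation term toward the frozen global model on auxiliary samples as a simplified stand-in.

```python
import torch
import torch.nn.functional as F

def local_loss_with_forgetting_penalty(model, global_model, x, y, x_aux, lam=1.0):
    """Task loss plus a distillation term that keeps local predictions on
    auxiliary samples `x_aux` close to the frozen global model's."""
    task = F.cross_entropy(model(x), y)
    with torch.no_grad():
        teacher = F.softmax(global_model(x_aux), dim=-1)
    keep = F.kl_div(F.log_softmax(model(x_aux), dim=-1), teacher,
                    reduction="batchmean")
    return task + lam * keep
```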
- Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training [60.892342868936865]
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data heterogeneous-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-31T03:15:28Z)
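The name suggests the schedule: clients are partitioned into groups, training proceeds sequentially within each group and in parallel across groups, and the per-group models are then aggregated. A skeletal sketch, with `local_train` and `aggregate` as caller-supplied stand-ins:

```python
import copy

def grouped_sequential_to_parallel_round(global_model, client_groups,
                                         local_train, aggregate):
    """One round: sequential training within each group (the model is passed
    client to client), parallel across groups, then aggregation."""
    group_models = []
    for group in client_groups:          # groups are independent: parallelizable
        model = copy.deepcopy(global_model)
        for client in group:             # sequential hand-off within the group
            model = local_train(model, client)
        group_models.append(model)
    return aggregate(group_models)
```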
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Tackling the Local Bias in Federated Graph Learning [48.887310972708036]
In federated graph learning (FGL), a global graph is distributed across different clients, where each client holds a subgraph.
Existing FGL methods fail to effectively utilize cross-client edges, losing structural information during training.
We propose a novel FGL framework to make the local models similar to the model trained in a centralized setting.
arXiv Detail & Related papers (2021-10-22T08:22:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.