Cooperative Pseudo Labeling for Unsupervised Federated Classification
- URL: http://arxiv.org/abs/2510.10100v1
- Date: Sat, 11 Oct 2025 08:18:26 GMT
- Title: Cooperative Pseudo Labeling for Unsupervised Federated Classification
- Authors: Kuangpu Guo, Lijun Sheng, Yongcan Yu, Jian Liang, Zilei Wang, Ran He
- Abstract summary: Unsupervised Federated Learning (UFL) aims to collaboratively train a global model across distributed clients without sharing data or accessing label information. We propose a novel method, Federated Cooperative Pseudo Labeling (FedCoPL). In particular, visual prompts containing general image features are aggregated at the server, while text prompts encoding personalized knowledge are retained locally.
- Score: 62.9387841396335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised Federated Learning (UFL) aims to collaboratively train a global model across distributed clients without sharing data or accessing label information. Previous UFL works have predominantly focused on representation learning and clustering tasks. Recently, vision language models (e.g., CLIP) have gained significant attention for their powerful zero-shot prediction capabilities. Leveraging this advancement, classification problems that were previously infeasible under the UFL paradigm now present promising new opportunities, yet remain largely unexplored. In this paper, we extend UFL to the classification problem with CLIP for the first time and propose a novel method, Federated Cooperative Pseudo Labeling (FedCoPL). Specifically, clients estimate and upload their pseudo label distribution, and the server adjusts and redistributes them to avoid global imbalance among classes. Moreover, we introduce a partial prompt aggregation protocol for effective collaboration and personalization. In particular, visual prompts containing general image features are aggregated at the server, while text prompts encoding personalized knowledge are retained locally. Extensive experiments demonstrate the superior performance of our FedCoPL compared to baseline methods. Our code is available at https://github.com/krumpguo/FedCoPL.
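The abstract describes two server-side mechanisms: rebalancing the clients' pseudo-label distributions to avoid global class imbalance, and a partial prompt aggregation protocol (visual prompts averaged across clients, text prompts kept local). The sketch below illustrates both ideas in plain NumPy; the specific rebalancing rule (reweighting toward a uniform global distribution) and the function names are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def server_rebalance(client_dists: np.ndarray) -> np.ndarray:
    """Adjust each client's pseudo-label distribution so the global
    class distribution moves toward uniform (illustrative rule).

    client_dists: array of shape (num_clients, num_classes), rows sum to 1.
    """
    global_dist = client_dists.mean(axis=0)
    num_classes = global_dist.shape[0]
    uniform = np.full(num_classes, 1.0 / num_classes)
    # Up-weight classes that are globally under-represented.
    weights = uniform / np.clip(global_dist, 1e-8, None)
    adjusted = client_dists * weights  # broadcast over clients
    return adjusted / adjusted.sum(axis=1, keepdims=True)

def aggregate_prompts(client_prompts: list[dict]) -> list[dict]:
    """Partial prompt aggregation: average visual prompts at the server,
    while each client's text prompt stays personalized (kept local)."""
    mean_visual = np.mean([p["visual"] for p in client_prompts], axis=0)
    return [{"visual": mean_visual, "text": p["text"]} for p in client_prompts]
```

After aggregation, every client shares the same visual prompt but retains its own text prompt, matching the collaboration/personalization split the abstract describes.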
Related papers
- FedAPT: Federated Adversarial Prompt Tuning for Vision-Language Models [97.35577473867296]
Federated Adversarial Prompt Tuning (FedAPT) is a novel method designed to enhance the adversarial robustness of federated prompt tuning (FPT). To address this issue, we propose a class-aware prompt generator that generates visual prompts from text prompts. Experiments on multiple image classification datasets demonstrate the superiority of FedAPT in improving adversarial robustness.
arXiv Detail & Related papers (2025-09-03T03:46:35Z) - Curriculum Guided Personalized Subgraph Federated Learning [8.721619913104899]
Subgraph Federated Learning (FL) aims to train Graph Neural Networks (GNNs) across distributed private subgraphs. Weighted model aggregation personalizes each local GNN by assigning larger weights to parameters from clients with similar subgraph characteristics. We propose a novel personalized subgraph FL framework called Curriculum guided personalized sUbgraph Federated Learning (CUFL).
arXiv Detail & Related papers (2025-08-30T08:01:36Z) - FedBM: Stealing Knowledge from Pre-trained Language Models for Heterogeneous Federated Learning [33.84409350929454]
We propose a novel framework called Federated Bias eliMinating (FedBM) to eliminate local learning bias in heterogeneous federated learning (FL). FedBM consists of two modules: Linguistic Knowledge-based Construction (LKCC) and Concept-guided Global Distribution Estimation (CGDE).
arXiv Detail & Related papers (2025-02-24T04:35:48Z) - Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models.
arXiv Detail & Related papers (2024-06-24T12:16:51Z) - Realistic Unsupervised CLIP Fine-tuning with Universal Entropy Optimization [101.08992036691673]
This paper explores a realistic unsupervised fine-tuning scenario, considering the presence of out-of-distribution samples from unknown classes.
In particular, we focus on simultaneously enhancing out-of-distribution detection and the recognition of instances associated with known classes.
We present a simple, efficient, and effective approach called Universal Entropy Optimization (UEO).
arXiv Detail & Related papers (2023-08-24T16:47:17Z) - Improving Zero-Shot Generalization for CLIP with Synthesized Prompts [135.4317555866831]
Most existing methods require labeled data for all classes, which may not hold in real-world applications.
We propose a plug-and-play generative approach called SyntHesIzed Prompts (SHIP) to improve existing fine-tuning methods.
arXiv Detail & Related papers (2023-07-14T15:15:45Z) - ProtoCon: Pseudo-label Refinement via Online Clustering and Prototypical Consistency for Efficient Semi-supervised Learning [60.57998388590556]
ProtoCon is a novel method for confidence-based pseudo-labeling.
The online nature of ProtoCon allows it to utilise the label history of the entire dataset in one training cycle. It delivers significant gains and faster convergence over state-of-the-art methods.
arXiv Detail & Related papers (2023-03-22T23:51:54Z) - Fusion of Global and Local Knowledge for Personalized Federated Learning [75.20751492913892]
In this paper, we explore personalized models with low-rank and sparse decomposition.
We propose a two-stage algorithm named Federated learning with mixed Sparse and low-Rank representation (FedSLR).
Under proper assumptions, we show that the GKR trained by FedSLR can at least sub-linearly converge to a stationary point of the regularized problem.
arXiv Detail & Related papers (2023-02-21T23:09:45Z) - Learning Across Domains and Devices: Style-Driven Source-Free Domain Adaptation in Clustered Federated Learning [32.098954477227046]
We propose a novel task in which the clients' data is unlabeled and the server accesses a source labeled dataset for pre-training only.
Our experiments show that our algorithm is able to efficiently tackle the new task outperforming existing approaches.
arXiv Detail & Related papers (2022-10-05T15:23:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.