CLIP-guided Federated Learning on Heterogeneous and Long-Tailed Data
- URL: http://arxiv.org/abs/2312.08648v1
- Date: Thu, 14 Dec 2023 04:07:49 GMT
- Title: CLIP-guided Federated Learning on Heterogeneous and Long-Tailed Data
- Authors: Jiangming Shi, Shanshan Zheng, Xiangbo Yin, Yang Lu, Yuan Xie, Yanyun Qu
- Abstract summary: Federated learning (FL) provides a decentralized machine learning paradigm where a server collaborates with a group of clients to learn a global model without accessing the clients' data.
We propose the CLIP-guided FL (CLIP2FL) method on heterogeneous and long-tailed data.
- Score: 25.56641696086199
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) provides a decentralized machine learning paradigm
where a server collaborates with a group of clients to learn a global model
without accessing the clients' data. User heterogeneity is a significant
challenge for FL, and together with class-distribution imbalance it further
increases the difficulty of FL. Great progress has been made in large
vision-language models, such as Contrastive Language-Image Pre-training (CLIP),
which paves a new way for image classification and object recognition. Inspired
by the success of CLIP on few-shot and zero-shot learning, we use CLIP to
guide federated learning between the server and client models under its
vision-language supervision. CLIP's powerful cross-modality representations and
rich open-vocabulary prior knowledge make it promising for mitigating user
heterogeneity and class-distribution imbalance. In this paper, we
propose the CLIP-guided FL (CLIP2FL) method on heterogeneous and long-tailed
data. In CLIP2FL, the knowledge of the off-the-shelf CLIP model is transferred
to the client-server models, and a bridge is built between the client and
server. Specifically, for client-side learning, knowledge distillation is
conducted between the client models and CLIP to strengthen client-side feature
representation. For server-side learning, in order to mitigate the
heterogeneity and class-distribution imbalance, we generate federated features
to retrain the server model. Prototype contrastive learning, supervised by the
text encoder of CLIP, is introduced to generate federated features from the
client-side gradients, and these features are then used to retrain a balanced
server classifier.
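To make the two stages concrete, here is a minimal PyTorch sketch of the client-side step. It assumes a frozen CLIP ViT-B/32 from the openai/CLIP package, a hypothetical label set, and a temperature-scaled KL distillation loss; it illustrates the described idea and is not the authors' implementation.

```python
# Minimal sketch of CLIP-guided client-side distillation (an illustration, not
# the CLIP2FL reference code). Images passed to CLIP must use its preprocess.
import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _preprocess = clip.load("ViT-B/32", device=device)  # frozen teacher
clip_model.eval()

class_names = ["cat", "dog", "bird"]  # hypothetical label set
prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
with torch.no_grad():
    text_feats = F.normalize(clip_model.encode_text(prompts).float(), dim=-1)

def distillation_loss(client_logits, clip_images, tau=2.0):
    """KL divergence between the client model's predictions and CLIP's
    zero-shot predictions for the same batch (soft targets)."""
    with torch.no_grad():
        img_feats = F.normalize(clip_model.encode_image(clip_images).float(), dim=-1)
        clip_logits = 100.0 * img_feats @ text_feats.t()  # CLIP's logit scale
    teacher = F.softmax(clip_logits / tau, dim=-1)
    student = F.log_softmax(client_logits / tau, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * tau**2
```

And a sketch of the server-side step. The gradient-matching objective is an assumption about how the federated features "depend on the client-side gradients" (in the style of CReFF-like methods); the prototype contrastive term treats CLIP text features as class prototypes. All names are hypothetical.

```python
# Sketch of server-side federated-feature generation (assumptions: a gradient-
# matching objective plus a prototype contrastive term on CLIP text features).
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(fed_feats, labels, text_protos, tau=0.07):
    """InfoNCE-style loss: each federated feature should be most similar to
    the CLIP text prototype of its own class."""
    sims = F.normalize(fed_feats, dim=-1) @ F.normalize(text_protos, dim=-1).t()
    return F.cross_entropy(sims / tau, labels)

def generate_federated_features(fed_feats, labels, classifier, client_grads,
                                text_protos, steps=100, lr=0.1, lam=1.0):
    """Optimize learnable features so the classifier gradient they induce
    matches the aggregated real-feature gradients uploaded by clients."""
    fed_feats = fed_feats.clone().requires_grad_(True)
    opt = torch.optim.SGD([fed_feats], lr=lr)
    for _ in range(steps):
        ce = F.cross_entropy(classifier(fed_feats), labels)
        syn_grad = torch.autograd.grad(ce, classifier.weight, create_graph=True)[0]
        match = 1 - F.cosine_similarity(syn_grad.flatten(),
                                        client_grads.flatten(), dim=0)
        loss = match + lam * prototype_contrastive_loss(fed_feats, labels, text_protos)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return fed_feats.detach()  # used to retrain a balanced server classifier
```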
Related papers
- Personalized Federated Learning via Feature Distribution Adaptation [3.410799378893257]
Federated learning (FL) is a distributed learning framework that leverages commonalities between distributed client datasets to train a global model.
Personalized federated learning (PFL) addresses the mismatch between a single global model and heterogeneous client data by learning individual models tailored to each client.
We propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions.
arXiv Detail & Related papers (2024-11-01T03:03:52Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a class prototype similarity distillation method that aligns the local and global models within a federated framework.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, FedIns, that handles intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - PeFLL: Personalized Federated Learning by Learning to Learn [16.161876130822396]
We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects.
At the core of PeFLL lies a learning-to-learn approach that jointly trains an embedding network and a hypernetwork.
arXiv Detail & Related papers (2023-06-08T19:12:42Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning in which the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - No One Left Behind: Inclusive Federated Learning over Heterogeneous
Devices [79.16481453598266]
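One plausible form of the contrastive online distillation just described, assuming clients exchange representations of a common reference batch and use an InfoNCE loss in which matching rows are positives; this is a sketch, not the paper's protocol.

```python
# Sketch of online knowledge distillation with a contrastive loss over shared
# representations (the reference-batch protocol and temperature are assumed).
import torch
import torch.nn.functional as F

def contrastive_distill(local_reps, peer_reps, tau=0.1):
    """local_reps, peer_reps: (N, D) representations of the SAME reference
    batch, computed locally and aggregated from peers. Matching rows are
    positives; all other rows act as negatives (InfoNCE)."""
    z1 = F.normalize(local_reps, dim=-1)
    z2 = F.normalize(peer_reps, dim=-1)
    logits = z1 @ z2.t() / tau                         # (N, N) similarities
    targets = torch.arange(len(z1), device=z1.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)
```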
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle heterogeneous device capabilities.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z) - An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z) - FedGEMS: Federated Learning of Larger Server Models via Selective
Knowledge Fusion [19.86388925556209]
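The FedAvg correspondence can be written out in two lines under the stated Gaussian assumption (with uniform client weighting and a fixed prior variance sigma^2); this is a compressed paraphrase of the claim, not the paper's full derivation.

```latex
% Hard EM with an isotropic Gaussian prior over client parameters reduces to
% parameter averaging. D_k is client k's dataset, theta the server parameters.
\begin{align*}
\text{E-step (hard):}\quad
  \hat{\theta}_k &= \arg\max_{\theta_k}
    \Big[ \log p(D_k \mid \theta_k)
        + \log \mathcal{N}(\theta_k \mid \theta, \sigma^2 I) \Big]
    && \text{(regularized local training)} \\
\text{M-step:}\quad
  \hat{\theta} &= \arg\max_{\theta} \sum_{k=1}^{K}
    \log \mathcal{N}(\hat{\theta}_k \mid \theta, \sigma^2 I)
    = \frac{1}{K}\sum_{k=1}^{K} \hat{\theta}_k
    && \text{(server-side averaging, as in FedAvg)}
\end{align*}
```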
- FedGEMS: Federated Learning of Larger Server Models via Selective Knowledge Fusion [19.86388925556209]
Federated Learning (FL) has emerged as a viable solution to learn a global model while keeping data private.
In this work, we investigate a novel paradigm that takes advantage of a powerful server model to break through the model-capacity constraints of FL.
arXiv Detail & Related papers (2021-10-21T10:06:44Z) - Personalized Retrogress-Resilient Framework for Real-World Medical
Federated Learning [8.240098954377794]
We propose a personalized retrogress-resilient framework to produce a superior personalized model for each client.
Our experiments on a real-world dermoscopic FL dataset prove that our personalized retrogress-resilient framework outperforms state-of-the-art FL methods.
arXiv Detail & Related papers (2021-10-01T13:24:29Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)