Language-Guided Transformer for Federated Multi-Label Classification
- URL: http://arxiv.org/abs/2312.07165v1
- Date: Tue, 12 Dec 2023 11:03:51 GMT
- Title: Language-Guided Transformer for Federated Multi-Label Classification
- Authors: I-Jieh Liu, Ci-Siang Lin, Fu-En Yang, Yu-Chiang Frank Wang
- Abstract summary: Federated Learning (FL) enables multiple users to collaboratively train a robust model in a privacy-preserving manner without sharing their private data.
Most existing FL approaches consider only traditional single-label image classification, ignoring the effects of transferring the task to multi-label image classification.
We propose Language-Guided Transformer (FedLGT), a novel FL framework that tackles this challenging task by exploiting and transferring knowledge across different clients to learn a robust global model.
- Score: 32.26913287627532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is an emerging paradigm that enables multiple users
to collaboratively train a robust model in a privacy-preserving manner without
sharing their private data. Most existing FL approaches consider only traditional
single-label image classification, ignoring the effects of transferring the task
to multi-label image classification. Moreover, FL still struggles with user
heterogeneity in local data distributions in real-world scenarios, and this issue
becomes even more severe in multi-label image classification. Inspired by the
recent success of Transformers in centralized settings, we propose a novel FL
framework for multi-label classification. Since each local client may observe
only partial label correlations during training, directly aggregating the locally
updated models would not yield satisfactory performance. We therefore propose
Language-Guided Transformer (FedLGT), a novel FL framework that tackles this
challenging task by exploiting and transferring knowledge across different
clients to learn a robust global model. Through extensive experiments on various
multi-label datasets (e.g., FLAIR and MS-COCO), we show that FedLGT achieves
satisfactory performance and outperforms standard FL techniques in multi-label
FL scenarios. Code is available at https://github.com/Jack24658735/FedLGT.
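The abstract does not spell out implementation details, but the "standard FL techniques" it compares against typically mean FedAvg-style weight averaging on top of a per-label sigmoid classifier. The sketch below illustrates that baseline, i.e., the "direct aggregation of locally updated models" the abstract argues is insufficient; it is not FedLGT itself, and the model, feature dimensions, and data loaders are all illustrative assumptions.

```python
# Minimal FedAvg baseline for multi-label classification (a hedged sketch of
# the "standard FL" baseline, NOT the FedLGT method; dims/model are assumed).
import copy
import torch
import torch.nn as nn

NUM_LABELS, FEAT_DIM = 80, 512  # e.g., MS-COCO has 80 labels; FEAT_DIM assumed

class MultiLabelHead(nn.Module):
    """Per-label logits over precomputed image features."""
    def __init__(self, feat_dim=FEAT_DIM, num_labels=NUM_LABELS):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_labels)

    def forward(self, feats):           # feats: (B, FEAT_DIM)
        return self.fc(feats)           # logits: (B, NUM_LABELS)

def local_update(global_model, loader, epochs=1, lr=1e-3):
    """One client's local training pass; returns its updated weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()        # per-label binary cross-entropy
    for _ in range(epochs):
        for feats, targets in loader:   # targets: multi-hot (B, NUM_LABELS)
            opt.zero_grad()
            loss = bce(model(feats), targets.float())
            loss.backward()
            opt.step()
    return model.state_dict()

def fedavg(client_states):
    """Direct aggregation: a plain unweighted average of client weights."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    return avg
```

Per the abstract, FedLGT departs from this baseline by using language guidance to exploit and transfer label knowledge across clients whose local data reveal only partial label correlations; the actual components are in the released code at the GitHub link above.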
Related papers
- Recovering Global Data Distribution Locally in Federated Learning [7.885010255812708] (2024-09-21)
  Federated Learning (FL) is a distributed machine learning paradigm that enables collaboration among multiple clients.
  A major challenge in FL is label imbalance, where clients may exclusively possess certain classes while having numerous minority and missing classes.
  We propose a novel approach, ReGL, whose key idea is to Recover the Global data distribution Locally.
- FLea: Addressing Data Scarcity and Label Skew in Federated Learning via Privacy-preserving Feature Augmentation [15.298650496155508] (2024-06-13)
  Federated Learning (FL) enables model development by leveraging data distributed across numerous edge devices without transferring local data to a central server.
  Existing FL methods face challenges when dealing with scarce and label-skewed data across devices, resulting in local model overfitting and drift.
  We propose a pioneering framework called FLea that incorporates several key components to address these issues.
- Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning [49.72857433721424] (2023-10-27)
  Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
  We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of shared and group-specific prompts.
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441] (2023-08-20)
  Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
  We find that the difference in logits between the local and global models increases as the model is continuously updated.
  We propose a new algorithm, FedCSD, which performs class prototype similarity distillation in a federated framework to align the local and global models (a minimal sketch of this distillation idea follows the list below).
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786] (2022-10-31)
  Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
  We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
- Addressing Heterogeneity in Federated Learning via Distributional Transformation [37.99565338024758] (2022-10-26)
  Federated learning (FL) allows multiple clients to collaboratively train a deep learning model.
  One major challenge of FL is heterogeneous data distribution, i.e., the distribution differs from one client to another.
  We propose a novel framework, DisTrans, that improves FL performance (i.e., model accuracy) via train- and test-time distributional transformations.
- Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients [98.22390453672499] (2022-04-07)
  Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data.
  We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients.
- Multi-Source Domain Adaptation Based on Federated Knowledge Alignment [0.0] (2022-03-22)
  Federated Learning (FL) facilitates distributed model learning to protect users' privacy.
  We propose Federated Knowledge Alignment (FedKA), which aligns features from different clients with those of the target task.
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183] (2022-03-18)
  Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate on learning without sharing raw data.
  We propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
- Multi-Center Federated Learning [62.32725938999433] (2021-08-19)
  Federated learning (FL) can protect data privacy in distributed learning.
  It merely collects local gradients from users without access to their data.
  We propose a novel multi-center aggregation mechanism.
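Several of the papers above mitigate client drift by constraining local updates with the frozen global model's predictions; the FedCSD entry is one example. Below is a minimal, generic logit-distillation sketch of that idea. The function name, temperature, and weighting are illustrative assumptions, not FedCSD's actual recipe (which distills via class prototype similarity).

```python
# Generic global-to-local logit distillation to curb client drift (a sketch of
# the idea behind entries like FedCSD; all names and values here are assumed).
import torch
import torch.nn.functional as F

def drift_regularized_loss(local_logits, global_logits, targets,
                           alpha=0.5, temperature=2.0):
    """Task loss plus a KL term pulling the local model's predictions toward
    the frozen global model's softened predictions (standard distillation)."""
    task_loss = F.cross_entropy(local_logits, targets)
    kd_loss = F.kl_div(
        F.log_softmax(local_logits / temperature, dim=1),
        F.softmax(global_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return task_loss + alpha * kd_loss

# Example usage with random tensors (8 samples, 10 classes):
local = torch.randn(8, 10, requires_grad=True)
global_ = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
drift_regularized_loss(local, global_, y).backward()
```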
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.