FedAnchor: Enhancing Federated Semi-Supervised Learning with Label
Contrastive Loss for Unlabeled Clients
- URL: http://arxiv.org/abs/2402.10191v1
- Date: Thu, 15 Feb 2024 18:48:21 GMT
- Title: FedAnchor: Enhancing Federated Semi-Supervised Learning with Label
Contrastive Loss for Unlabeled Clients
- Authors: Xinchi Qiu, Yan Gao, Lorenzo Sani, Heng Pan, Wanru Zhao, Pedro P. B.
Gusmao, Mina Alibeigi, Alex Iacob, Nicholas D. Lane
- Abstract summary: Federated learning (FL) is a distributed learning paradigm that facilitates collaborative training of a shared global model across devices.
We propose FedAnchor, an innovative FSSL method that introduces a unique double-head structure, called anchor head, paired with the classification head trained exclusively on labeled anchor data on the server.
Our approach mitigates the confirmation bias and overfitting issues associated with pseudo-labeling techniques based on high-confidence model prediction samples.
- Score: 19.3885479917635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a distributed learning paradigm that facilitates
collaborative training of a shared global model across devices while keeping
data localized. The deployment of FL in numerous real-world applications faces
delays, primarily due to the prevalent reliance on supervised tasks. Generating
detailed labels at edge devices, if feasible, is demanding, given resource
constraints and the imperative for continuous data updates. In addressing these
challenges, solutions such as federated semi-supervised learning (FSSL), which
relies on unlabeled clients' data and a limited amount of labeled data on the
server, become pivotal. In this paper, we propose FedAnchor, an innovative FSSL
method that introduces a unique double-head structure, called anchor head,
paired with the classification head trained exclusively on labeled anchor data
on the server. The anchor head is empowered with a newly designed label
contrastive loss based on the cosine similarity metric. Our approach mitigates
the confirmation bias and overfitting issues associated with pseudo-labeling
techniques based on high-confidence model prediction samples. Extensive
experiments on CIFAR10/100 and SVHN datasets demonstrate that our method
outperforms the state-of-the-art method by a significant margin in terms of
convergence rate and model accuracy.
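The label contrastive loss is not spelled out in this listing, so below is a minimal PyTorch sketch of one form consistent with the description: anchor-head embeddings are compared by cosine similarity, and pairs sharing a label are pulled together while all other pairs are pushed apart. The function name, temperature, and batch conventions are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def label_contrastive_loss(embeddings: torch.Tensor,
                           labels: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over cosine similarities (a sketch).

    embeddings: (N, D) anchor-head outputs for labeled anchor data.
    labels:     (N,) integer class labels.
    """
    z = F.normalize(embeddings, dim=1)            # unit norm: dot = cosine
    sim = z @ z.t() / temperature                 # (N, N) scaled similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # drop self-pairs
    # Positive pairs share a label (excluding the sample itself).
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Mean log-likelihood of same-label pairs, per sample.
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()       # skip samples w/o positives
```

Per the description above, this loss would be applied on the server to anchor-head outputs for the labeled anchor data, alongside the usual cross-entropy on the classification head.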
Related papers
- Overcoming label shift in targeted federated learning [8.223143536605248]
Federated learning enables multiple actors to collaboratively train models without sharing private data.
A common violation of the usual i.i.d. assumption is label shift, where the label distributions differ across clients or between clients and the target domain.
We propose FedPALS, a novel model aggregation scheme that adapts to label shifts by leveraging knowledge of the target label distribution at the central server.
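The blurb names the idea but not the aggregation rule. As a hedged sketch (the solver and all identifiers are assumptions, not FedPALS's published algorithm), one way to adapt aggregation to label shift is to choose client weights on the simplex so that the weighted mixture of client label distributions approximates the target label distribution known at the server:

```python
import numpy as np

def label_shift_weights(client_dists: np.ndarray,
                        target_dist: np.ndarray,
                        steps: int = 500,
                        lr: float = 0.5) -> np.ndarray:
    """Pick simplex weights w minimizing ||client_dists.T @ w - target||^2.

    client_dists: (K, C) per-client label distributions (rows sum to 1).
    target_dist:  (C,) target label distribution known at the server.
    """
    K = client_dists.shape[0]
    w = np.full(K, 1.0 / K)                  # start from uniform weights
    A = client_dists.T                       # (C, K)
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ w - target_dist)
        w = w * np.exp(-lr * grad)           # exponentiated-gradient step
        w /= w.sum()                         # renormalize onto the simplex
    return w

# Server-side aggregation would then be a weighted average, e.g.:
# global_update = sum(w[k] * client_updates[k] for k in range(K))
```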
arXiv Detail & Related papers (2024-11-06T09:52:45Z)
- Optimizing Federated Learning by Entropy-Based Client Selection [13.851391819710367]
Deep learning domains typically require an extensive amount of data for optimal performance.
FedOptEnt is designed to mitigate performance issues caused by label distribution skew.
The proposed method outperforms several state-of-the-art algorithms by up to 6% in classification accuracy.
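The selection criterion is only named here; below is a minimal sketch of one plausible reading, where the server favors clients whose local label distributions have high Shannon entropy (i.e., are closest to balanced). The top-k rule and all identifiers are illustrative assumptions, not FedOptEnt's exact procedure.

```python
import numpy as np

def entropy(p: np.ndarray, eps: float = 1e-12) -> float:
    """Shannon entropy of a label distribution p (entries sum to 1)."""
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log(p)).sum())

def select_clients_by_entropy(client_label_dists: list[np.ndarray],
                              num_selected: int) -> list[int]:
    """Pick the clients whose local label distributions are most balanced."""
    scores = [entropy(p) for p in client_label_dists]
    return sorted(np.argsort(scores)[-num_selected:].tolist())
```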
arXiv Detail & Related papers (2024-11-02T13:31:36Z)
- (FL)$^2$: Overcoming Few Labels in Federated Semi-Supervised Learning [4.803231218533992]
Federated Learning (FL) is a distributed machine learning framework that trains accurate global models while preserving clients' privacy-sensitive data.
Most FL approaches assume that clients possess labeled data, which is often not the case in practice.
We propose $(FL)^2$, a robust training method for unlabeled clients using sharpness-aware consistency regularization.
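The summary gives only the ingredients; below is a hedged sketch of what a sharpness-aware consistency step on an unlabeled client could look like, combining FixMatch-style pseudo-labeling from a weak view with a SAM-style weight perturbation before the consistency gradient is taken. `rho`, the confidence threshold, and the weak/strong augmentation setup are assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def sharpness_aware_consistency_step(model, optimizer, x_weak, x_strong,
                                     rho: float = 0.05,
                                     threshold: float = 0.95):
    """One SAM-style update on unlabeled data: pseudo-labels come from the
    weak view; the consistency loss on the strong view is evaluated at an
    adversarially perturbed weight point before the real update."""
    with torch.no_grad():                       # pseudo-labels, no grad
        probs = F.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf.ge(threshold).float()       # keep confident samples only

    def consistency_loss():
        ce = F.cross_entropy(model(x_strong), pseudo, reduction='none')
        return (ce * mask).sum() / mask.sum().clamp(min=1)

    # First pass: gradient at the current weights.
    loss = consistency_loss()
    optimizer.zero_grad()
    loss.backward()
    # Climb to the local worst case: w <- w + rho * g / ||g||.
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        g_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
        eps = []
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / g_norm
            p.add_(e)
            eps.append(e)
    # Second pass: the gradient at the perturbed weights drives the update.
    optimizer.zero_grad()
    consistency_loss().backward()
    with torch.no_grad():                       # undo the perturbation
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
```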
arXiv Detail & Related papers (2024-10-30T17:15:02Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Federated Learning with Instance-Dependent Noisy Label [6.093214616626228]
FedBeat aims to build a global statistically consistent classifier using the instance-dependent noise (IDN) transition matrix (IDNTM).
Experiments conducted on CIFAR-10 and SVHN verify that the proposed method significantly outperforms state-of-the-art methods.
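The mechanics of the IDNTM are not described in this blurb; as a hedged illustration, a standard way to use a noise transition matrix is "forward correction": the classifier's clean-label posterior is pushed through an instance-dependent transition matrix T(x) (e.g., predicted by a small auxiliary network) before being matched against the observed noisy label. Shapes and names below are illustrative.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(clean_logits: torch.Tensor,
                           transition: torch.Tensor,
                           noisy_labels: torch.Tensor) -> torch.Tensor:
    """Forward correction with an instance-dependent transition matrix.

    clean_logits: (N, C) classifier outputs for the latent clean label.
    transition:   (N, C, C) with T[i, y, :] = p(noisy = . | clean = y, x_i),
                  e.g. produced by a small transition network from features.
    noisy_labels: (N,) observed (noisy) labels.
    """
    clean_post = F.softmax(clean_logits, dim=1)           # p(clean | x)
    noisy_post = torch.bmm(clean_post.unsqueeze(1),       # (N, 1, C)
                           transition).squeeze(1)         # -> p(noisy | x)
    return F.nll_loss(torch.log(noisy_post.clamp(min=1e-12)), noisy_labels)
```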
arXiv Detail & Related papers (2023-12-16T05:08:02Z)
- Navigating Data Heterogeneity in Federated Learning: A Semi-Supervised Federated Object Detection [3.7398615061365206]
Federated Learning (FL) has emerged as a potent framework for training models across distributed data sources.
It faces challenges with limited high-quality labels and non-IID client data, particularly in applications like autonomous driving.
We present a pioneering SSFOD framework, designed for scenarios where labeled data reside only at the server while clients possess unlabeled data.
arXiv Detail & Related papers (2023-10-26T01:40:28Z)
- ProtoCon: Pseudo-label Refinement via Online Clustering and Prototypical Consistency for Efficient Semi-supervised Learning [60.57998388590556]
ProtoCon is a novel method for confidence-based pseudo-labeling.
The online nature of ProtoCon allows it to utilise the label history of the entire dataset in one training cycle.
It delivers significant gains and faster convergence over the state of the art on standard benchmark datasets.
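Below is a minimal sketch of prototype-based pseudo-label refinement in the spirit of the description; the blend weight, temperature, and prototype bookkeeping are assumptions rather than ProtoCon's exact recipe.

```python
import torch
import torch.nn.functional as F

def refine_pseudo_labels(logits: torch.Tensor,
                         embeddings: torch.Tensor,
                         prototypes: torch.Tensor,
                         alpha: float = 0.5,
                         temperature: float = 0.1) -> torch.Tensor:
    """Blend classifier confidence with prototype consistency.

    logits:      (N, C) classifier head outputs.
    embeddings:  (N, D) projection-head features.
    prototypes:  (C, D) running class prototypes from online clustering.
    Returns refined (N, C) soft pseudo-labels.
    """
    p_cls = F.softmax(logits, dim=1)                      # classifier view
    z = F.normalize(embeddings, dim=1)
    protos = F.normalize(prototypes, dim=1)
    p_proto = F.softmax(z @ protos.t() / temperature, dim=1)  # cluster view
    return alpha * p_cls + (1.0 - alpha) * p_proto
```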
arXiv Detail & Related papers (2023-03-22T23:51:54Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem; in fact, it can be beneficial for FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Self-Tuning for Data-Efficient Deep Learning [75.34320911480008]
Self-Tuning is a novel approach to enable data-efficient deep learning.
It unifies the exploration of labeled and unlabeled data and the transfer of a pre-trained model.
It outperforms its SSL and TL counterparts on five tasks by sharp margins.
arXiv Detail & Related papers (2021-02-25T14:56:19Z)
- Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle these problems, which we refer to as Federated Matching (FedMatch).
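One plausible reading of inter-client consistency, sketched below with illustrative names: each client regularizes its local model toward agreement with a set of helper models shipped by the server, here via KL divergence on the same unlabeled batch.

```python
import torch
import torch.nn.functional as F

def inter_client_consistency_loss(local_logits: torch.Tensor,
                                  helper_logits: list[torch.Tensor]
                                  ) -> torch.Tensor:
    """Penalize disagreement between the local model and helper models
    on the same unlabeled batch (all logits are (N, C))."""
    local_logp = F.log_softmax(local_logits, dim=1)
    loss = local_logits.new_zeros(())
    for h in helper_logits:
        with torch.no_grad():                 # helpers act as fixed targets
            target = F.softmax(h, dim=1)
        loss = loss + F.kl_div(local_logp, target, reduction='batchmean')
    return loss / max(len(helper_logits), 1)
```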
arXiv Detail & Related papers (2020-06-22T09:43:41Z)