Exploring Vacant Classes in Label-Skewed Federated Learning
- URL: http://arxiv.org/abs/2401.02329v3
- Date: Mon, 16 Dec 2024 15:42:53 GMT
- Title: Exploring Vacant Classes in Label-Skewed Federated Learning
- Authors: Kuangpu Guo, Yuhe Ding, Jian Liang, Ran He, Zilei Wang, Tieniu Tan
- Abstract summary: This paper introduces FedVLS, a novel approach to label-skewed federated learning.
It integrates vacant-class distillation and logit suppression simultaneously.
Experiments validate the efficacy of FedVLS, demonstrating superior performance compared to previous state-of-the-art (SOTA) methods.
- Score: 113.65301899666645
- Abstract: Label skews, characterized by disparities in local label distributions across clients, pose a significant challenge in federated learning. Because minority classes suffer worse accuracy due to overfitting on locally imbalanced data, prior methods often incorporate class-balanced learning techniques during local training. Although these methods improve mean accuracy across all classes, we observe that vacant classes (categories absent from a client's data distribution) remain poorly recognized. Moreover, local models still trail the global model in accuracy on minority classes. This paper introduces FedVLS, a novel approach to label-skewed federated learning that integrates vacant-class distillation and logit suppression. Specifically, vacant-class distillation leverages knowledge distillation during local training on each client to retain essential information about vacant classes from the global model. Logit suppression directly penalizes network logits for non-label classes, addressing misclassifications in minority classes that may be biased toward majority classes. Extensive experiments validate the efficacy of FedVLS, demonstrating superior performance over previous state-of-the-art (SOTA) methods across diverse datasets with varying degrees of label skew. Our code is available at https://github.com/krumpguo/FedVLS.
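The two loss terms can be pictured with a minimal PyTorch sketch. This is an illustration assembled from the abstract's description, not the paper's code: the function names, the KL form of the distillation, and the squared penalty in `logit_suppression` are all assumptions.

```python
import torch
import torch.nn.functional as F

def vacant_class_distillation(local_logits, global_logits, vacant_mask, T=2.0):
    """Distill the global model's outputs on classes vacant at this client.

    vacant_mask: bool tensor [num_classes], True for classes absent from
    the client's data. Restricting the soft targets to vacant classes is
    an assumption about FedVLS's exact formulation.
    """
    p_global = F.softmax(global_logits / T, dim=1)[:, vacant_mask]
    log_p_local = F.log_softmax(local_logits / T, dim=1)[:, vacant_mask]
    return -(p_global * log_p_local).sum(dim=1).mean() * (T * T)

def logit_suppression(logits, labels):
    """Penalize logits of every class other than the ground-truth label."""
    non_label = torch.ones_like(logits, dtype=torch.bool)
    non_label[torch.arange(logits.size(0)), labels] = False
    return logits[non_label].pow(2).mean()  # squared penalty is assumed

def local_objective(logits, global_logits, labels, vacant_mask,
                    lam_kd=1.0, lam_sup=1.0):
    """Cross-entropy plus the two FedVLS-style terms; weights are illustrative."""
    ce = F.cross_entropy(logits, labels)
    kd = vacant_class_distillation(logits, global_logits, vacant_mask)
    sup = logit_suppression(logits, labels)
    return ce + lam_kd * kd + lam_sup * sup
```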
Related papers
- Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition [50.61991746981703]
Current state-of-the-art LTSSL approaches rely on high-quality pseudo-labels for large-scale unlabeled data.
This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning.
We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels.
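As a rough illustration of the "reliable and smoothed pseudo-labels" mentioned above (not CCL's actual formula), one can blend the model's predicted distribution with a class prior and keep only confident predictions:

```python
import torch
import torch.nn.functional as F

def smoothed_pseudo_labels(logits, class_prior=None, alpha=0.9, threshold=0.95):
    """Illustrative pseudo-label smoothing; CCL's exact scheme may differ.

    Confident predictions are kept (reliability), and targets are
    interpolated toward a class prior (smoothing), a common recipe in
    long-tailed semi-supervised learning.
    """
    probs = F.softmax(logits, dim=1)
    confidence, _ = probs.max(dim=1)
    keep = confidence >= threshold                    # reliability filter
    if class_prior is None:                           # default: uniform prior
        class_prior = torch.full_like(probs, 1.0 / probs.size(1))
    targets = alpha * probs + (1.0 - alpha) * class_prior
    return targets[keep], keep
```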
arXiv Detail & Related papers (2024-10-08T15:06:10Z)
- Recovering Global Data Distribution Locally in Federated Learning [7.885010255812708]
Federated Learning (FL) is a distributed machine learning paradigm that enables collaboration among multiple clients.
A major challenge in FL is label imbalance, where a client may possess only certain classes, leaving many others as minority or entirely missing classes.
We propose a novel approach ReGL to address this challenge, whose key idea is to Recover the Global data distribution Locally.
arXiv Detail & Related papers (2024-09-21T08:35:04Z)
- Federated Skewed Label Learning with Logits Fusion [23.062650578266837]
Federated learning (FL) aims to collaboratively train a shared model across multiple clients without transmitting their local data.
We propose FedBalance, which corrects the optimization bias among local models by calibrating their logits.
Our method achieves 13% higher average accuracy than state-of-the-art methods.
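The blurb does not spell out the calibration, so the sketch below substitutes the generic logit-adjustment trick of subtracting scaled log class priors; it illustrates what "calibrating logits" against local imbalance can look like, not FedBalance's actual fusion rule.

```python
import torch

def calibrate_logits(logits, local_label_counts, tau=1.0):
    """Debias local logits with the client's empirical class prior.

    Subtracting tau * log(prior) counteracts the head-class bias that
    imbalanced local data induces. This is standard logit adjustment,
    used here purely as a stand-in for FedBalance's calibration.
    """
    prior = local_label_counts.float() / local_label_counts.sum()
    prior = prior.clamp_min(1e-8)            # guard against vacant classes
    return logits - tau * torch.log(prior)   # broadcasts over the batch
```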
arXiv Detail & Related papers (2023-11-14T14:37:33Z)
- Addressing Class Variable Imbalance in Federated Semi-supervised Learning [10.542178602467885]
We propose Federated Semi-supervised Learning for Class Variable Imbalance (FCVI) to solve class variable imbalance.
FCVI mitigates the data imbalance caused by changes in the number of classes.
Our scheme proves significantly better than baseline methods while maintaining client privacy.
arXiv Detail & Related papers (2023-03-21T12:50:17Z)
- On Non-Random Missing Labels in Semi-Supervised Learning [114.62655062520425]
Semi-Supervised Learning (SSL) is fundamentally a missing label problem.
We explicitly incorporate "class" into SSL.
Our method not only significantly outperforms existing baselines but also surpasses other SSL methods designed to remove label bias.
arXiv Detail & Related papers (2022-06-29T22:01:29Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT, and WebVision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
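In the spirit of the summary, a nearest-prototype classifier scores each sample by similarity to class-mean embeddings, so nothing beyond the embedding network is trained. The cosine-similarity choice below is an assumption, not necessarily the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def class_prototypes(embeddings, labels, num_classes):
    """Mean embedding per class; the prototypes themselves act as the classifier."""
    protos = torch.zeros(num_classes, embeddings.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(dim=0)
    return F.normalize(protos, dim=1)

def prototype_logits(embeddings, protos):
    """Cosine similarity to each prototype serves as the class score."""
    return F.normalize(embeddings, dim=1) @ protos.t()
```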
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- PLM: Partial Label Masking for Imbalanced Multi-label Classification [59.68444804243782]
Neural networks trained on real-world datasets with long-tailed label distributions are biased towards frequent classes and perform poorly on infrequent classes.
We propose a method, Partial Label Masking (PLM), which utilizes the ratio between positive and negative labels of each class during training.
Our method achieves strong performance when compared to existing methods on both multi-label (MultiMNIST and MSCOCO) and single-label (imbalanced CIFAR-10 and CIFAR-100) image classification datasets.
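A hedged sketch of the masking idea: during the multi-label loss, positive terms of over-frequent classes are randomly dropped so the positive-to-negative ratio moves toward balance. The keep-probability schedule and the exact masking form are assumptions, not PLM's published procedure.

```python
import torch
import torch.nn.functional as F

def plm_style_loss(logits, targets, keep_prob):
    """Binary cross-entropy with stochastic per-class masking of positives.

    targets: multi-hot float tensor [batch, num_classes].
    keep_prob: per-class probability [num_classes] of keeping a positive
    term, which PLM derives from each class's positive/negative ratio.
    """
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    mask = torch.ones_like(loss)
    positives = targets > 0.5
    dropped = torch.rand_like(loss) > keep_prob.unsqueeze(0)
    mask[positives & dropped] = 0.0              # mask some positive terms
    return (loss * mask).sum() / mask.sum().clamp_min(1.0)
```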
arXiv Detail & Related papers (2021-05-22T18:07:56Z)
- Federated Learning with Only Positive Labels [71.63836379169315]
We propose a generic framework for training with only positive labels, namely Federated Averaging with Spreadout (FedAwS).
We show, both theoretically and empirically, that FedAwS can almost match the performance of conventional learning where users have access to negative labels.
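FedAwS's server-side spreadout step keeps class embeddings from collapsing when clients only ever see positive labels. A minimal sketch of such a pairwise-separation penalty follows; the hinge form and margin value are common choices, not necessarily the paper's exact regularizer.

```python
import torch
import torch.nn.functional as F

def spreadout_regularizer(class_embeddings, margin=1.0):
    """Penalize pairs of class embeddings that sit closer than a margin.

    class_embeddings: [num_classes, dim], updated on the server between
    federated averaging rounds in the FedAwS setup.
    """
    w = F.normalize(class_embeddings, dim=1)
    dists = torch.cdist(w, w)                            # pairwise distances
    off_diag = ~torch.eye(w.size(0), dtype=torch.bool)   # ignore self-pairs
    violation = (margin - dists[off_diag]).clamp_min(0.0)
    return violation.pow(2).mean()
```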
arXiv Detail & Related papers (2020-04-21T23:35:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.