Federated Learning with Only Positive Labels
- URL: http://arxiv.org/abs/2004.10342v1
- Date: Tue, 21 Apr 2020 23:35:02 GMT
- Title: Federated Learning with Only Positive Labels
- Authors: Felix X. Yu, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar
- Abstract summary: We propose a generic framework for training with only positive labels, namely Federated Averaging with Spreadout (FedAwS).
We show, both theoretically and empirically, that FedAwS can almost match the performance of conventional learning where users have access to negative labels.
- Score: 71.63836379169315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider learning a multi-class classification model in the federated setting, where each user has access to the positive data associated with only a single class. As a result, during each federated learning round, the users need to locally update the classifier without having access to the features and the model parameters of the negative classes. Thus, naively employing conventional decentralized learning methods such as distributed SGD or Federated Averaging may lead to trivial or extremely poor classifiers. In particular, for embedding-based classifiers, all the class embeddings might collapse to a single point. To address this problem, we propose a generic framework for training with only positive labels, namely Federated Averaging with Spreadout (FedAwS), where the server imposes a geometric regularizer after each round to encourage classes to be spread out in the embedding space. We show, both theoretically and empirically, that FedAwS can almost match the performance of conventional learning where users have access to negative labels. We further extend the proposed method to settings with large output spaces.
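The server-side spreadout step described in the abstract can be pictured with a short sketch. The hinge-on-pairwise-distance penalty below, and the margin `nu`, server step size, and number of server steps, are illustrative assumptions rather than the paper's exact formulation; the point is only that, after averaging, the server takes a few gradient steps that push apart class embeddings sitting closer than a margin.

```python
# Minimal sketch of a FedAwS-style server-side "spreadout" step, assuming a
# hinge penalty on pairwise class-embedding distances with margin `nu`.
# The exact regularizer, margin, and schedule used in the paper may differ.
import torch

def spreadout_penalty(class_emb: torch.Tensor, nu: float = 0.9) -> torch.Tensor:
    """Penalize every pair of class embeddings that is closer than `nu`."""
    pair_dists = torch.pdist(class_emb)                # distances over distinct class pairs
    return torch.clamp(nu - pair_dists, min=0.0).pow(2).sum()

def server_spreadout_step(class_emb: torch.Tensor, nu: float = 0.9,
                          lr: float = 0.1, num_steps: int = 5) -> torch.Tensor:
    """After aggregating client updates, spread the class embeddings apart."""
    emb = class_emb.clone().requires_grad_(True)
    opt = torch.optim.SGD([emb], lr=lr)
    for _ in range(num_steps):
        opt.zero_grad()
        spreadout_penalty(emb, nu).backward()
        opt.step()
    return emb.detach()

# Usage: rows of W are per-class embeddings aggregated from the clients.
W = torch.nn.functional.normalize(torch.randn(100, 64), dim=1)   # 100 classes, 64-dim
W = server_spreadout_step(W, nu=0.9, lr=0.1, num_steps=5)
```

In the full algorithm this step would follow the usual Federated Averaging aggregation each round, so clients never need access to negative-class features or parameters.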
Related papers
- Federated Learning with Only Positive Labels by Exploring Label Correlations [78.59613150221597]
Federated learning aims to collaboratively learn a model by using the data from multiple users under privacy constraints.
In this paper, we study the multi-label classification problem under the federated learning setting.
We propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC).
arXiv Detail & Related papers (2024-04-24T02:22:50Z)
- Not all Minorities are Equal: Empty-Class-Aware Distillation for Heterogeneous Federated Learning [120.42853706967188]
FedED integrates empty-class distillation and logit suppression simultaneously.
It addresses misclassifications of minority classes, whose predictions may otherwise be biased toward majority classes.
arXiv Detail & Related papers (2024-01-04T16:06:31Z)
- DiGeo: Discriminative Geometry-Aware Learning for Generalized Few-Shot Object Detection [39.937724871284665]
Generalized few-shot object detection aims to achieve precise detection on both base classes with abundant annotations and novel classes with limited training data.
Existing approaches enhance few-shot generalization at the expense of base-class performance.
We propose a new training framework, DiGeo, to learn geometry-aware features with inter-class separation and intra-class compactness.
arXiv Detail & Related papers (2023-03-16T22:37:09Z)
- Evolving Multi-Label Fuzzy Classifier [5.53329677986653]
Multi-label classification has attracted much attention in the machine learning community to address the problem of assigning a single sample to more than one class at the same time.
We propose an evolving multi-label fuzzy classifier (EFC-ML) which is able to self-adapt and self-evolve its structure with new incoming multi-label samples in an incremental, single-pass manner.
arXiv Detail & Related papers (2022-03-29T08:01:03Z)
- Towards Unbiased Multi-label Zero-Shot Learning with Pyramid and Semantic Attention [14.855116554722489]
Multi-label zero-shot learning aims at recognizing multiple unseen labels of classes for each input sample.
We propose a novel framework for unbiased multi-label zero-shot learning by considering various class-specific regions.
arXiv Detail & Related papers (2022-03-07T15:52:46Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network (see the sketch after this list).
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements over the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Zero-Shot Federated Learning with New Classes for Audio Classification [0.7106986689736827]
Federated learning is an effective way of extracting insights from different user devices.
New classes with completely unseen data distributions can stream across any device in a federated learning setting.
We propose a unified zero-shot framework to handle the aforementioned challenges during federated learning.
arXiv Detail & Related papers (2021-06-18T09:32:19Z)
- PLM: Partial Label Masking for Imbalanced Multi-label Classification [59.68444804243782]
Neural networks trained on real-world datasets with long-tailed label distributions are biased towards frequent classes and perform poorly on infrequent classes.
We propose a method, Partial Label Masking (PLM), which utilizes the ratio of positive to negative labels for each class during training.
Our method achieves strong performance when compared to existing methods on both multi-label (MultiMNIST and MSCOCO) and single-label (imbalanced CIFAR-10 and CIFAR-100) image classification datasets.
arXiv Detail & Related papers (2021-05-22T18:07:56Z)
- CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition [52.66360172784038]
We propose a clustering-based model which considers all training samples at once, instead of optimizing for each instance individually.
We call the proposed method CLASTER and observe that it consistently improves over the state of the art on all standard datasets.
arXiv Detail & Related papers (2021-01-18T12:46:24Z)
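The Prototypical Classifier entry above describes a head that needs no extra fitted parameters on top of the embedding network. A minimal sketch of that prototype idea, assuming class scores come from cosine similarity to per-class mean embeddings (function and variable names here are illustrative, not the paper's API):

```python
# Hedged sketch of a prototype-style classifier over a frozen embedding
# network: no classifier weights are trained; each class is represented by
# the (normalized) mean embedding of its training samples.
import torch

def build_prototypes(embeddings: torch.Tensor, labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Average, then L2-normalize, the embeddings of each class."""
    protos = torch.zeros(num_classes, embeddings.size(1))
    for c in range(num_classes):
        protos[c] = embeddings[labels == c].mean(dim=0)
    return torch.nn.functional.normalize(protos, dim=1)

def predict(query_emb: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each query to the class whose prototype is most similar."""
    query_emb = torch.nn.functional.normalize(query_emb, dim=1)
    return (query_emb @ protos.t()).argmax(dim=1)   # cosine similarity, then argmax

# Usage with random stand-in embeddings (10 classes, 64-dim):
emb = torch.randn(500, 64)
lab = torch.randint(0, 10, (500,))
prototypes = build_prototypes(emb, lab, num_classes=10)
preds = predict(torch.randn(20, 64), prototypes)
```

Because the prototypes are class means rather than learned weights, the head is not skewed by class frequencies, which is the balanced-prediction behavior that entry summarizes.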
This list is automatically generated from the titles and abstracts of the papers on this site.