Federated Learning with Only Positive Labels by Exploring Label Correlations
- URL: http://arxiv.org/abs/2404.15598v1
- Date: Wed, 24 Apr 2024 02:22:50 GMT
- Title: Federated Learning with Only Positive Labels by Exploring Label Correlations
- Authors: Xuming An, Dui Wang, Li Shen, Yong Luo, Han Hu, Bo Du, Yonggang Wen, Dacheng Tao
- Abstract summary: Federated learning aims to collaboratively learn a model by using the data from multiple users under privacy constraints.
In this paper, we study the multi-label classification problem under the federated learning setting.
We propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC).
- Score: 78.59613150221597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning aims to collaboratively learn a model using the data from multiple users under privacy constraints. In this paper, we study the multi-label classification problem under the federated learning setting, where a trivial solution and extremely poor performance may be obtained, especially when only positive data w.r.t. a single class label are provided for each client. This issue can be addressed by adding a specially designed regularizer on the server side. Although sometimes effective, such a regularizer simply ignores label correlations and may therefore yield sub-optimal performance. Besides, it is expensive and unsafe to frequently exchange users' private embeddings between the server and clients, especially when the model is trained in a contrastive way. To remedy these drawbacks, we propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC). Specifically, FedALC estimates the correlations between different label pairs during class embedding learning and utilizes them to improve the model training. To further improve safety and reduce the communication overhead, we propose a variant that learns a fixed class embedding for each client, so that the server and clients only need to exchange class embeddings once. Extensive experiments on multiple popular datasets demonstrate that FedALC can significantly outperform existing counterparts.
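The abstract gives the idea but no code; below is a minimal NumPy sketch of one plausible reading of FedALC's core ingredient: a spreadout-style penalty on class embeddings in which the repulsion between two classes is down-weighted by their estimated label correlation, so correlated labels are allowed to stay close in embedding space. The function name, the margin, and the exact form of the penalty are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def correlation_weighted_spreadout(class_emb, label_corr, margin=1.0):
    """Correlation-weighted spreadout penalty (illustrative, not the
    paper's exact formulation).

    class_emb : (C, d) matrix of class embeddings.
    label_corr: (C, C) estimated label correlations in [0, 1].

    Pairs of classes closer than `margin` are penalized, but the
    penalty is scaled by (1 - correlation): strongly correlated label
    pairs are permitted to have nearby embeddings.
    """
    C = class_emb.shape[0]
    penalty = 0.0
    for i in range(C):
        for j in range(i + 1, C):
            dist = np.linalg.norm(class_emb[i] - class_emb[j])
            weight = 1.0 - label_corr[i, j]
            penalty += weight * max(0.0, margin - dist) ** 2
    return penalty / (C * (C - 1) / 2)

# Toy usage: three classes where classes 0 and 1 are highly correlated,
# so their embeddings are pushed apart only weakly.
emb = np.random.default_rng(0).normal(size=(3, 8))
corr = np.array([[1.0, 0.9, 0.1],
                 [0.9, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
print(correlation_weighted_spreadout(emb, corr))
```

Under this reading, the fixed-embedding variant would compute class embeddings once and exchange them with clients a single time, rather than shipping private embeddings back and forth every round.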
Related papers
- Overcoming label shift in targeted federated learning [8.223143536605248]
Federated learning enables multiple actors to collaboratively train models without sharing private data.
A common complication is label shift, where the label distributions differ across clients or between the clients and the target domain.
We propose FedPALS, a novel model aggregation scheme that adapts to label shifts by leveraging knowledge of the target label distribution at the central server.
arXiv Detail & Related papers (2024-11-06T09:52:45Z) - Federated Learning with Label-Masking Distillation [33.80340338038264]
- Federated Learning with Label-Masking Distillation [33.80340338038264]
Federated learning provides a privacy-preserving way to collaboratively train models on data distributed over multiple local clients.
Because user behavior differs from client to client, the label distributions of different clients can differ significantly.
We propose a label-masking distillation approach, termed FedLMD, that facilitates federated learning by taking the distinct label distribution of each client into account.
arXiv Detail & Related papers (2024-09-20T00:46:04Z) - Client-specific Property Inference against Secure Aggregation in
Federated Learning [52.8564467292226]
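FedLMD's summary mentions masking labels during distillation. As a hypothetical illustration (the masking rule below is assumed, not taken from the paper), a client could exclude its locally dominant labels from the distillation term, so that what gets preserved is the global teacher's knowledge of the labels the client rarely sees:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def masked_distillation_loss(student_logits, teacher_logits, keep_mask, tau=2.0):
    """Hypothetical label-masking distillation loss: a KL term between
    teacher and student distributions computed only over the classes
    selected by `keep_mask` (e.g., the client's locally rare labels)."""
    s = softmax(student_logits[:, keep_mask] / tau)
    t = softmax(teacher_logits[:, keep_mask] / tau)
    return float(np.mean(np.sum(t * (np.log(t) - np.log(s)), axis=1)))

# Toy usage: 4 classes; the client masks out its majority classes {0, 1}.
rng = np.random.default_rng(1)
student, teacher = rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
keep = np.array([False, False, True, True])
print(masked_distillation_loss(student, teacher, keep))
```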
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or data properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties from the aggregated model updates alone.
arXiv Detail & Related papers (2023-03-07T14:11:01Z) - FedIL: Federated Incremental Learning from Decentralized Unlabeled Data
with Convergence Analysis [23.70951896315126]
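To make the preceding claim concrete, here is a self-contained toy of the attack idea: a plain-NumPy logistic-regression probe trained to predict a binary client property from flattened aggregated updates. The data generation and probe are entirely synthetic illustrations, not the paper's experimental setup.

```python
import numpy as np

def fit_linear_probe(updates, labels, steps=2000, lr=0.1):
    """Logistic-regression probe (plain NumPy) mapping flattened model
    updates to a binary client property, illustrating the attack idea."""
    X = np.asarray(updates, dtype=float)   # (n_rounds, n_params)
    y = np.asarray(labels, dtype=float)    # (n_rounds,) in {0, 1}
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability
        g = p - y                               # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Synthetic demo: aggregated updates whose mean shifts slightly in rounds
# where a client with the target property participates.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 50)) + 0.5 * y[:, None]
w, b = fit_linear_probe(X[:150], y[:150])
acc = ((1 / (1 + np.exp(-(X[150:] @ w + b))) > 0.5) == y[150:]).mean()
print(f"held-out probe accuracy: {acc:.2f}")
```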
- FedIL: Federated Incremental Learning from Decentralized Unlabeled Data with Convergence Analysis [23.70951896315126]
This work considers a setting where the server holds a small labeled dataset and aims to exploit the unlabeled data on multiple clients for semi-supervised learning.
We propose a new framework with a generalized model, Federated Incremental Learning (FedIL), to address how to utilize the labeled data on the server and the unlabeled data on the clients separately.
arXiv Detail & Related papers (2023-02-23T07:12:12Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
- Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
Supervised deep learning depends on massive amounts of accurately annotated examples.
A typical alternative is learning from multiple noisy annotators.
This paper proposes a data-efficient approach called Trustable Co-label Learning (TCL).
arXiv Detail & Related papers (2022-03-08T16:57:00Z) - Federated Semi-Supervised Learning with Inter-Client Consistency &
Disjoint Learning [78.88007892742438]
- Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle these problems, which we refer to as Federated Matching (FedMatch).
arXiv Detail & Related papers (2020-06-22T09:43:41Z) - Federated Learning with Only Positive Labels [71.63836379169315]
- Federated Learning with Only Positive Labels [71.63836379169315]
We propose a generic framework for training with only positive labels, namely Federated Averaging with Spreadout (FedAwS).
We show, both theoretically and empirically, that FedAwS can almost match the performance of conventional learning where users have access to negative labels.
arXiv Detail & Related papers (2020-04-21T23:35:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.