FedIL: Federated Incremental Learning from Decentralized Unlabeled Data with Convergence Analysis
- URL: http://arxiv.org/abs/2302.11823v1
- Date: Thu, 23 Feb 2023 07:12:12 GMT
- Title: FedIL: Federated Incremental Learning from Decentralized Unlabeled Data with Convergence Analysis
- Authors: Nan Yang, Dong Yuan, Charles Z Liu, Yongkun Deng and Wei Bao
- Abstract summary: This work considers a server holding a small labeled dataset and aims to use the unlabeled data on multiple clients for semi-supervised learning.
We propose a new framework with a generalized model, Federated Incremental Learning (FedIL), to address the problem of how to utilize labeled data on the server and unlabeled data on clients separately.
- Score: 23.70951896315126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing federated learning methods assume that clients have fully
labeled data to train on, while in reality it is hard for clients to obtain
task-specific labels due to users' privacy concerns, high labeling costs, or
lack of expertise. This work considers a server holding a small labeled dataset
and aims to use the unlabeled data on multiple clients for semi-supervised
learning. We propose a new framework with a generalized model, Federated
Incremental Learning (FedIL), to address the problem of how to utilize labeled
data on the server and unlabeled data on clients separately in the Federated
Learning (FL) setting. FedIL uses Iterative Similarity Fusion to enforce
server-client consistency on the predictions for unlabeled data and uses
incremental confidence to build a credible pseudo-label set in each client.
We show that FedIL accelerates model convergence through Cosine Similarity with
normalization, as proved via the Banach Fixed Point Theorem. The code is
available at https://anonymous.4open.science/r/fedil.
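As a concrete illustration of the two mechanisms named in the abstract, here is a minimal PyTorch sketch of incremental-confidence pseudo-label selection combined with a cosine-similarity consistency check between client and server predictions. This is our reading of the abstract, not the paper's implementation; the function name, thresholds, and growth schedule are all hypothetical.

```python
import torch
import torch.nn.functional as F

def select_pseudo_labels(client_logits, server_logits,
                         base_threshold, round_idx, growth=0.02):
    """Hypothetical sketch of FedIL-style pseudo-label selection.

    Keeps an unlabeled sample only if (a) the client's prediction is
    confident enough, with a threshold that grows over communication
    rounds ("incremental confidence"), and (b) the client's and
    server's predicted distributions agree under cosine similarity
    (server-client consistency).
    """
    client_probs = F.softmax(client_logits, dim=1)
    server_probs = F.softmax(server_logits, dim=1)

    confidence, pseudo_labels = client_probs.max(dim=1)

    # Incremental confidence: raise the bar as training progresses.
    threshold = min(base_threshold + growth * round_idx, 0.99)

    # Server-client consistency via cosine similarity of the
    # normalized predicted distributions.
    agreement = F.cosine_similarity(client_probs, server_probs, dim=1)

    mask = (confidence >= threshold) & (agreement >= 0.9)
    return pseudo_labels[mask], mask

# Toy usage with random logits for 8 samples and 5 classes.
client_logits = torch.randn(8, 5)
server_logits = torch.randn(8, 5)
labels, mask = select_pseudo_labels(client_logits, server_logits,
                                    base_threshold=0.7, round_idx=3)
print(f"kept {mask.sum().item()} of {mask.numel()} unlabeled samples")
```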
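The convergence claim leans on the Banach Fixed Point Theorem. As a reminder of the proof pattern (an outline of the standard theorem, not the paper's exact statement), it suffices to show that the global update map is a contraction:

```latex
% Outline of the standard Banach fixed-point argument
% (illustrative; not the paper's exact statement).
Let $T$ denote the global update map, so that $w_{t+1} = T(w_t)$.
If $T$ is a contraction on a complete space, i.e.
\[
  \lVert T(w) - T(w') \rVert \le q \,\lVert w - w' \rVert
  \quad \text{for some } 0 \le q < 1,
\]
then $T$ has a unique fixed point $w^\star$ and the iterates converge
geometrically:
\[
  \lVert w_t - w^\star \rVert \le \frac{q^t}{1-q}\,\lVert w_1 - w_0 \rVert .
\]
```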
Related papers
- Federated Learning with Label-Masking Distillation [33.80340338038264]
Federated learning provides a privacy-preserving way to collaboratively train models on data distributed over multiple local clients.
Because user behavior differs across clients, label distributions vary significantly between clients.
We propose a label-masking distillation approach termed FedLMD to facilitate federated learning by perceiving the label distribution of each client.
arXiv Detail & Related papers (2024-09-20T00:46:04Z)
- Federated Learning with Only Positive Labels by Exploring Label Correlations [78.59613150221597]
Federated learning aims to collaboratively learn a model by using the data from multiple users under privacy constraints.
In this paper, we study the multi-label classification problem under the federated learning setting.
We propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC).
arXiv Detail & Related papers (2024-04-24T02:22:50Z)
- FedAnchor: Enhancing Federated Semi-Supervised Learning with Label Contrastive Loss for Unlabeled Clients [19.3885479917635]
Federated learning (FL) is a distributed learning paradigm that facilitates collaborative training of a shared global model across devices.
We propose FedAnchor, an innovative FSSL method that introduces a unique double-head structure, called anchor head, paired with the classification head trained exclusively on labeled anchor data on the server.
Our approach mitigates the confirmation bias and overfitting issues associated with pseudo-labeling techniques that rely on high-confidence model predictions (a hypothetical sketch of such a double-head design follows this entry).
arXiv Detail & Related papers (2024-02-15T18:48:21Z)
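The "double-head" wording above suggests a shared backbone feeding a classification head plus an embedding ("anchor") head trained contrastively against the server's labeled anchor data. A minimal PyTorch sketch of such a structure, with all module names and sizes ours rather than the paper's:

```python
import torch
import torch.nn as nn

class DoubleHeadModel(nn.Module):
    """Hypothetical sketch of a FedAnchor-style double-head design:
    a shared backbone feeding (a) a classification head and (b) an
    "anchor" head whose embeddings would be trained with a contrastive
    loss against labeled anchor data held by the server."""

    def __init__(self, in_dim=32, num_classes=5, embed_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.cls_head = nn.Linear(64, num_classes)   # classification head
        self.anchor_head = nn.Linear(64, embed_dim)  # anchor (embedding) head

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h), self.anchor_head(h)

model = DoubleHeadModel()
logits, embeddings = model(torch.randn(4, 32))
print(logits.shape, embeddings.shape)  # torch.Size([4, 5]) torch.Size([4, 16])
```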
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation); a sketch of this exchange follows this entry.
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
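To make the SL protocol above concrete, here is a minimal single-step PyTorch sketch of the smashed-data exchange: the client forwards up to the cut layer, the server completes the forward and backward passes, and the gradient at the cut is returned so the client can finish backpropagation. The network shapes and cut position are arbitrary choices of ours.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of one split-learning (SL) training step.
client_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # layers before the cut
server_net = nn.Sequential(nn.Linear(64, 5))              # layers after the cut

x, y = torch.randn(4, 32), torch.randint(0, 5, (4,))

# Client side: forward to the cut layer; detach to mimic transmission
# of the "smashed" activations over the network.
smashed = client_net(x)
smashed_sent = smashed.detach().requires_grad_(True)

# Server side: finish the forward pass, compute the loss, and
# backpropagate down to the cut layer.
loss = nn.functional.cross_entropy(server_net(smashed_sent), y)
loss.backward()

# Server returns the gradient at the cut; the client completes
# backpropagation through its local layers.
smashed.backward(smashed_sent.grad)
print("client grad norm:", client_net[0].weight.grad.norm().item())
```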
- Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients [98.22390453672499]
Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data.
We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients.
arXiv Detail & Related papers (2022-04-07T09:12:00Z)
- Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
Supervised deep learning depends on massive amounts of accurately annotated examples.
A typical alternative is learning from multiple noisy annotators.
This paper proposes a data-efficient approach called Trustable Co-label Learning (TCL).
arXiv Detail & Related papers (2022-03-08T16:57:00Z)
- SemiFed: Semi-supervised Federated Learning with Consistency and Pseudo-Labeling [14.737638416823772]
Federated learning enables multiple clients, such as mobile phones and organizations, to collaboratively learn a shared model for prediction.
In this work, we focus on a new scenario for cross-silo federated learning, where data samples of each client are partially labeled.
We propose a new framework dubbed SemiFed that unifies two dominant approaches for semi-supervised learning: consistency regularization and pseudo-labeling.
arXiv Detail & Related papers (2021-08-21T01:14:27Z)
- Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
The proposed method, FedCA, is composed of two key modules: a dictionary module, which aggregates sample representations from each client and shares them with all clients to keep the representation space consistent, and an alignment module, which aligns each client's representations with a base model trained on public data.
arXiv Detail & Related papers (2020-10-18T13:28:30Z)
- Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle these problems, which we refer to as Federated Matching (FedMatch).
arXiv Detail & Related papers (2020-06-22T09:43:41Z)