FedSEAL: Semi-Supervised Federated Learning with Self-Ensemble Learning and Negative Learning
- URL: http://arxiv.org/abs/2110.07829v1
- Date: Fri, 15 Oct 2021 03:03:23 GMT
- Title: FedSEAL: Semi-Supervised Federated Learning with Self-Ensemble Learning and Negative Learning
- Authors: Jieming Bian, Zhu Fu, Jie Xu
- Abstract summary: Federated learning (FL) is a popular decentralized and privacy-preserving machine learning framework.
In this paper, we propose a new FL algorithm, called FedSEAL, to solve the Semi-Supervised Federated Learning (SSFL) problem in which the server holds a small amount of labeled data while clients hold only unlabeled data.
Our algorithm utilizes self-ensemble learning and complementary negative learning to enhance both the accuracy and the efficiency of clients' unsupervised learning on unlabeled data.
- Score: 7.771967424619346
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL), a popular decentralized and privacy-preserving
machine learning framework, has received extensive research attention in
recent years. The majority of existing works focus on supervised learning (SL)
problems where it is assumed that clients carry labeled datasets while the
server has no data. However, in realistic scenarios, clients are often unable
to label their data due to a lack of expertise or motivation, while the
server may host a small amount of labeled data. How to reasonably utilize the
server labeled data and the clients' unlabeled data is thus of paramount
practical importance. In this paper, we propose a new FL algorithm, called
FedSEAL, to solve this Semi-Supervised Federated Learning (SSFL) problem. Our
algorithm utilizes self-ensemble learning and complementary negative learning
to enhance both the accuracy and the efficiency of clients' unsupervised
learning on unlabeled data, and orchestrates the model training on both the
server side and the clients' side. Our experimental results on Fashion-MNIST
and CIFAR10 datasets in the SSFL setting validate the effectiveness of our
method, which outperforms the state-of-the-art SSFL methods by a large margin.
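As a hedged illustration of the two client-side components named in the abstract, the sketch below combines an exponential-moving-average prediction ensemble with a complementary negative-learning loss. The function names, the EMA form, and the thresholds are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def self_ensemble_pseudo_labels(ensembled_probs, logits, momentum=0.9):
    """Illustrative self-ensembling: smooth per-sample class probabilities
    across training rounds with an exponential moving average (assumed form)."""
    probs = F.softmax(logits.detach(), dim=1)
    return momentum * ensembled_probs + (1.0 - momentum) * probs

def negative_learning_loss(logits, ensembled_probs, neg_threshold=0.05):
    """Illustrative complementary negative learning: for classes the ensemble
    deems very unlikely (prob < neg_threshold), push the model's predicted
    probability of those classes toward zero."""
    probs = F.softmax(logits, dim=1)
    complementary = (ensembled_probs < neg_threshold).float()  # "not this class" mask
    # Minimizing -log(1 - p_k) drives p_k toward 0 for each excluded class k.
    loss = -(complementary * torch.log(1.0 - probs + 1e-8)).sum(dim=1)
    return loss.mean()
```

High-confidence classes from the ensembled predictions would additionally be trained with ordinary cross-entropy alongside this negative term.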
Related papers
- (FL)$^2$: Overcoming Few Labels in Federated Semi-Supervised Learning [4.803231218533992]
Federated Learning (FL) is a distributed machine learning framework that trains accurate global models while preserving clients' privacy-sensitive data.
Most FL approaches assume that clients possess labeled data, which is often not the case in practice.
We propose $(FL)^2$, a robust training method for unlabeled clients using sharpness-aware consistency regularization.
arXiv Detail & Related papers (2024-10-30T17:15:02Z)
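A minimal sketch of what sharpness-aware consistency regularization could look like on an unlabeled client, pairing a FixMatch-style weak/strong consistency loss with a SAM-style weight perturbation; this pairing and all hyperparameters are assumptions based only on the summary above.

```python
import torch
import torch.nn.functional as F

def sharpness_aware_consistency_step(model, optimizer, weak_x, strong_x,
                                     rho=0.05, conf_threshold=0.95):
    """One illustrative step: the consistency loss is evaluated at a
    gradient-ascent-perturbed weight point (SAM), then a descent step follows."""
    def consistency_loss():
        with torch.no_grad():
            pseudo = F.softmax(model(weak_x), dim=1)
            conf, target = pseudo.max(dim=1)
            mask = (conf >= conf_threshold).float()  # keep confident pseudo-labels
        loss = F.cross_entropy(model(strong_x), target, reduction="none")
        return (mask * loss).mean()

    # First pass: gradient at the current weights.
    consistency_loss().backward()
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                               for p in model.parameters() if p.grad is not None))
    eps = []
    with torch.no_grad():  # perturb weights toward higher loss (sharpness probe)
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()
    # Second pass: gradient at the perturbed point, then undo the perturbation.
    consistency_loss().backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```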
- TPFL: Tsetlin-Personalized Federated Learning with Confidence-Based Clustering [0.0]
We propose a novel approach called Tsetlin-Personalized Federated Learning (TPFL).
Models are grouped into clusters based on their confidence towards a specific class.
Clients share only what they are confident about, eliminating erroneous weight aggregation.
Results demonstrate that TPFL outperforms baseline methods, reaching 98.94% accuracy on MNIST, 98.52% on FashionMNIST, and 91.16% on FEMNIST.
arXiv Detail & Related papers (2024-09-16T15:27:35Z)
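TPFL itself builds on Tsetlin machines; as a neutral illustration of the confidence-based clustering idea alone, the sketch below groups clients by per-class confidence profiles under cosine similarity. The profile metric, the greedy rule, and the threshold are all assumptions.

```python
import numpy as np

def class_confidence_profile(probs, labels_pred, num_classes):
    """Mean top-1 confidence per predicted class for one client (assumed metric)."""
    profile = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels_pred == c
        profile[c] = probs[mask].max(axis=1).mean() if mask.any() else 0.0
    return profile

def cluster_by_confidence(profiles, sim_threshold=0.9):
    """Greedy clustering: a client joins the first cluster whose centroid
    profile is cosine-similar above the threshold, else starts a new one."""
    clusters, centroids = [], []
    for i, p in enumerate(profiles):
        for k, c in enumerate(centroids):
            cos = p @ c / (np.linalg.norm(p) * np.linalg.norm(c) + 1e-12)
            if cos >= sim_threshold:
                clusters[k].append(i)
                centroids[k] = np.mean([profiles[j] for j in clusters[k]], axis=0)
                break
        else:
            clusters.append([i])
            centroids.append(p.copy())
    return clusters
```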
- SemiSFL: Split Federated Learning on Unlabeled and Non-IID Data [34.49090830845118]
Federated Learning (FL) has emerged to allow multiple clients to collaboratively train machine learning models on their private data at the network edge.
We propose a novel Semi-supervised SFL system, termed SemiSFL, which incorporates clustering regularization to perform SFL with unlabeled and non-IID client data.
Our system provides a 3.8x speed-up in training time, reduces the communication cost by about 70.3% while reaching the target accuracy, and achieves up to 5.8% improvement in accuracy under non-IID scenarios.
arXiv Detail & Related papers (2023-07-29T02:35:37Z)
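The summary names clustering regularization but not its form; a plausible sketch, under the assumption that client-side features (the "smashed data" in split learning) are regularized toward server-maintained cluster centroids:

```python
import torch

def clustering_regularizer(features, centroids):
    """Illustrative regularizer: squared distance from each client-side
    feature vector to its nearest global cluster centroid. How centroids
    are maintained on the server is assumed, not specified by the summary."""
    # features: (batch, d); centroids: (K, d)
    d2 = torch.cdist(features, centroids) ** 2   # (batch, K) squared distances
    return d2.min(dim=1).values.mean()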
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z)
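As a hedged reading of "knowledge-specialized active sampling", the sketch below scores unlabeled samples by the divergence between the local (specialized) and global models and selects the most divergent ones for annotation; the exact KSAS score in the paper may differ.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ksas_style_selection(local_model, global_model, unlabeled_x, budget):
    """Pick the `budget` samples where local and global predictions disagree
    most (symmetric KL, an assumed discrepancy measure)."""
    p = F.softmax(local_model(unlabeled_x), dim=1)
    q = F.softmax(global_model(unlabeled_x), dim=1)
    kl_pq = (p * (p.clamp_min(1e-8).log() - q.clamp_min(1e-8).log())).sum(dim=1)
    kl_qp = (q * (q.clamp_min(1e-8).log() - p.clamp_min(1e-8).log())).sum(dim=1)
    scores = kl_pq + kl_qp
    return scores.topk(budget).indices  # indices to send for labeling
```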
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
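A compact sketch of online knowledge distillation with a contrastive loss: embeddings of the same sample from two collaborators form a positive pair, and all other in-batch pairs serve as negatives (an InfoNCE form; the paper's exact objective may differ).

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(z_client, z_peer, temperature=0.1):
    """InfoNCE over a batch: row i of z_client should match row i of z_peer."""
    z1 = F.normalize(z_client, dim=1)
    z2 = F.normalize(z_peer, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```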
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
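The summary states only that FedReg alleviates forgetting of global knowledge during local training. One generic way to express that constraint is to penalize divergence from the frozen global model's predictions, sketched below; this illustrates the goal, not necessarily FedReg's actual mechanism.

```python
import torch.nn.functional as F

def forgetting_penalty(local_logits, global_logits, temperature=2.0):
    """Illustrative anti-forgetting term: KL between the local model's
    predictions and the frozen global model's predictions on the same batch."""
    p_global = F.softmax(global_logits.detach() / temperature, dim=1)
    log_p_local = F.log_softmax(local_logits / temperature, dim=1)
    return F.kl_div(log_p_local, p_global, reduction="batchmean") * temperature ** 2
```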
- SemiFL: Communication Efficient Semi-Supervised Federated Learning with Unlabeled Clients [34.24028216079336]
We propose a new Federated Learning framework referred to as SemiFL.
In SemiFL, clients have completely unlabeled data, while the server has a small amount of labeled data.
We demonstrate various efficient strategies of SemiFL that enhance learning performance.
arXiv Detail & Related papers (2021-06-02T19:22:26Z)
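A schematic of the labels-at-server setup that both SemiFL and FedSEAL target: clients self-train on unlabeled data, and the server aggregates their models and fine-tunes on its small labeled set. The function names and the alternation order are assumptions.

```python
def semi_supervised_fl_round(global_model, server_labeled, clients_unlabeled,
                             train_with_pseudo_labels, average, train_supervised):
    """One illustrative SSFL round (labeled data at the server only)."""
    # Clients: local self-training on unlabeled data, starting from the global model.
    client_models = [train_with_pseudo_labels(global_model, data)
                     for data in clients_unlabeled]
    # Server: aggregate client updates, then fine-tune on its labeled data.
    aggregated = average(client_models)
    return train_supervised(aggregated, server_labeled)
```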
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle these problems, which we refer to as Federated Matching (FedMatch).
arXiv Detail & Related papers (2020-06-22T09:43:41Z)
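A hedged sketch of the inter-client consistency idea: a client's predictions on unlabeled data are regularized toward the frozen predictions of helper models received from other clients. The KL form and the averaging over helpers are assumptions based only on the summary.

```python
import torch.nn.functional as F

def inter_client_consistency_loss(logits, helper_logits_list):
    """Penalize divergence between this client's predictions and each helper
    client model's predictions on the same unlabeled batch."""
    log_p = F.log_softmax(logits, dim=1)
    loss = 0.0
    for h in helper_logits_list:
        q = F.softmax(h.detach(), dim=1)   # helper models stay frozen
        loss = loss + F.kl_div(log_p, q, reduction="batchmean")
    return loss / max(len(helper_logits_list), 1)
```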
- Leveraging Semi-Supervised Learning for Fairness using Neural Networks [49.604038072384995]
There has been a growing concern about the fairness of decision-making systems based on machine learning.
In this paper, we propose a semi-supervised algorithm using neural networks benefiting from unlabeled data.
The proposed model, called SSFair, exploits the information in the unlabeled data to mitigate the bias in the training data.
arXiv Detail & Related papers (2019-12-31T09:11:26Z)