Federated Learning without Full Labels: A Survey
- URL: http://arxiv.org/abs/2303.14453v1
- Date: Sat, 25 Mar 2023 12:13:31 GMT
- Title: Federated Learning without Full Labels: A Survey
- Authors: Yilun Jin, Yang Liu, Kai Chen, Qiang Yang
- Abstract summary: We present a survey of methods that combine federated learning with semi-supervised learning, self-supervised learning, and transfer learning methods.
We also summarize the datasets used to evaluate FL methods without full labels.
- Score: 23.49131075675469
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data privacy has become an increasingly important concern in real-world big
data applications such as machine learning. To address the problem, federated
learning (FL) has been a promising solution to building effective machine
learning models from decentralized and private data. Existing federated
learning algorithms mainly tackle the supervised learning problem, where data
are assumed to be fully labeled. However, in practice, fully labeled data is
often hard to obtain, as the participants may not have sufficient domain
expertise, or they lack the motivation and tools to label data. Therefore, the
problem of federated learning without full labels is important in real-world FL
applications. In this paper, we discuss how the problem can be solved with
machine learning techniques that leverage unlabeled data. We present a survey
of methods that combine FL with semi-supervised learning, self-supervised
learning, and transfer learning methods. We also summarize the datasets used to
evaluate FL methods without full labels. Finally, we highlight future
directions in the context of FL without full labels.
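The survey's core setting, federated learning where some clients hold mostly unlabeled data, can be sketched as federated averaging combined with confidence-thresholded pseudo-labeling. The following is a minimal, self-contained illustration on a toy 1-D logistic model; the function names, threshold, and data are illustrative assumptions, not taken from any surveyed paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    # linear model with bias: p(y=1 | x)
    return sigmoid(w[0] + w[1] * x)

def local_update(w, labeled, unlabeled, lr=0.5, conf=0.9, epochs=20):
    """One client's update: train on labeled data plus confident pseudo-labels."""
    w = list(w)
    for _ in range(epochs):
        batch = list(labeled)
        for x in unlabeled:
            p = predict(w, x)
            if p > conf or p < 1 - conf:       # keep only confident predictions
                batch.append((x, 1 if p > 0.5 else 0))
        for x, y in batch:
            g = predict(w, x) - y              # gradient of the log-loss
            w[0] -= lr * g
            w[1] -= lr * g * x
    return w

def fed_avg(weights):
    # server step: plain average of client models
    n = len(weights)
    return [sum(w[i] for w in weights) / n for i in range(len(weights[0]))]

# two clients: one fully labeled, one with mostly unlabeled data
c1_labeled = [(-2.0, 0), (-1.5, 0), (1.5, 1), (2.0, 1)]
c2_labeled = [(-1.8, 0), (1.8, 1)]
c2_unlabeled = [-2.2, -1.7, 1.6, 2.3]

w = [0.0, 0.0]                                 # global model
for _ in range(10):                            # communication rounds
    w1 = local_update(w, c1_labeled, [])
    w2 = local_update(w, c2_labeled, c2_unlabeled)
    w = fed_avg([w1, w2])

print(predict(w, 2.0) > 0.5, predict(w, -2.0) < 0.5)  # → True True
```

Early rounds generate no pseudo-labels (the model is too uncertain to pass the confidence threshold); as the global model improves, the unlabeled client contributes more training signal.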
Related papers
- Contrastive Federated Learning with Tabular Data Silos [9.516897428263146]
We propose Contrastive Federated Learning with Data Silos (CFL) as a solution for learning from data silos.
CFL outperforms current methods in addressing these challenges and improves accuracy.
We present positive results that showcase the advantages of our contrastive federated learning approach in complex client environments.
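The contrastive objective underlying approaches like the one above can be sketched with a generic InfoNCE loss: pull an anchor representation toward its positive view and away from negatives. This is a hedged sketch of the general technique, not the CFL paper's actual loss; all names and values are illustrative:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE: -log softmax of the positive pair among all candidates."""
    logits = [cosine(anchor, positive) / tau]
    logits += [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)                                # stable log-sum-exp
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)

anchor = [1.0, 0.0]
pos = [0.9, 0.1]
negs = [[0.0, 1.0], [-1.0, 0.0]]
loss_good = info_nce(anchor, pos, negs)            # aligned positive: small loss
loss_bad = info_nce(anchor, [0.0, 1.0], [pos] + negs[1:])
print(loss_good < loss_bad)  # → True
```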
arXiv Detail & Related papers (2024-09-10T00:24:59Z)
- TOFU: A Task of Fictitious Unlearning for LLMs [99.92305790945507]
Large language models trained on massive corpora of data from the web can reproduce sensitive or private data, raising both legal and ethical concerns.
Unlearning, or tuning models to forget information present in their training data, provides us with a way to protect private data after training.
We present TOFU, a benchmark aimed at helping deepen our understanding of unlearning.
arXiv Detail & Related papers (2024-01-11T18:57:12Z)
- FlexSSL: A Generic and Efficient Framework for Semi-Supervised Learning [19.774959310191623]
We develop a generic and efficient learning framework called FlexSSL.
We show that FlexSSL can consistently enhance the performance of semi-supervised learning algorithms.
arXiv Detail & Related papers (2023-12-28T08:31:56Z)
- FlatMatch: Bridging Labeled Data and Unlabeled Data with Cross-Sharpness for Semi-Supervised Learning [73.13448439554497]
Semi-Supervised Learning (SSL) has been an effective way to leverage abundant unlabeled data with extremely scarce labeled data.
Most SSL methods are commonly based on instance-wise consistency between different data transformations.
We propose FlatMatch which minimizes a cross-sharpness measure to ensure consistent learning performance between the two datasets.
arXiv Detail & Related papers (2023-10-25T06:57:59Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Understanding the World Through Action [91.3755431537592]
I will argue that a general, principled, and powerful framework for utilizing unlabeled data can be derived from reinforcement learning.
I will discuss how such a procedure is more closely aligned with potential downstream tasks.
arXiv Detail & Related papers (2021-10-24T22:33:52Z)
- FedSEAL: Semi-Supervised Federated Learning with Self-Ensemble Learning and Negative Learning [7.771967424619346]
Federated learning (FL) is a popular decentralized and privacy-preserving machine learning framework.
In this paper, we propose a new FL algorithm, called FedSEAL, to solve the Semi-Supervised Federated Learning (SSFL) problem.
Our algorithm utilizes self-ensemble learning and complementary negative learning to enhance both the accuracy and the efficiency of clients' unsupervised learning on unlabeled data.
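The negative-learning component mentioned above rests on complementary labels: training signal of the form "this sample is not class c", which is cheaper and more reliable than a possibly wrong pseudo-label. A minimal sketch of such a loss, with illustrative names and values rather than FedSEAL's exact formulation:

```python
import math

def softmax(logits):
    m = max(logits)
    e = [math.exp(l - m) for l in logits]
    s = sum(e)
    return [x / s for x in e]

def negative_learning_loss(logits, not_class):
    """Complementary-label loss: push the probability of the class the
    sample is known NOT to belong to toward zero."""
    p = softmax(logits)[not_class]
    return -math.log(max(1.0 - p, 1e-12))

logits = [2.0, 0.5, -1.0]
before = negative_learning_loss(logits, 0)
# lowering the logit of the excluded class lowers the loss
after = negative_learning_loss([0.0, 0.5, -1.0], 0)
print(after < before)  # → True
```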
arXiv Detail & Related papers (2021-10-15T03:03:23Z)
- Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle the problems, which we refer to as Federated Matching (FedMatch).
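Inter-client consistency can be sketched as a divergence penalty between a client's prediction on an unlabeled example and the averaged prediction of helper models received from other clients. This is a generic sketch of the idea, assuming a KL-based penalty; the names are hypothetical, not FedMatch's actual objective:

```python
import math

def softmax(logits):
    m = max(logits)
    e = [math.exp(l - m) for l in logits]
    s = sum(e)
    return [v / s for v in e]

def kl(p, q):
    # KL divergence between two discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def inter_client_consistency(local_logits, helper_logits_list):
    """Penalize disagreement between the local model and helper models
    from other clients on the same unlabeled example."""
    p = softmax(local_logits)
    helpers = [softmax(h) for h in helper_logits_list]
    avg = [sum(h[i] for h in helpers) / len(helpers) for i in range(len(p))]
    return kl(avg, p)

agree = inter_client_consistency([2.0, 0.1], [[1.8, 0.2], [2.1, 0.0]])
disagree = inter_client_consistency([0.1, 2.0], [[1.8, 0.2], [2.1, 0.0]])
print(agree < disagree)  # → True
```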
arXiv Detail & Related papers (2020-06-22T09:43:41Z)
- Towards Utilizing Unlabeled Data in Federated Learning: A Survey and Prospective [18.40606952418594]
Federated Learning (FL) proposed in recent years has received significant attention from researchers.
In most applications of FL, such as keyboard prediction, labeling data requires virtually no additional effort.
We identify the need to exploit unlabeled data in FL, and survey possible research fields that can contribute to the goal.
arXiv Detail & Related papers (2020-02-26T14:56:52Z)
- Exploiting Unlabeled Data in Smart Cities using Federated Learning [2.362412515574206]
Federated learning is an effective technique to avoid privacy infringement as well as to increase data utilization.
We propose a semi-supervised federated learning method called FedSem that exploits unlabeled data.
We show that FedSem can improve accuracy by up to 8% by utilizing unlabeled data in the learning process.
arXiv Detail & Related papers (2020-01-10T13:25:34Z)
- Leveraging Semi-Supervised Learning for Fairness using Neural Networks [49.604038072384995]
There has been a growing concern about the fairness of decision-making systems based on machine learning.
In this paper, we propose a semi-supervised algorithm using neural networks benefiting from unlabeled data.
The proposed model, called SSFair, exploits the information in the unlabeled data to mitigate the bias in the training data.
arXiv Detail & Related papers (2019-12-31T09:11:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.