FlexSSL : A Generic and Efficient Framework for Semi-Supervised Learning
- URL: http://arxiv.org/abs/2312.16892v1
- Date: Thu, 28 Dec 2023 08:31:56 GMT
- Title: FlexSSL : A Generic and Efficient Framework for Semi-Supervised Learning
- Authors: Huiling Qin, Xianyuan Zhan, Yuanxun Li, Yu Zheng
- Abstract summary: We develop a generic and efficient learning framework called FlexSSL.
We show that FlexSSL can consistently enhance the performance of semi-supervised learning algorithms.
- Score: 19.774959310191623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semi-supervised learning holds great promise for many real-world
applications, due to its ability to leverage both unlabeled and expensive
labeled data. However, most semi-supervised learning algorithms still heavily
rely on the limited labeled data to infer and utilize the hidden information
from unlabeled data. We note that any semi-supervised learning task under the
self-training paradigm also hides an auxiliary task of discriminating label
observability. Jointly solving these two tasks allows full utilization of
information from both labeled and unlabeled data, thus alleviating the problem
of over-reliance on labeled data. This naturally leads to a new generic and
efficient learning framework without the reliance on any domain-specific
information, which we call FlexSSL. The key idea of FlexSSL is to construct a
semi-cooperative "game", which forges cooperation between a main
self-interested semi-supervised learning task and a companion task that infers
label observability to facilitate main task training. We show via theoretical
derivation its connection to loss re-weighting on noisy labels. Through
evaluations on a diverse range of tasks, we demonstrate that FlexSSL can
consistently enhance the performance of semi-supervised learning algorithms.
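The abstract's core idea can be illustrated with a minimal sketch. Everything below is our simplification, not the paper's implementation: the linear main model, the logistic companion, and the choice of residual as the companion's input feature are all assumptions for illustration. A main self-training task fits pooled real and pseudo labels, while a companion task infers label observability; the companion's belief that a target is a genuinely observed label re-weights the main task's per-sample loss, echoing the paper's connection to loss re-weighting on noisy labels.

```python
import numpy as np

# Toy 1-D regression: only a fraction of points carry observed labels.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1.0, 1.0, size=(n, 1))
y_true = 2.0 * x[:, 0] + 0.1 * rng.normal(size=n)
observed = rng.random(n) < 0.3            # label-observability mask

w_main = np.zeros(1)                      # main task: linear regressor
a_disc, b_disc = 0.0, 0.0                 # companion task: logistic scorer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(500):
    pred = x @ w_main
    # Self-training: unlabeled points get the model's own prediction.
    target = np.where(observed, y_true, pred)
    resid = np.abs(pred - target)
    # Companion task: classify label observability from the residual
    # (an illustrative feature choice, not the paper's).
    p_obs = sigmoid(a_disc * resid + b_disc)
    g = p_obs - observed.astype(float)    # binary cross-entropy gradient
    a_disc -= lr * np.mean(g * resid)
    b_disc -= lr * np.mean(g)
    # Main task: squared loss re-weighted by the companion's belief.
    err = pred - target
    w_main -= lr * np.mean(p_obs * err * x[:, 0])

print(float(w_main[0]))                   # learned slope (true value: 2.0)
```

Because pseudo-labeled points have zero residual against their own targets, the main task's gradient is driven by the labeled points, and the companion's confidence acts as a soft per-sample weight rather than a hard mask.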
Related papers
- OwMatch: Conditional Self-Labeling with Consistency for Open-World Semi-Supervised Learning [4.462726364160216]
Semi-supervised learning (SSL) offers a robust framework for harnessing the potential of unannotated data.
The emergence of open-world SSL (OwSSL) introduces a more practical challenge, wherein unlabeled data may encompass samples from unseen classes.
We propose an effective framework called OwMatch, combining conditional self-labeling and open-world hierarchical thresholding.
arXiv Detail & Related papers (2024-11-04T06:07:43Z)
- Semi-Supervised One-Shot Imitation Learning
One-shot Imitation Learning aims to imbue AI agents with the ability to learn a new task from a single demonstration.
We introduce the semi-supervised OSIL problem setting, where the learning agent is presented with a large dataset of trajectories.
We develop an algorithm specifically applicable to this semi-supervised OSIL setting.
arXiv Detail & Related papers (2024-08-09T18:11:26Z)
- Active Self-Supervised Learning: A Few Low-Cost Relationships Are All You Need [34.013568381942775]
Self-Supervised Learning (SSL) has emerged as the solution of choice to learn transferable representations from unlabeled data.
In this work, we formalize and generalize this principle through Positive Active Learning (PAL) where an oracle queries semantic relationships between samples.
First, it unveils a theoretically grounded learning framework beyond SSL, based on similarity graphs, that can be extended to tackle supervised and semi-supervised learning depending on the employed oracle.
Second, it provides a consistent algorithm to embed a priori knowledge, e.g. some observed labels, into any SSL losses without any change in the training pipeline.
arXiv Detail & Related papers (2023-03-27T14:44:39Z)
- Federated Learning without Full Labels: A Survey [23.49131075675469]
We present a survey of methods that combine federated learning with semi-supervised learning, self-supervised learning, and transfer learning methods.
We also summarize the datasets used to evaluate FL methods without full labels.
arXiv Detail & Related papers (2023-03-25T12:13:31Z)
- Does Decentralized Learning with Non-IID Unlabeled Data Benefit from Self Supervision? [51.00034621304361]
We study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL).
We study the effectiveness of contrastive learning algorithms under decentralized learning settings.
arXiv Detail & Related papers (2022-10-20T01:32:41Z)
- Collaborative Intelligence Orchestration: Inconsistency-Based Fusion of Semi-Supervised Learning and Active Learning [60.26659373318915]
Active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, means to alleviate the data-hungry problem.
We propose an innovative inconsistency-based virtual adversarial algorithm to further investigate the potential superiority of combining SSL and AL.
Two real-world case studies visualize the practical industrial value of applying and deploying the proposed data sampling algorithm.
arXiv Detail & Related papers (2022-06-07T13:28:43Z)
- The Role of Global Labels in Few-Shot Classification and How to Infer Them [55.64429518100676]
Few-shot learning is a central problem in meta-learning, where learners must quickly adapt to new tasks.
We propose Meta Label Learning (MeLa), a novel algorithm that infers global labels and obtains robust few-shot models via standard classification.
arXiv Detail & Related papers (2021-08-09T14:07:46Z)
- Self-Tuning for Data-Efficient Deep Learning [75.34320911480008]
Self-Tuning is a novel approach to enable data-efficient deep learning.
It unifies the exploration of labeled and unlabeled data and the transfer of a pre-trained model.
It outperforms its SSL and TL counterparts on five tasks by sharp margins.
arXiv Detail & Related papers (2021-02-25T14:56:19Z)
- Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering [10.033658645311188]
We show that ignoring labels altogether for whole epochs intermittently during training can significantly improve performance in the small sample regime.
We demonstrate our method's efficacy in boosting several state-of-the-art SSL algorithms.
arXiv Detail & Related papers (2020-06-22T09:43:41Z)
- Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle these problems, which we refer to as Federated Matching (FedMatch).
arXiv Detail & Related papers (2020-06-22T09:43:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.