On the link between generative semi-supervised learning and generative
open-set recognition
- URL: http://arxiv.org/abs/2303.11702v4
- Date: Wed, 23 Aug 2023 14:18:30 GMT
- Title: On the link between generative semi-supervised learning and generative
open-set recognition
- Authors: Emile Reyn Engelbrecht, Johan du Preez
- Abstract summary: We investigate the relationship between semi-supervised learning (SSL) and open-set recognition (OSR) in the context of generative adversarial networks (GANs). Our results show that SSL-GANs achieve near-identical results to OSR-GANs, supporting the proposed SSL-OSR link.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study investigates the relationship between semi-supervised learning
(SSL, i.e., training on partially labelled datasets) and open-set recognition
(OSR, i.e., classification with simultaneous novelty detection) in the context
of generative adversarial networks (GANs). Although no previous study has
formally linked SSL and OSR, their respective methods share striking
similarities. Specifically, both SSL-GANs and OSR-GANs require their generators
to produce 'bad-looking' samples, which are used to regularise their classifier
networks. We hypothesise that the definitions of bad-looking samples in SSL and
OSR represent the same concept and realise the same goal. More formally,
bad-looking samples lie in the complementary space, which is the area between
and around the boundaries of the labelled categories within the classifier's
embedding space. By regularising a classifier with samples in the complementary
space, classifiers achieve improved generalisation for SSL and also generalise
the open space for OSR. To test this hypothesis, we compare a foundational
SSL-GAN with the state-of-the-art OSR-GAN under the same SSL-OSR experimental
conditions. Our results show that SSL-GANs achieve near-identical results to
OSR-GANs, supporting the SSL-OSR link. Subsequently, to further this new
research path, we compare several SSL-GANs across various SSL-OSR setups,
establishing the first benchmark results for this combined task. A combined
SSL-OSR framework would improve the practicality and cost-efficiency of
classifier training, and so further theoretical and application studies are
also discussed.
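The shared mechanism the abstract describes, training a classifier so that generated 'bad-looking' samples are pushed into an extra rejection class, is commonly realised as a (K+1)-class discriminator objective in the SSL-GAN literature (e.g. Salimans et al., 2016). The sketch below is an illustrative, hedged reconstruction of that general loss, not the specific objective used in this paper; all function and variable names are our own assumptions:

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over the last axis."""
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def ssl_gan_classifier_loss(logits_labelled, labels,
                            logits_unlabelled, logits_fake):
    """(K+1)-class classifier loss in the style of SSL-GANs.

    Classes 0..K-1 are the labelled categories; class K is the extra
    'generated / complementary-space' class used for regularisation
    (and, in the OSR reading, for rejecting unknowns).
    """
    k = logits_labelled.shape[1] - 1  # index of the extra class

    # Supervised term: cross-entropy on labelled samples over real classes.
    ls = log_softmax(logits_labelled)
    sup = -ls[np.arange(len(labels)), labels].mean()

    # Unlabelled term: real unlabelled samples should NOT fall in class K,
    # i.e. maximise the probability mass on the K real classes.
    lu = log_softmax(logits_unlabelled)
    p_real = 1.0 - np.exp(lu[:, k])
    unsup_real = -np.log(np.clip(p_real, 1e-12, 1.0)).mean()

    # Fake term: generator samples (the 'bad-looking' ones) should be
    # assigned to the complementary class K.
    lf = log_softmax(logits_fake)
    unsup_fake = -lf[:, k].mean()

    return sup + unsup_real + unsup_fake
```

Under this reading, the same extra-class head serves both roles at once: the SSL view uses it to tighten decision boundaries with unlabelled data, while the OSR view uses its probability as a novelty score at test time.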
Related papers
- Self-Supervised Anomaly Detection in the Wild: Favor Joint Embeddings Methods [12.277762115388187]
Self-Supervised Learning (SSL) offers a promising approach by learning robust representations from unlabeled data.
This paper provides a comprehensive evaluation of SSL methods for real-world anomaly detection, focusing on sewer infrastructure.
arXiv Detail & Related papers (2024-10-05T21:27:47Z) - On the Discriminability of Self-Supervised Representation Learning [38.598160031349686]
Self-supervised learning (SSL) has recently achieved significant success in downstream visual tasks.
A notable gap still exists between SSL and supervised learning (SL), especially in complex downstream tasks.
arXiv Detail & Related papers (2024-07-18T14:18:03Z) - Learning with Noisy Labels Using Collaborative Sample Selection and
Contrastive Semi-Supervised Learning [76.00798972439004]
Collaborative Sample Selection (CSS) removes noisy samples from the identified clean set.
We introduce a co-training mechanism with a contrastive loss in semi-supervised learning.
arXiv Detail & Related papers (2023-10-24T05:37:20Z) - Learning Adversarial Semantic Embeddings for Zero-Shot Recognition in
Open Worlds [25.132219723741024]
Zero-Shot Learning (ZSL) focuses on classifying samples of unseen classes with only their side semantic information presented during training.
"Zero-Shot Open-Set Recognition" (ZS-OSR) is required to accurately classify samples from the unseen classes while rejecting samples from the unknown classes during inference.
We introduce a novel approach specifically designed for ZS-OSR, in which our model learns to generate adversarial semantic embeddings of the unknown classes to train an unknowns-informed ZS-OSR classifier.
arXiv Detail & Related papers (2023-07-07T06:54:21Z) - OpenLDN: Learning to Discover Novel Classes for Open-World
Semi-Supervised Learning [110.40285771431687]
Semi-supervised learning (SSL) is one of the dominant approaches to address the annotation bottleneck of supervised learning.
Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data.
This work introduces OpenLDN that utilizes a pairwise similarity loss to discover novel classes.
arXiv Detail & Related papers (2022-07-05T18:51:05Z) - Collaborative Intelligence Orchestration: Inconsistency-Based Fusion of
Semi-Supervised Learning and Active Learning [60.26659373318915]
Active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, means to alleviate the data-hungry problem.
We propose an innovative inconsistency-based virtual adversarial algorithm to further investigate SSL-AL's potential superiority.
Two real-world case studies visualize the practical industrial value of applying and deploying the proposed data sampling algorithm.
arXiv Detail & Related papers (2022-06-07T13:28:43Z) - Boosting Discriminative Visual Representation Learning with
Scenario-Agnostic Mixup [54.09898347820941]
We propose Scenario-Agnostic Mixup (SAMix) for both Self-supervised Learning (SSL) and supervised learning (SL) scenarios.
Specifically, we hypothesize and verify the objective function of mixup generation as optimizing local smoothness between two mixed classes.
A label-free generation sub-network is designed, which effectively provides non-trivial mixup samples and improves transferable abilities.
arXiv Detail & Related papers (2021-11-30T14:49:59Z) - Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for
Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z) - Self-Supervised Learning of Graph Neural Networks: A Unified Review [50.71341657322391]
Self-supervised learning is emerging as a new paradigm for making use of large amounts of unlabeled samples.
We provide a unified review of different ways of training graph neural networks (GNNs) using SSL.
Our treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms.
arXiv Detail & Related papers (2021-02-22T03:43:45Z) - Consistency Regularization with Generative Adversarial Networks for
Semi-Supervised Learning [2.9707483702447783]
Generative Adversarial Network (GAN) based semi-supervised learning (SSL) approaches are shown to improve classification performance by utilizing a large number of unlabeled samples.
However, their performance still lags behind the state-of-the-art non-GAN based SSL approaches.
We identify that the main reason for this is the lack of consistency in class probability predictions on the same image under local perturbations.
arXiv Detail & Related papers (2020-07-08T01:47:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.