Synthetic Unknown Class Learning for Learning Unknowns
- URL: http://arxiv.org/abs/2111.08062v1
- Date: Mon, 15 Nov 2021 19:46:41 GMT
- Title: Synthetic Unknown Class Learning for Learning Unknowns
- Authors: Jaeyeon Jang
- Abstract summary: This paper proposes a novel synthetic unknown class learning method.
It generates unknown-like samples while maintaining diversity between the generated samples and learns these samples.
Experiments on several benchmark datasets show that the proposed method significantly outperforms other state-of-the-art approaches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the open set recognition (OSR) problem, where the goal
is to correctly classify samples of known classes while detecting unknown
samples to reject. In the OSR problem, "unknown" is assumed to have infinite
possibilities because we have no knowledge about unknowns until they emerge.
Intuitively, the more an OSR system explores the possibilities of unknowns, the
more likely it is to detect unknowns. Thus, this paper proposes a novel
synthetic unknown class learning method that generates unknown-like samples
while maintaining diversity between the generated samples and learns these
samples. In addition to this unknown sample generation process, knowledge
distillation is introduced to provide room for learning synthetic unknowns. By
learning the unknown-like samples and known samples in an alternating manner,
the proposed method can not only experience diverse synthetic unknowns but also
reduce overgeneralization with respect to known classes. Experiments on several
benchmark datasets show that the proposed method significantly outperforms
other state-of-the-art approaches. It is also shown that realistic unknown
digits can be generated and learned via the proposed method after training on
the MNIST dataset.
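The alternating training scheme sketched in the abstract can be illustrated with a toy example. The snippet below is a minimal sketch, not the paper's implementation: it stands in for the deep generator with simple interpolation-based "unknown-like" sampling, omits the knowledge-distillation component entirely, and uses a linear softmax classifier with an extra (K+1)-th "unknown" output. All names, constants, and the data itself are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy known classes: 2-D Gaussian blobs (hypothetical stand-ins for real data).
K = 2
means = np.array([[-2.0, 0.0], [2.0, 0.0]])
X_known = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in means])
y_known = np.repeat(np.arange(K), 100)

def synth_unknowns(n):
    # Crude proxy for the paper's generative process: sample diverse
    # "unknown-like" points by interpolating between class means and adding
    # noise, so they fall in the region between the known clusters.
    t = rng.uniform(0.3, 0.7, size=(n, 1))
    base = t * means[0] + (1 - t) * means[1]
    return base + rng.normal(0.0, 0.5, size=(n, 2))

# Linear softmax classifier with K known outputs plus one "unknown" output.
W = np.zeros((3, K + 1))  # rows: x-weight, y-weight, bias

def logits(X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ W

def train_step(X, y, lr=0.1):
    # One full-batch cross-entropy gradient step.
    global W
    Xb = np.hstack([X, np.ones((len(X), 1))])
    z = Xb @ W
    z -= z.max(axis=1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    W -= lr * Xb.T @ (p - np.eye(K + 1)[y]) / len(X)

# Learn known samples and synthetic unknowns in an alternating manner.
for _ in range(200):
    train_step(X_known, y_known)
    X_unk = synth_unknowns(100)
    train_step(X_unk, np.full(100, K))

def predict(X):
    # Output index K means "reject as unknown".
    return logits(X).argmax(axis=1)
```

After training, points inside the known clusters map to their class index, while points in the in-between region that the synthetic sampler covered map to the reject output, which is the intuition behind exposing the classifier to diverse unknown-like samples.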
Related papers
- Unveiling the Unknown: Conditional Evidence Decoupling for Unknown Rejection [8.78242987271299]
We focus on training an open-set object detector under the condition of scarce training samples.
Under this challenging scenario, the decision boundaries of unknowns are difficult to learn and often ambiguous.
We develop a novel open-set object detection framework, which delves into conditional evidence decoupling for the unknown rejection.
arXiv Detail & Related papers (2024-06-26T15:48:24Z)
- Exploring Diverse Representations for Open Set Recognition [51.39557024591446]
Open set recognition (OSR) requires the model to classify samples that belong to the closed set while rejecting unknown samples during testing.
Currently, generative models often perform better than discriminative models in OSR.
We propose a new model, namely Multi-Expert Diverse Attention Fusion (MEDAF), that learns diverse representations in a discriminative way.
arXiv Detail & Related papers (2024-01-12T11:40:22Z)
- The Devil is in the Wrongly-classified Samples: Towards Unified Open-set Recognition [61.28722817272917]
Open-set Recognition (OSR) aims to identify test samples whose classes are not seen during the training process.
Recently, Unified Open-set Recognition (UOSR) has been proposed to reject not only unknown samples but also known but wrongly classified samples.
arXiv Detail & Related papers (2023-02-08T11:34:04Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Learning to Imagine: Diversify Memory for Incremental Learning using Unlabeled Data [69.30452751012568]
We develop a learnable feature generator to diversify exemplars by adaptively generating diverse counterparts of exemplars.
We introduce semantic contrastive learning to enforce the generated samples to be semantic consistent with exemplars.
Our method does not bring any extra inference cost and outperforms state-of-the-art methods on two benchmarks.
arXiv Detail & Related papers (2022-04-19T15:15:18Z)
- Non-Exhaustive Learning Using Gaussian Mixture Generative Adversarial Networks [3.040775019394542]
We propose a new online non-exhaustive learning model, namely, Non-Exhaustive Gaussian Mixture Generative Adversarial Networks (NE-GM-GAN).
Our proposed model synthesizes latent representation over a deep generative model, such as GAN, for incremental detection of instances of emerging classes in the test data.
arXiv Detail & Related papers (2021-06-28T00:20:22Z)
- Teacher-Explorer-Student Learning: A Novel Learning Method for Open Set Recognition [0.0]
Teacher-explorer-student (T/E/S) learning aims to reject unknown samples while minimizing the loss of classification performance on known samples.
In this novel learning method, overgeneralization of deep learning classifiers is significantly reduced by exploring various possibilities of unknowns.
arXiv Detail & Related papers (2021-03-23T22:32:32Z)
- Learning Open Set Network with Discriminative Reciprocal Points [70.28322390023546]
Open set recognition aims to simultaneously classify samples from predefined classes and identify the rest as 'unknown'.
In this paper, we propose a new concept, Reciprocal Point, which is the potential representation of the extra-class space corresponding to each known category.
Based on the bounded space constructed by reciprocal points, the risk of unknowns is reduced through multi-category interaction.
arXiv Detail & Related papers (2020-10-31T03:20:31Z)
- Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known samples by forcing different latent features to approximate conditional Gaussian distributions.
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
arXiv Detail & Related papers (2020-08-12T06:23:49Z)
- Conditional Gaussian Distribution Learning for Open Set Recognition [10.90687687505665]
We propose Conditional Gaussian Distribution Learning (CGDL) for open set recognition.
In addition to detecting unknown samples, this method can also classify known samples by forcing different latent features to approximate different Gaussian models.
Experiments on several standard image datasets reveal that the proposed method significantly outperforms the baseline method and achieves new state-of-the-art results.
arXiv Detail & Related papers (2020-03-19T14:32:08Z)
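The class-conditional Gaussian idea shared by the CPGM and CGDL summaries above can be illustrated with a small sketch. This is a hedged toy example, not either paper's implementation: it fits one diagonal Gaussian per known class directly on 2-D "features" (standing in for learned latent features) and rejects test points whose best class log-likelihood falls below a threshold. All names, data, and the threshold value are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy per-class "features": Gaussian blobs standing in for latent features.
means_true = np.array([[0.0, 0.0], [5.0, 5.0]])
X = np.vstack([rng.normal(m, 0.5, size=(200, 2)) for m in means_true])
y = np.repeat([0, 1], 200)

# Fit one diagonal Gaussian per known class from its samples.
mus = np.array([X[y == k].mean(axis=0) for k in range(2)])
vars_ = np.array([X[y == k].var(axis=0) for k in range(2)])

def log_likelihoods(x):
    # Log density of point x under each class-conditional diagonal Gaussian.
    return np.array([
        -0.5 * np.sum(np.log(2.0 * np.pi * v) + (x - m) ** 2 / v)
        for m, v in zip(mus, vars_)
    ])

def classify(x, threshold=-20.0):
    # Predict the most likely known class, or reject as unknown (-1)
    # when even the best class explains the point poorly.
    ll = log_likelihoods(x)
    k = int(ll.argmax())
    return k if ll[k] > threshold else -1
```

Points near a fitted class mean are assigned that class, while points far from every Gaussian fall below the likelihood threshold and are rejected, mirroring how these methods use the latent distributions for both classification and unknown detection.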
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.